Wanted:

New representations of motion

ECCV'16 Workshop on:
Brave new ideas for motion representations in videos

Held in conjunction with the European Conference on Computer Vision (ECCV) 2016, Amsterdam.

A big thank you to everyone who attended this very successful workshop.


Description of the workshop and its relevance

In recent years Deep Learning has been a major force of change in most computer vision tasks. In video analysis problems, however, such as action recognition and detection, motion analysis and tracking, shallow architectures remain surprisingly competitive. What is the reason for this conundrum? Larger datasets are part of the solution: the recently proposed Sports-1M dataset has enabled realistic training of large motion networks. Still, the breakthrough has not yet arrived.

Assuming that the recently proposed video datasets are large enough for training deep networks for video, another likely culprit for the standstill in video analysis is the capacity of the existing deep models. More specifically, the existing deep networks for video analysis may not be sophisticated enough to address the complexity of motion information. This makes sense, as videos introduce an exponential increase in complexity compared to static images. Unfortunately, state-of-the-art motion representation models are extensions of existing image representations rather than motion-dedicated ones. Brave, new and motion-specific representations are likely to be needed for a breakthrough in video analysis.

Call for papers with brave new ideas

Attempting to publish a wild but intriguing idea can be daunting, which slows progress. On the one hand, a controversial new idea might be rejected by top-tier conferences without the right experimental justification. On the other hand, researchers may not want to reveal a smart idea too soon for fear of not receiving the proper credit. To address these two factors, the workshop will accept papers of at most 4 pages describing novel, previously unseen ideas, without necessarily requiring exhaustive quantitative justification. Moreover, to make sure proper credit is given in the future, the workshop will follow an open-review process, where all submitted papers should first be uploaded to arXiv.

Expert speakers

To kickstart the discussion we have confirmed speakers from different fields that relate to the understanding and modelling of motion and sequence data: Computer Vision (Ivan Laptev), Machine Learning (Max Welling), Neuroscience (Aman Saleem) and Computer Graphics (Elmar Eisemann).

Topics

The workshop focuses on motion representations related to, but not limited to, the following topics:

- Influence of motion on object recognition, object affordance and scene understanding
- Object and optical flow
- Motion prediction, causal reasoning and forecasting
- Event and action recognition
- Spatio-temporal action localization
- Modeling human motion in videos and video streams
- Motion segmentation and saliency
- Tracking of objects in space and time
- Unsupervised action and actom discovery using ego-motion
- Applications of motion understanding and video dynamics in sports, healthcare, autonomous driving, driver assistance and robotics

Program

Date: Oct 8, 2016.

Time | Event | Topic
8.45 - 9.00 | Welcome to the workshop | Information
9.00 - 9.45 | Invited machine learning speaker: Professor Max Welling | Motion and unsupervised learning
9.45 - 10.00 | Brave new idea | Temporal Convolutional Networks: A Unified Approach to Action Segmentation
10.00 - 10.15 | Break | Coffee
10.15 - 11.00 | Invited neuroscience speaker: Dr. Aman Saleem | Motion from a neuroscience perspective
11.00 - 11.45 | Invited computer vision speaker: Professor Ivan Laptev | Computer vision and motion representations
11.45 - 13.30 | Lunch (on your own) | Poster session
13.30 - 14.15 | Invited graphics speaker: Professor Elmar Eisemann | Motion representations from a graphics perspective
14.15 - 14.30 | Brave new idea | Making a Case for Learning Motion Representations with Phase
14.30 - 14.45 | Break | Coffee
14.45 - 15.00 | Brave new idea | Segmentation Free Object Discovery in Video
15.00 - 15.15 | Brave new idea | Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness
15.15 - 15.30 | Brave new idea | Human Action Recognition without Human
15.30 - 15.45 | Conclusion/Summary | A summary of the workshop and the related literature
15.45 - 17.30 | Poster session | Poster session

Important Dates


Date of the workshop: October 8, 2016

Submission

Constructive discussion

The workshop's goal is a constructive, creative and open conversation. In principle, we will accept all papers. All reviews will be made publicly available. Reviewers can choose to remain anonymous or to reveal their identity to encourage collaboration and positive feedback. Accepted papers will be presented as posters, and a few of the best and bravest will be selected for oral presentation.

Instructions

All papers should use the provided LaTeX template (latex_format.zip) or a similar text processing tool. We accept only PDF files. Papers should be at most 4 pages, including references. An example paper in PDF format can be downloaded here. Paper submission is not anonymous. Authors are expected to upload their papers to arxiv.org first, as we follow an "open review" policy. Reviews will be released online after the paper is accepted. All author names, emails and institutions should be listed just below the title of the paper.

Submit

To submit your paper, use OpenReview.

Proceedings

Accepted papers will be included in the Springer ECCV workshop proceedings.

Registration & venue

The workshop is held in conjunction with the European Conference on Computer Vision (ECCV) 2016 in Amsterdam.

Accepted papers must have at least one registered author (this can be a student). The deadline for early registration is July 31, 2016.

See registration fees and other details: https://www.aanmelder.nl/eccv2016/subscribe.

The workshop will be held at the Oudemanhuispoort location of the University of Amsterdam.
Oudemanhuispoort 4 – 6, 1012 CN Amsterdam.