Held in conjunction with the Computer Vision and Pattern Recognition (CVPR) 2017 conference. This is the second edition of this workshop, following the successful previous edition at ECCV 2016.
In recent years, deep learning has been a great force of change in most computer vision tasks. In video analysis problems, however, such as action recognition and detection, motion analysis, and tracking, shallow architectures remain surprisingly competitive. What is the reason for this conundrum? Larger datasets are part of the solution. The recently proposed Sports1M dataset has helped in the realistic training of large motion networks. Still, the breakthrough has not yet arrived.
Assuming that the recently proposed video datasets are large enough for training deep networks for video, another likely culprit for the standstill in video analysis is the capacity of the existing deep models. More specifically, the existing deep networks for video analysis may not be sophisticated enough to address the complexity of motion information. This makes sense, as videos introduce exponentially more complexity than static images. Unfortunately, state-of-the-art motion representation models are extensions of existing image representations rather than models dedicated to motion. Brave, new, and motion-specific representations are likely to be needed for a breakthrough in video analysis.
Attempting to publish a wild but intriguing idea can be daunting, resulting in slow progress. On the one hand, a controversial new idea might be rejected by top-tier conferences without the right experimental justification. On the other hand, researchers may not want to reveal a smart idea too soon for fear of not receiving the right credit. To address these two obstacles, the workshop will accept papers of up to 8 pages describing novel, previously unseen ideas without necessarily requiring exhaustive quantitative justification. Authors can also submit 4-page short papers, but those will not be included in the proceedings (please refer to the instructions given below). Moreover, to make sure proper credit is given in the future, the workshop will have an open review process, in which all submitted papers must first be uploaded to arXiv.
To kick-start the discussion we have confirmed invited speakers from different fields; details are given in the schedule below.
The workshop focuses on motion representations related, but not limited, to the following topics:
- Influence of motion on object recognition, object affordance, and scene understanding
- Object and optical flow
- Motion prediction, causal reasoning and forecasting
- Event and action recognition
- Spatio-temporal action localization
- Modeling human motion in videos and video streams
- Motion segmentation and saliency
- Tracking of objects in space and time
- Unsupervised action and actom discovery using ego-motion
- Applications of motion understanding and video dynamics in sports, healthcare, autonomous driving, driver assistance and robotics
Date: 21 July 2017.
Time | Event | Description |
---|---|---|
8.45 - 9.00 | Welcome to the workshop | Information |
9.00 - 10.00 | Invited speaker 1: Dr. Miki Rubinstein | Talk 1 |
10.00 - 10.20 | Break | Coffee |
10.20 - 11.20 | Invited speaker 2: Professor Rene Vidal | Talk 2 |
11.25 - 11.45 | Unsupervised Human Action Detection by Action Matching | Oral Presentation 1 by Basura Fernando, Sareh Shirazi, Stephen Gould |
11.45 - 13.30 | Lunch (on your own) | Poster session |
13.30 - 14.15 | Invited speaker 4: Professor Jiebo Luo | Talk 4 |
14.15 - 14.30 | RATM: Recurrent Attentive Tracking Model | Oral Presentation 2 by Samira Ebrahimi Kahou, Vincent Michalski, Roland Memisevic, Christopher Pal, Pascal Vincent |
14.30 - 14.45 | Break | Coffee |
14.45 - 15.30 | Invited speaker 5: Professor Cees G.M. Snoek | Talk 5 |
15.30 - 15.45 | Interpretable 3D Human Action Analysis with Temporal Convolutional Networks | Oral Presentation 3 by Tae Soo Kim, Austin Reiter |
15.45 - 16.00 | Learning Dynamic GMM for Attention Distribution on Single-face Videos | Oral Presentation 4 by Yun Ren, Zulin Wang, Mai Xu, Haoyu Dong, Shengxi Li |
16.00 - 16.45 | Invited speaker 3: Professor Alan Yuille | Talk 3 |
16.45 - 17.30 | Poster session | Poster session |
Date of the workshop: 21 July 2017
- 4-page paper submission deadline
- 4-page paper acceptance notification
- 8-page paper acceptance notification
- 8-page paper camera-ready deadline
The workshop's goal is a constructive, creative, and open conversation. In principle, we will accept all papers. All reviews will be made publicly available, and reviewers can choose to remain anonymous or to reveal their identity to encourage collaboration and positive feedback. All accepted papers will be presented as posters, and a few of the best and bravest will be selected for oral presentation.
You can submit papers in two formats:
1. Full papers: up to 8 pages of text (excluding references), using the CVPR 2017 camera-ready format as per the instructions given here. Full papers will be included in the proceedings of the CVPR 2017 workshops. The deadline for full paper submission is 7 April 2017.
2. Short papers: 4-page papers, which will also be peer reviewed but will not be included in the proceedings. Please follow the CVPR 2017 camera-ready format as per the instructions given here, but limit your paper to 4 pages excluding references.
All papers should list the authors' names, affiliations, and email addresses in the header, as per the CVPR 2017 camera-ready format. Authors are encouraged to upload their papers to arXiv.
Please use OpenReview to submit your paper.
Will appear soon.
The workshop is held in conjunction with the Computer Vision and Pattern Recognition (CVPR) 2017 conference.
Accepted papers must have at least one registered author (this can be a student).
Venue TBD.
Stratis Gavves, Basura Fernando, Chenliang Xu, Yan Yan, Hakan Bilen, Xuming He, Michael Ying Yang, Jan van Gemert.