Code underlying the publication: "Are current long-term video understanding datasets long-term?"
doi: 10.4121/6bdd2eda-9f73-430c-9e60-685e810d6333
Many real-world applications, from sports analysis to surveillance, benefit from automatic long-term action recognition. In the current deep learning paradigm for automatic action recognition, it is imperative that models are trained and tested on datasets and tasks that evaluate whether such models actually learn and reason over long-term information. In this work, we propose a method to evaluate how suitable a video dataset is for evaluating models for long-term action recognition. To this end, we define long-term actions by excluding all videos that can be correctly recognized using solely short-term information. We apply this definition to existing long-term classification tasks on three popular real-world datasets, namely Breakfast, CrossTask and LVU, to determine whether these datasets truly evaluate long-term recognition. Our method involves conducting user studies in which we ask humans to annotate videos from these datasets. Our study reveals that these datasets can be effectively solved using shortcuts based on short-term information.

In this repository, we provide the accompanying code and data. The code includes the HTML files for the user studies and the data analysis. The data includes the input to the user studies (e.g., video URLs) and the responses collected on Amazon Mechanical Turk.
- 2024-05-24: first online, published, posted
TNO, Netherlands Organisation for Applied Scientific Research, Intelligent Imaging Group
DATA
To access the source code, use the following command:
git clone https://data.4tu.nl/v3/datasets/6bcbfb60-7eb0-4380-9690-26e6f3b2f174.git "longterm_datasets"
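As an illustration of how the collected Mechanical Turk responses might be analyzed after cloning, the sketch below loads a response file and computes per-dataset human accuracy. This is a minimal sketch, not the repository's actual analysis code: the file name and the columns "dataset", "worker_answer" and "ground_truth" are hypothetical placeholders, so inspect the cloned data files for the actual layout.

# Minimal sketch (assumptions labeled): compute the fraction of videos that
# annotators solved per dataset. File name and column names are hypothetical.
import pandas as pd

# Hypothetical path inside the cloned repository.
responses = pd.read_csv("longterm_datasets/responses.csv")

# Mark each annotation as correct when the worker's answer matches the label.
responses["correct"] = responses["worker_answer"] == responses["ground_truth"]

# Average per dataset (e.g., Breakfast, CrossTask, LVU): a high value suggests
# the videos can be recognized from the short-term information shown to workers.
accuracy = responses.groupby("dataset")["correct"].mean()
print(accuracy)

A grouped mean like this mirrors the paper's question directly: if human accuracy from short-term clips is high, the task is solvable without long-term reasoning.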