The Video Browser Showdown (VBS):
The VBS is an international video content search competition that evaluates the state of the art of interactive video retrieval systems. It has been held annually as a special event at the International Conference on MultiMedia Modeling (MMM) since 2012. It aims to push research on large-scale video retrieval systems that are effective, fast, and easy to use for content search scenarios that are truly relevant in practice (e.g., known-item search in an ever-increasing video archive, as is nowadays ubiquitous in many domains of our digital world).
Experts and Novices:
The VBS consists of an expert session and a novice session. In the expert session, the developers of the systems themselves try to solve different types of content search queries, which are issued in an ad-hoc manner. Although the dataset is available to the researchers several months before the actual competition, the queries are unknown in advance and are issued on-site. In the novice session, volunteers from the MMM conference audience (without help from the experts) are required to solve another set of tasks. This should ensure that the interactive video retrieval tools improve not only in terms of retrieval performance but also in terms of usability (i.e., ease of use).
Content Search Tasks:
For Known-Item Search (KIS) tasks, a single 20-second video clip is randomly selected from the dataset and presented visually with the projector on-site. The participants need to find exactly the single instance presented, as fast as possible. A variation of this task type is textual KIS, where instead of a visual presentation, the target segment is described only by text.
For Ad-hoc Video Search (AVS) tasks, a rather general description of many shots is presented (e.g., “Find all shots showing cars in front of trees”), and participants need to find as many correct examples (instances) as possible, as fast as possible.
Each query has a time limit of 5 minutes (7 for textual KIS tasks) and is rewarded on success with a score that depends on several factors: the required search time, the number of false submissions (which are penalized), and, for AVS tasks, the number of different instances found. For AVS tasks it is also considered how many different “ranges” were submitted; for example, many different but temporally close shots in the same video count much less than several different shots from different videos.
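The announcement does not give the exact scoring formula, only the factors that influence it. As a rough illustration of how search time, false-submission penalties, and the number of correct instances could be combined, here is a hypothetical sketch (the function name, the linear time decay, and all constants are assumptions, not the official VBS scoring):

```python
def score_task(time_used_s, time_limit_s, wrong_submissions,
               correct_instances=1, max_score=100.0, wrong_penalty=10.0):
    """Hypothetical VBS-style score: decays linearly with elapsed time,
    scales with the number of correct instances (relevant for AVS tasks),
    and subtracts a fixed penalty per false submission. Floored at zero.
    All constants are illustrative assumptions."""
    time_factor = max(0.0, 1.0 - time_used_s / time_limit_s)
    raw = max_score * time_factor * correct_instances
    return max(0.0, raw - wrong_penalty * wrong_submissions)

# Example: a KIS task solved after 60 s of a 300 s limit,
# with one false submission along the way.
print(score_task(60, 300, wrong_submissions=1))  # 70.0
```

Under this sketch, a fast solve with few wrong submissions scores near the maximum, while running out the clock or submitting many wrong answers drives the score toward zero, matching the qualitative behavior described above.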
VBS2020 will use the V3C1 dataset in collaboration with TRECVID, which consists of 7,475 video files amounting to 1,000 hours of video content (1,082,659 predefined segments) and 1.3 TB in size.
To participate, please submit an extended demo paper (4-6 pages in Springer LNCS format) via the MMM 2020 Submission System (please select “MMM2020_VBS” as the track). The submission should include a detailed description of the video search tool (including a screenshot) and describe how it supports interactive search in video. Submissions will be peer-reviewed to ensure maximum quality. Accepted papers will be published in the MMM conference proceedings and should also be presented as a poster during the VBS session.