Title:
Scaling up SoccerNet with multi-view spatial localization and re-identification.
Source:
Scientific Data; 6/21/2022, Vol. 9 Issue 1, p1-9, 9p
Database:
Complementary Index

Soccer videos are a rich playground for computer vision, involving many elements such as players, lines, and specific objects. Hence, to capture the richness of this sport and allow for fine-grained automated analyses, we release SoccerNet-v3, a major extension of the SoccerNet dataset, providing a wide variety of spatial annotations and cross-view correspondences. SoccerNet's broadcast videos contain replays of important actions, allowing us to retrieve the same action from different viewpoints. We annotate those live and replay frames showing the same moments with exhaustive local information. Specifically, we label lines, goal parts, players, referees, teams, salient objects, and jersey numbers, and we establish player correspondences between the views. This yields 1,324,732 annotations on 33,986 soccer images, making SoccerNet-v3 the largest dataset for multi-view soccer analysis. Derived tasks such as camera calibration, player localization, team discrimination, and multi-view re-identification may benefit from these annotations, which can in turn sustain practical applications in augmented reality and soccer analytics. Finally, we provide Python code to easily download our data and access our annotations.

Measurement(s): Localization of soccer features
Technology Type(s): Manual annotations

[ABSTRACT FROM AUTHOR]
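The abstract mentions Python code for downloading the data; the sketch below shows how such a download might look with the publicly available SoccerNet development kit (pip install SoccerNet). SoccerNetDownloader is the kit's documented entry point, but the exact task and file identifiers used here ("frames", "Labels-v3.json") are assumptions to verify against the repository documentation.

# Minimal sketch: fetching SoccerNet-v3 frames and labels with the
# SoccerNet development kit (pip install SoccerNet). The task name
# "frames" and the file name "Labels-v3.json" are assumed identifiers;
# check the official repository for the exact names.
from SoccerNet.Downloader import SoccerNetDownloader

# Local directory where the dataset will be stored.
downloader = SoccerNetDownloader(LocalDirectory="path/to/SoccerNet")

# Annotated live/replay action frames for the standard splits.
downloader.downloadDataTask(task="frames", split=["train", "valid", "test"])

# Per-game SoccerNet-v3 annotation files.
downloader.downloadGames(files=["Labels-v3.json"], split=["train", "valid", "test"])

Since all downloads resolve relative to LocalDirectory, later annotation-loading code can locate frames and label files by joining paths under that single root.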
