Conference Paper
Extracting Production Style Features of Educational Videos with Deep Learning
Document Type
Text/Conference Paper
Date
2022
Publisher
Gesellschaft für Informatik e.V.
Abstract
Driven by the pandemic, the production of videos in educational settings and their availability on learning platforms have enabled new forms of video-based learning. This offers the strong benefit of covering many topics in a variety of design styles and facilitates the learning experience. Consequently, research interest in video-based learning has increased remarkably, with many studies examining the diverse visual properties of videos and their impact on learner engagement and knowledge gain. However, manually analysing educational videos to collect metadata and to classify videos for quality assessment is time-consuming. In this paper, we address the problem of automatically extracting video features related to production design. To this end, we introduce a novel use case for object detection models: recognizing the human embodiment and the type of teaching media used in a video. The results obtained on a small-scale custom dataset show the potential of deep learning models for visual video analysis and pave the way for an automatic video assessment system that reduces the workload of teachers and researchers.
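The abstract does not name a specific architecture, so the following Python sketch only illustrates the general approach it describes: sampling frames from a video and running an object detector over them to collect production-style labels. The detector choice (torchvision's Faster R-CNN), the frame-sampling interval, and the class names (talking_head, slides, whiteboard, animation) are all illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch of the pipeline described in the abstract, assuming a
# torchvision Faster R-CNN detector; the paper's actual model, classes, and
# sampling rate are not given here, so everything named below is illustrative.
import cv2
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Hypothetical production-style classes a fine-tuned detection head could predict.
STYLE_CLASSES = ["talking_head", "slides", "whiteboard", "animation"]


def sample_frames(video_path: str, every_n_seconds: float = 5.0):
    """Yield RGB frames from the video at a fixed sampling interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unavailable
    step = max(1, int(fps * every_n_seconds))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        index += 1
    cap.release()


def detect_per_frame(video_path: str, score_threshold: float = 0.7):
    """Run the detector on sampled frames and return per-frame label lists."""
    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights)
    model.eval()
    preprocess = weights.transforms()

    # With the stock COCO weights the labels index into COCO categories; after
    # fine-tuning on a custom educational-video dataset they would instead
    # index into STYLE_CLASSES.
    results = []
    with torch.no_grad():
        for frame in sample_frames(video_path):
            tensor = preprocess(torch.from_numpy(frame).permute(2, 0, 1))
            output = model([tensor])[0]
            keep = output["scores"] > score_threshold
            results.append(output["labels"][keep].tolist())
    return results
```

Aggregating the per-frame detections, for instance by majority vote over a segment, would then yield video-level production-style metadata such as "slides with talking head".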