Social Signal Interpretation (SSI)
dc.contributor.author | Wagner, Johannes
dc.contributor.author | Lingenfelser, Florian
dc.contributor.author | Bee, Nikolaus
dc.contributor.author | André, Elisabeth
dc.date.accessioned | 2018-01-08T09:15:15Z
dc.date.available | 2018-01-08T09:15:15Z
dc.date.issued | 2011
dc.description.abstract | The development of anticipatory user interfaces is a key issue in human-centred computing. Building systems that allow humans to communicate with a machine in the same natural and intuitive way as they would with each other requires the detection and interpretation of the user's affective and social signals. These are expressed in various and often complementary ways, including gestures, speech, facial expressions, etc. Implementing fast and robust recognition engines is not only a necessary but also a challenging task. In this article, we introduce our Social Signal Interpretation (SSI) tool, a framework designed to support the development of such online recognition systems. We discuss the processing of four modalities, namely audio, video, gesture, and biosignals, with a focus on affect recognition, and explain various approaches for fusing the extracted information into a final decision (see the fusion sketch after this record).
dc.identifier.pissn | 1610-1987
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/11222
dc.publisher | Springer
dc.relation.ispartof | KI - Künstliche Intelligenz: Vol. 25, No. 3
dc.relation.ispartofseries | KI - Künstliche Intelligenz
dc.subject | Affective computing
dc.subject | Human-centred computing
dc.subject | Machine learning
dc.subject | Multimodal fusion
dc.subject | Real-time recognition
dc.subject | Social signal processing
dc.title | Social Signal Interpretation (SSI)
dc.type | Text/Journal Article
gi.citation.endPage | 256
gi.citation.startPage | 251
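
The abstract above describes fusing information extracted from audio, video, gesture, and biosignals into a final decision. As a loose illustration of one common approach to this step, decision-level fusion by weighted averaging of per-modality class probabilities, the following minimal Python sketch may help; the function and variable names are hypothetical assumptions and do not reflect SSI's actual API.

# Minimal sketch of decision-level multimodal fusion: each modality's
# classifier emits a class-probability vector, and the vectors are
# combined by a weighted average. All names are hypothetical; this is
# NOT SSI's actual API.
import numpy as np

def fuse_decisions(probs_per_modality, weights=None):
    """Weighted average of per-modality probability vectors;
    returns (winning class index, fused distribution)."""
    probs = np.asarray(probs_per_modality, dtype=float)  # (modalities, classes)
    if weights is None:
        # Default: treat every modality as equally reliable.
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.asarray(weights, dtype=float) @ probs     # (classes,)
    return int(np.argmax(fused)), fused

# Hypothetical outputs of four per-modality affect classifiers
# (audio, video, gesture, biosignals) over three affective states.
audio     = [0.6, 0.3, 0.1]
video     = [0.5, 0.4, 0.1]
gesture   = [0.2, 0.6, 0.2]
biosignal = [0.4, 0.4, 0.2]

label, fused = fuse_decisions([audio, video, gesture, biosignal],
                              weights=[0.3, 0.3, 0.2, 0.2])
print(label, fused)  # 0 [0.45 0.41 0.14]

The weights encode how much each modality is trusted; setting them per application (or learning them from held-out data) is one of the design choices such fusion schemes leave open.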