Listing by author "Leusmann, Jan"
1 - 2 of 2
- Conference paper: KnuckleTouch: Enabling Knuckle Gestures on Capacitive Touchscreens using Deep Learning (Mensch und Computer 2019 - Tagungsband, 2019)
  Schweigert, Robin; Leusmann, Jan; Hagenmayer, Simon; Weiß, Maximilian; Le, Huy Viet; Mayer, Sven; Bulling, Andreas
  While mobile devices have become essential for social communication and have paved the way for work on the go, their interactive capabilities are still limited to simple touch input. A promising enhancement for touch interaction is knuckle input, but recognizing knuckle gestures robustly and accurately remains challenging. We present a method to differentiate between 17 finger and knuckle gestures based on a long short-term memory (LSTM) machine learning model. Furthermore, we introduce an open-source approach that is ready to deploy on commodity touch-based devices. The model was trained on a new dataset that we collected in a mobile interaction study with 18 participants. We show that our method achieves an accuracy of 86.8% in recognizing one of the 17 gestures and an accuracy of 94.6% in differentiating between finger and knuckle. In our evaluation study, we validated our models and found that the LSTM gesture recognition achieved an accuracy of 88.6%. We show that KnuckleTouch can be used to improve input expressiveness and to provide shortcuts to frequently used functions.
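  The abstract describes sequence classification with an LSTM over recorded touch data. The following is a minimal, hypothetical sketch of such a classifier in Keras; the sequence length, per-frame features (x, y, pressure, contact area), and layer sizes are illustrative assumptions, not the published KnuckleTouch architecture. Only the 17-class output is taken from the abstract.

  ```python
  # Sketch of an LSTM touch-gesture classifier in the spirit of KnuckleTouch.
  # SEQ_LEN, NUM_FEATURES, and the layer sizes are assumptions for illustration.
  import numpy as np
  import tensorflow as tf

  NUM_GESTURES = 17   # from the abstract: 17 finger and knuckle gestures
  SEQ_LEN = 50        # assumed: touch frames per gesture sample
  NUM_FEATURES = 4    # assumed: x, y, pressure, contact area per frame

  model = tf.keras.Sequential([
      tf.keras.layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
      tf.keras.layers.Masking(mask_value=0.0),  # zero-padded variable-length gestures
      tf.keras.layers.LSTM(64),                 # encode the touch sequence
      tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
  ])
  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])

  # Dummy data standing in for recorded touch sequences and gesture labels.
  X = np.random.rand(32, SEQ_LEN, NUM_FEATURES).astype("float32")
  y = np.random.randint(0, NUM_GESTURES, size=32)
  model.fit(X, y, epochs=1, verbose=0)
  ```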
- Conference paper: Understanding Visual-Haptic Integration of Avatar Hands using a Fitts' Law Task in Virtual Reality (Mensch und Computer 2019 - Tagungsband, 2019)
  Schwind, Valentin; Leusmann, Jan; Henze, Niels
  Virtual reality (VR) is becoming increasingly ubiquitous for interacting with digital content and often requires renderings of avatars, as they enable improved spatial localization and high levels of presence. Previous work shows that visual-haptic integration of virtual avatars depends on body ownership and spatial localization in VR. However, there are differing conclusions about how and which stimuli of one's own appearance are integrated into one's own body scheme. In this work, we investigate whether systematic changes of the model and texture of a user's avatar affect input performance, measured in a two-dimensional Fitts' law target selection task. Interestingly, we found that throughput remained constant across our conditions and that neither the model nor the texture of the avatar significantly affected the average time to complete the task, even when participants felt different levels of presence and body ownership. In line with previous work, we found that the illusion of virtual limb ownership does not necessarily correlate with the degree to which vision and haptics are integrated into one's own body scheme. Our work supports findings indicating that body ownership and spatial localization are potentially independent mechanisms in visual-haptic integration.
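  The throughput measure in this abstract comes from Fitts' law. As a reference, here is a small sketch of the Shannon formulation commonly used for 2D target selection tasks; the specific numbers are illustrative, and the paper may use a different variant (for example, one based on effective target width).

  ```python
  # Sketch of the Shannon formulation of Fitts' law and the throughput metric.
  # The distance, width, and movement time below are illustrative values only.
  import math

  def index_of_difficulty(distance: float, width: float) -> float:
      """Index of difficulty in bits: log2(D/W + 1), Shannon formulation."""
      return math.log2(distance / width + 1)

  def throughput(distance: float, width: float, movement_time: float) -> float:
      """Throughput in bits/s: index of difficulty divided by movement time (s)."""
      return index_of_difficulty(distance, width) / movement_time

  # Example: 200 mm target distance, 20 mm target width, 0.6 s selection time.
  print(throughput(200, 20, 0.6))  # ~5.77 bits/s
  ```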