Listing by keyword "stereotypes"
Results 1 - 2 of 2
- Conference Paper: Development of the InteractionSuitcase in virtual reality to support inter- and transcultural learning processes in English as Foreign Language education (DELFI 2021, 2021)
  Hein, Rebecca; Steinbock, Jeanine; Eisenmann, Maria; Latoschik, Marc Erich; Wienrich, Carolin
  Immersion programs and the experiences they offer learners are irreplaceable. In times of Covid-19, social VR applications can offer enormous potential for the acquisition of inter- and transcultural competencies (ITC). Virtual objects (VO) could initiate communication and reflection processes between learners with different cultural backgrounds and therefore offer an exciting approach. Accordingly, we address the following research questions: (1) What is a sound way to collect objects for the InteractionSuitcase to promote ITC acquisition by means of social VR? (2) For which aspects do students use the objects when developing an ITC learning scenario? (3) Which VO are considered particularly supportive in initiating and facilitating ITC learning? To answer these research questions, the virtual InteractionSuitcase will be designed and implemented. This paper presents the empirical preliminary work and interim results of the development and evaluation of the InteractionSuitcase, its usage, and the significance of this project for Human-Computer Interaction (HCI) and English as Foreign Language (EFL) research.
- Text Document: Measuring Gender Bias in German Language Generation (INFORMATIK 2022, 2022)
  Kraft, Angelie; Zorn, Hans-Peter; Fecht, Pascal; Simon, Judith; Biemann, Chris; Usbeck, Ricardo
  Most existing methods to measure social bias in natural language generation are specified for English language models. In this work, we developed a German regard classifier based on a newly crowd-sourced dataset. Our model matches the test-set accuracy of the original English version. With the classifier, we measured binary gender bias in two large language models. The results indicate a positive bias toward female subjects for a German version of GPT-2 and similar tendencies for GPT-3. Yet, upon qualitative analysis, we found that positive regard partly corresponds to sexist stereotypes. Our findings suggest that the regard classifier should not be used as a single measure but, instead, combined with more qualitative analyses.