Listing by keyword "German"
1 - 4 of 4
- Journal article: Assessing the Attitude Towards Artificial Intelligence: Introduction of a Short Measure in German, Chinese, and English Language (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021). Sindermann, Cornelia; Sha, Peng; Zhou, Min; Wernicke, Jennifer; Schmitt, Helena S.; Li, Mei; Sariyska, Rayna; Stavrou, Maria; Becker, Benjamin; Montag, Christian. In the context of (digital) human–machine interaction, people increasingly encounter artificial intelligence in everyday life. Some embrace these technological advances with a positive attitude; others are sceptical and foresee substantial problems arising from such uses of technology. The aim of the present study was to introduce a short measure to assess the Attitude Towards Artificial Intelligence (ATAI scale) in the German, Chinese, and English languages. Participants from Germany (N = 461; 345 females), China (N = 413; 145 females), and the UK (N = 84; 65 females) completed the ATAI scale, whose factorial structure was tested and compared across the samples. Participants from Germany and China were additionally asked about their willingness to interact with/use self-driving cars, Siri, Alexa, the social robot Pepper, and the humanoid robot Erica, which are representative popular artificial intelligence products. The results showed that the five-item ATAI scale comprises two negatively associated factors assessing (1) acceptance and (2) fear of artificial intelligence. The factor structure was similar across the German, Chinese, and UK samples. The ATAI scale was further validated: items on the willingness to use specific artificial intelligence products were positively associated with the ATAI Acceptance subscale and negatively with the ATAI Fear subscale in both the German and Chinese samples. In conclusion, we introduce a short, reliable, and valid measure of the attitude towards artificial intelligence in the German, Chinese, and English languages. (A toy subscale-scoring sketch follows after this list.)
- Journal article: Draw mir a Sheep: A Supersense-based Analysis of German Case and Adposition Semantics (KI - Künstliche Intelligenz: Vol. 35, No. 0, 2021). Prange, Jakob; Schneider, Nathan. Adpositions and case markers are ubiquitous in natural language and express a wide range of meaning relations that can be of crucial relevance for many NLP and AI tasks. However, capturing their semantics in a comprehensive yet concise, as well as cross-linguistically applicable, way has remained a challenge over the years. To address this, we adapt the largely language-agnostic SNACS framework to German, defining language-specific criteria for identifying adpositional expressions and piloting a supersense-annotated German corpus. We compare our approach with prior work on both German and multilingual adposition semantics, and discuss our empirical findings in the context of potential applications. (A toy supersense-labeling sketch follows after this list.)
- Text document: Exploring the Use of the Pronoun I in German Academic Texts with Machine Learning (INFORMATIK 2020, 2021). Andresen, Melanie; Knorr, Dagmar. The use of the pronoun ich ('I') in academic language is a source of constant debate and a frequent cause of insecurity for students. We explore manually annotated instances of I from a German learner corpus. Using machine learning techniques, we investigate to what extent it is possible to automatically distinguish between different types of I usage (author I vs. narrator I). We additionally inspect which context words are good indicators of one type or the other. The results show that automatic classification is not straightforward; although the classifier's output is not perfect, it would greatly facilitate manual annotation. The distinctive words are in line with previous research and indicate that the author I is the more homogeneous class. (A toy context-word classifier sketch follows after this list.)
- Conference paper: On the State of German (Abstractive) Text Summarization (BTW 2023, 2023). Aumiller, Dennis; Fan, Jing; Gertz, Michael. With recent advancements in the area of Natural Language Processing, the focus is slowly shifting from a purely English-centric view towards more language-specific solutions, including German. Text summarization systems, which transform long input documents into compressed and more digestible summary texts, are especially useful for businesses analyzing their growing amounts of textual data. In this work, we assess the particular landscape of German abstractive text summarization and investigate why practically useful solutions for abstractive text summarization are still absent in industry. Our focus is two-fold, analyzing a) training resources and b) publicly available summarization systems. We show that popular existing datasets exhibit crucial flaws in their assumptions about the original sources, which frequently leads to detrimental effects on system generalization and evaluation biases. We confirm that for the most popular training dataset, MLSUM, over 50% of the training set is unsuitable for abstractive summarization purposes. Furthermore, available systems frequently fail to compare against simple baselines and ignore more effective and efficient extractive summarization approaches. We attribute poor evaluation quality to a variety of factors that are investigated in more detail in this work: a lack of qualitative (and diverse) gold data considered for training, understudied (and untreated) positional biases in some of the existing datasets, and the lack of easily accessible and streamlined pre-processing strategies or analysis tools. We therefore provide a comprehensive assessment of available models on the cleaned versions of the datasets and find that this can lead to a reduction of more than 20 ROUGE-1 points during evaluation. As a cautious reminder for future work, we finally highlight the problems of relying solely on n-gram based scoring methods by presenting particularly problematic failure cases. Code for dataset filtering and reproducing results can be found online: https://github.com/anonymized-user/anonymized-repository (A simplified ROUGE-1 sketch follows after this list.)
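The ATAI entry above describes a five-item scale with two negatively associated factors (acceptance and fear) that was validated against willingness-to-use ratings. The sketch below illustrates, under stated assumptions only, how two subscale scores and their validation correlations could be computed; the item-to-factor assignment, the 0-10 response format, and all data here are invented for illustration and are not the published scoring key.

```python
# Illustrative subscale scoring for a short two-factor scale (not the
# official ATAI scoring key). All data below are randomly generated.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents

# Hypothetical responses to five items, e.g. 0-10 agreement ratings.
items = rng.integers(0, 11, size=(n, 5)).astype(float)

acceptance = items[:, [0, 1]].mean(axis=1)   # assumed acceptance items
fear = items[:, [2, 3, 4]].mean(axis=1)      # assumed fear items

# Hypothetical willingness-to-use rating for one AI product (e.g. Siri).
willingness = rng.integers(0, 11, size=n).astype(float)

def pearson(x, y):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# The validation logic described in the abstract checks correlation signs:
# willingness should correlate positively with Acceptance, negatively with Fear.
print("r(Acceptance, Fear)        =", pearson(acceptance, fear))
print("r(Acceptance, willingness) =", pearson(acceptance, willingness))
print("r(Fear, willingness)       =", pearson(fear, willingness))
```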
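For the SNACS entry, the snippet below gives a toy impression of what supersense labels over German adpositions look like. The supersense names (Locus, Goal, Instrument) belong to the SNACS inventory, but the example sentences, the lexicon, and the lookup approach are simplified assumptions made here for illustration; they do not reproduce the paper's German-specific annotation criteria.

```python
# Toy, lexicon-based supersense labeling of German adpositions.
# A plain lookup like this ignores context and construal, which is exactly
# the ambiguity that makes adposition supersense annotation hard.
ASSUMED_LEXICON = {"in": "Locus", "nach": "Goal", "mit": "Instrument"}

def tag_adpositions(sentence: str) -> list[tuple[str, str]]:
    """Return (token, assumed supersense) pairs for known adpositions."""
    tags = []
    for token in sentence.lower().split():
        token = token.strip(".,!?")
        if token in ASSUMED_LEXICON:
            tags.append((token, ASSUMED_LEXICON[token]))
    return tags

for s in ["Sie wohnt in Berlin.",
          "Er fährt nach Hause.",
          "Sie öffnet die Tür mit dem Schlüssel."]:
    print(s, "->", tag_adpositions(s))
```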
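The pronoun-I entry describes classifying occurrences of ich by their surrounding context words. The following sketch shows one conventional way such a context-word classifier could be set up with scikit-learn; the four mini-examples and their author/narrator labels are invented for illustration and are not taken from the learner corpus used in the paper.

```python
# Minimal sketch of a context-word classifier for "author I" vs. "narrator I".
# The training examples are invented; real work uses an annotated learner corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

contexts = [
    "im folgenden abschnitt werde ich die methode beschreiben",   # author I
    "abschliessend fasse ich die ergebnisse zusammen",            # author I
    "als kind habe ich oft buecher gelesen",                      # narrator I
    "ich erinnere mich an meinen ersten schultag",                # narrator I
]
labels = ["author", "author", "narrator", "narrator"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(contexts)     # bag-of-words context features

clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels)

# Context words with the strongest weights act as indicators of each class
# (positive weights point to the alphabetically later class, "narrator").
feature_names = vectorizer.get_feature_names_out()
weights = sorted(zip(clf.coef_[0], feature_names))
print("most author-like words:  ", [w for _, w in weights[:3]])
print("most narrator-like words:", [w for _, w in weights[-3:]])
```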
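Finally, the summarization survey cautions against relying solely on n-gram based scoring and reports ROUGE-1 differences of more than 20 points on cleaned data. The snippet below is a bare-bones re-implementation of ROUGE-1 F1 (unigram overlap), written only to make concrete what such a score does and does not capture; it is not the evaluation code used in the paper, and the example summaries are invented.

```python
# Simplified ROUGE-1 F1: unigram overlap between candidate and reference.
# Illustrates why a high n-gram score can hide a factual contradiction.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """F1 over overlapping unigrams of two whitespace-tokenized strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "der umsatz stieg 2022 um zehn prozent"
good = "der umsatz stieg 2022 um zehn prozent"
bad = "der umsatz sank 2022 um zehn prozent"   # factually wrong, high overlap

print(rouge1_f1(good, reference))  # 1.0
print(rouge1_f1(bad, reference))   # still ~0.86 despite the contradiction
```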