Listing by author "Glinka, Katrin"
- Conference paper: "Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank" (Mensch und Computer 2023 - Tagungsband, 2023). Sipos, Lars; Schäfer, Ulrike; Glinka, Katrin; Müller-Birn, Claudia.
  Abstract: Explainable Artificial Intelligence (XAI) is concerned with making the decisions of AI systems interpretable to humans. Explanations are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts, i.e., art historians. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use. We hope that this extension will enable HCI researchers and practitioners to use the XAIQB in practice and may provide a basis for future studies on the identification of explanation needs in different contexts. (A sketch of representing such a question bank as a simple data structure follows this listing.)
- Journal article: "The Individual in the Data — the Aspect of Personal Relevance in Designing Casual Data Visualisations" (i-com: Vol. 16, No. 3, 2017). Meier, Sebastian; Glinka, Katrin.
  Abstract: Over the last two decades, data visualisation has diffused into the broader realm of mass communication. Before this shift, tools and displays of data-driven geographic and information visualisation were mostly used in expert contexts. By now, they are also used in casual contexts, for example on newspaper websites, government data portals, and many other public outlets. This diversification of the audience poses new challenges within the visualisation community. In this paper we propose personal relevance as one factor to be taken into account when designing casual data visualisations, which are meant for communication with non-experts. We develop a conceptual model and present a related set of design techniques for interactive web-based visualisations that are aimed at activating personal relevance. We discuss our proposed techniques by applying them to a use case on the visualisation of air pollution in London (UK).
- Workshop paper: "Privacy Needs Reflection: Conceptional Design Rationales for Privacy-Preserving Explanation User Interfaces" (Mensch und Computer 2021 - Workshopband, 2021). Sörries, Peter; Müller-Birn, Claudia; Glinka, Katrin; Boenisch, Franziska; Margraf, Marian; Sayegh-Jodehl, Sabine; Rose, Matthias.
  Abstract: The application of machine learning (ML) in the medical domain has recently received a lot of attention. However, the constantly growing need for data in such ML-based approaches raises many privacy concerns, particularly when data originate from vulnerable groups, for example, people with a rare disease. In this context, a challenging but promising approach is the design of privacy-preserving computation technologies (e.g. differential privacy). However, design guidance on how to implement such approaches in practice has been lacking. In our research, we explore these challenges in the design process by involving stakeholders from medicine, security, ML, and human-computer interaction, as well as patients themselves. We emphasize the suitability of reflective design in this context by considering the concept of privacy by design. Based on a real-world use case situated in the healthcare domain, we explore the existing privacy needs of our main stakeholders, i.e. medical researchers or physicians and patients. Stakeholder needs are illustrated within two scenarios that help us to reflect on contradictory privacy needs. This reflection process informs conceptional design rationales and our proposal for privacy-preserving explanation user interfaces. We propose that the latter support both patients' privacy preferences for a meaningful data donation and experts' understanding of the privacy-preserving computation technology employed. (A minimal sketch of the differential-privacy mechanism mentioned here follows this listing.)
- Conference paper: "To Classify is to Interpret: Building Taxonomies from Heterogeneous Data through Human-AI Collaboration" (Mensch und Computer 2023 - Tagungsband, 2023). Meier, Sebastian; Glinka, Katrin.
  Abstract: Taxonomy building is a task that requires interpreting and classifying data within a given frame of reference, which comes into play in many areas of application that deal with knowledge and information organization. In this paper, we explore how taxonomy building can be supported with systems that integrate machine learning (ML). However, relying only on black-boxed ML-based systems to automate taxonomy building would sideline the users' expertise. We propose an approach that allows the user to iteratively take into account multiple models' outputs as part of their sensemaking process. We implemented our approach in two real-world use cases. The work is positioned in the context of HCI research that investigates the design of ML-based systems with an emphasis on enabling human-AI collaboration. (A sketch of how model outputs might feed such a sensemaking process follows this listing.)
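The entry "Identifying Explanation Needs of End-users" above extends the XAI Question Bank with new questions and expanded descriptions. The minimal Python sketch below shows one way such a question bank could be represented and queried when mapping observed user utterances to explanation needs; the category labels, question texts, and descriptions are illustrative placeholders, not the actual XAIQB content.

```python
from dataclasses import dataclass

@dataclass
class XAIQuestion:
    category: str     # illustrative grouping label, not an official XAIQB category
    question: str     # a prototypical end-user question
    description: str  # expanded description to ease use during analysis

# Hypothetical entries for illustration only; the actual questions are in the paper.
question_bank = [
    XAIQuestion("Why", "Why did the system produce this output?",
                "Asks for the reasons behind a specific system decision."),
    XAIQuestion("Data", "What data was the system trained on?",
                "Asks about the provenance and coverage of the training data."),
]

def find_questions(keyword: str) -> list[XAIQuestion]:
    """Return bank entries whose text mentions the keyword, e.g. when coding
    think-aloud utterances against the question bank."""
    keyword = keyword.lower()
    return [q for q in question_bank
            if keyword in q.question.lower() or keyword in q.description.lower()]

print(find_questions("data"))
```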
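The "Privacy Needs Reflection" entry names differential privacy as an example of a privacy-preserving computation technology. The sketch below shows the standard Laplace mechanism for a counting query as a minimal illustration of that general technique; it is not taken from the paper, and the cohort-count scenario is assumed for the example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise scaled to sensitivity / epsilon,
    the standard mechanism for epsilon-differential privacy on numeric queries."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example (assumed scenario): releasing a patient count from a small cohort.
# A counting query has sensitivity 1, since adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))
```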
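The "To Classify is to Interpret" entry describes letting users take multiple models' outputs into account while building a taxonomy. The sketch below illustrates one generic way a model output, here a clustering of short text items, could be surfaced as candidate groupings for the user to review, merge, or rename; the item data and clustering setup are assumptions for illustration, not the system described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Small, made-up collection of heterogeneous items to be organized.
items = [
    "oil painting, portrait, 17th century",
    "marble sculpture, bust, antiquity",
    "portrait photograph, 19th century",
    "bronze sculpture, figure, modern",
    "landscape painting, oil on canvas",
    "plaster bust, classical style",
]

# One model's output: a clustering that proposes candidate taxonomy groups.
vectors = TfidfVectorizer().fit_transform(items)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# The user, not the model, decides whether a cluster becomes a taxonomy node
# and what it should be called.
for cluster_id in sorted(set(labels)):
    members = [item for item, label in zip(items, labels) if label == cluster_id]
    print(f"candidate group {cluster_id}: {members}")
```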