Listing by keyword "XAI"
1 - 5 of 5
- Conference paper: Explainable AI in grassland monitoring: Enhancing model performance and domain adaptability (44. GIL - Jahrestagung, Biodiversität fördern durch digitale Landwirtschaft, 2024) Liu, Shanghua; Hedström, Anna. Grasslands are known for their high biodiversity and their ability to provide multiple ecosystem services. Challenges in automating the identification of indicator plants are key obstacles to large-scale grassland monitoring. These challenges stem from the scarcity of extensive datasets, the distributional shifts between generic and grassland-specific datasets, and the inherent opacity of deep learning models. This paper addresses the latter two challenges, with a specific focus on transfer learning and eXplainable Artificial Intelligence (XAI) approaches to grassland monitoring, highlighting the novelty of XAI in this domain. We analyze various transfer learning methods to bridge the distributional gaps between generic and grassland-specific datasets. Additionally, we showcase how explainable AI techniques can reveal the model's domain adaptation capabilities, employing quantitative assessments to evaluate how well the model centers relevant input features on the object of interest. This research contributes valuable insights for enhancing model performance through transfer learning and for measuring domain adaptability with explainable AI, showing significant promise for broader applications within the agricultural community. (A minimal illustrative sketch of such an attribution-focus metric follows this list.)
- Journal article: I Don't Know, Is AI Also Used in Airbags? - An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence (i-com: Vol. 20, No. 1, 2021) Alizadeh, Fatemeh; Stevens, Gunnar; Esau, Margarita. In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with artificial intelligence (AI) but did not realize it. After three decades of research, we still face the same issue: an unclear understanding of AI among people. The lack of mutual understanding and expectations between AI users and designers, and the ineffective interactions with AI that result, raise the question of how AI is generally perceived today. To address this gap, we conducted 50 semi-structured interviews on perceptions and expectations of AI. Our results reveal that for most people, AI is a dazzling concept that ranges from a simple automated device to a fully controlling agent and a self-learning superpower. We explain how these folk concepts shape users' expectations when interacting with AI and when envisioning its current and future state.
- Journal article: Identifying Competitive Attributes Based on an Ensemble of Explainable Artificial Intelligence (Business & Information Systems Engineering: Vol. 64, No. 4, 2022) Lee, Younghoon. Competitor analysis is a fundamental requirement in both strategic and operational management, and the competitive attributes found in reviewer comments are a crucial determinant of competitor analysis approaches. Most studies have focused on identifying competitors or detecting comparative sentences, not competitive attributes. The authors therefore propose a method based on explainable artificial intelligence (XAI) that detects competitive attributes from consumers' perspectives. They construct a model that classifies the reviewer comments for each competitive product and calculate the importance of each keyword in the reviewer comments during the classification process, based on the assumption that keywords significantly influence product classification. The authors also propose a novel methodology that combines various XAI techniques, such as local interpretable model-agnostic explanations, Shapley additive explanations, logistic regression, gradient-based class activation maps, and layer-wise relevance propagation, to build a robust model for calculating the importance of competitive attributes across various data sources. (A minimal sketch of such an importance ensemble follows this list.)
- Conference paper: Towards a User-Empowering Architecture for Trustability Analytics (BTW 2023, 2023) Bruchhaus, Sebastian; Reis, Thoralf; Bornschlegl, Marco Xaver; Störl, Uta; Hemmje, Matthias. Machine learning (ML) thrives on big data such as huge data sets and streams from IoT devices. These technologies are becoming increasingly commonplace in our day-to-day lives. Learning autonomous intelligent actors (AIAs) already impact our lives in the form of, e.g., chatbots, medical expert systems, and facial recognition systems. As a consequence, doubts concerning the ethical, legal, and social implications of such AIAs are becoming increasingly pressing. Our society now finds itself confronted with decisive questions: Should we trust AI? Is it fair, transparent, and privacy-respecting? An individual psychological threshold for cooperation with AIAs has been postulated; in Shaefer's words: "No trust, no use". On the other hand, ignorance of an AIA's weak points and idiosyncrasies can lead to overreliance. This paper proposes a prototypical microservice architecture for trustability analytics. The architecture is intended to introduce self-awareness concerning trustability into the AI2VIS4BigData reference model for big data analysis and visualization by borrowing the concept of a "looking-glass self" from psychology.
- Conference paper: Zum Einsatz von Maschinellem Lernen in der Umweltverwaltung: Der Simplex4Learning Ansatz (INFORMATIK 2024, 2024) Abecker, Andreas; Budde, Matthias; Fuchs-Kittowski, Frank; Großmann, Janik; Koch, Werner; Lachowitzer, Jonas; Lossow, Stefan; Rodner, Erik; Rudolf, Heino; Schulze, Paul. The goal of the Simplex4Learning research project, launched in autumn 2023, is to make the large and heterogeneous data holdings of environmental authorities more accessible to intelligent analyses based on machine learning (ML) methods, and to make these methods practically usable for domain experts from the environmental sector without in-depth ML knowledge. This is realized (1) through further development of the Simplex4Data method for providing data for ML, complemented by (2) AutoML and MLOps functionalities, (3) functionalities for explaining ML results, and (4) an ML pattern repository for reusing generalized ML workflows, all of which is (5) connected, by way of example, to the data analysis platform Disy Cadenza and the data warehouse system Simplex4Data. The project's work plan is oriented towards concrete example data and use cases from state authorities in three German federal states. This work-in-progress report outlines the motivation and starting point of the project, the technical solution approach, and first intermediate results.
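Following up on the grassland monitoring entry above: a minimal, illustrative sketch (not the paper's exact metric) of one way to quantify whether a model's attribution is centered on the object of interest, namely the share of non-negative attribution mass falling inside the object's bounding box. The function name, the box format (x0, y0, x1, y1), and the toy data are assumptions made for illustration.

```python
import numpy as np

def attribution_mass_in_box(attribution: np.ndarray, box: tuple) -> float:
    """Fraction of non-negative attribution mass inside the bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    a = np.clip(attribution, 0, None)   # keep only positive evidence
    total = a.sum()
    if total == 0:
        return 0.0
    inside = a[y0:y1, x0:x1].sum()
    return float(inside / total)

# Toy example: a 100x100 attribution map with strong evidence on an assumed "plant" region.
rng = np.random.default_rng(0)
attr = rng.random((100, 100))
attr[30:60, 40:70] += 5.0
print(attribution_mass_in_box(attr, (40, 30, 70, 60)))  # close to 1.0 when the model is focused
```

A score near 1.0 would indicate that the explanation concentrates on the annotated object, which is one plausible way to operationalize the "accurately centering relevant input features" assessment described in the abstract.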
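Following up on the competitive-attributes entry above: a minimal sketch of the general idea of ensembling several keyword-importance estimates into one ranking. This is not the author's implementation; LIME, SHAP, Grad-CAM, and LRP are replaced here by two easily verifiable stand-ins (logistic-regression coefficients and permutation importance) on an invented toy corpus.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Toy reviews labelled by which of two competing products they describe.
reviews = [
    "battery life is great and the camera is sharp",
    "camera quality is poor but battery lasts long",
    "screen is bright and the speakers sound clear",
    "speakers are weak although the screen looks fine",
]
labels = np.array([0, 0, 1, 1])

vec = TfidfVectorizer()
X = vec.fit_transform(reviews).toarray()
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Importance source 1: absolute model coefficients.
coef_imp = np.abs(clf.coef_[0])

# Importance source 2: permutation importance (on training data here, purely for illustration).
perm_imp = permutation_importance(clf, X, labels, n_repeats=10,
                                  random_state=0).importances_mean

def normalize(v):
    """Scale importances to [0, 1] so different methods are comparable."""
    v = np.clip(v, 0, None)
    return v / v.max() if v.max() > 0 else v

# Ensemble: average the normalized scores, then rank keywords as candidate competitive attributes.
ensemble = (normalize(coef_imp) + normalize(perm_imp)) / 2
ranking = sorted(zip(vec.get_feature_names_out(), ensemble),
                 key=lambda t: -t[1])
print(ranking[:5])
```

The design choice to normalize each importance vector before averaging keeps methods with different output scales from dominating the combined ranking; the actual paper combines the XAI techniques named in the abstract rather than these stand-ins.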