Listing by keyword "Interpretability"
1 - 3 of 3
- Conference paper: Explainable Online Reinforcement Learning for Adaptive Systems (Software Engineering 2023, 2023) Feit, Felix; Metzger, Andreas; Pohl, Klaus. This talk presents our work on explainable online reinforcement learning for self-adaptive systems, published at the 3rd IEEE Intl. Conf. on Autonomic Computing and Self-Organizing Systems.
- Conference paper: Explaining ECG Biometrics: Is It All In The QRS? (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Pinto, João Ribeiro; Cardoso, Jaime S. The literature seems to indicate that the QRS complex is the most important component of the electrocardiogram (ECG) for biometrics. To verify this claim, we use interpretability tools to explain how a convolutional neural network uses ECG signals to identify people, using on-the-person (PTB) and off-the-person (UofTDB) signals. While the QRS complex does appear to be a key feature in ECG biometrics, especially with cleaner signals, the results indicate that, for larger populations in off-the-person settings, the QRS shares relevance with other heartbeat components, which it is essential to locate. These insights suggest that avoiding excessive focus on the QRS complex, by using decision explanations during training, could be useful for model regularisation.
- Text document: GAFAI: Proposal of a Generalized Audit Framework for AI (INFORMATIK 2022, 2022) Markert, Thora; Langer, Fabian; Danos, Vasilios. ML-based AI applications are increasingly used in various fields and domains. Despite the enormous and promising capabilities of ML, the inherent lack of robustness, explainability, and transparency limits the potential use cases of AI systems. In particular, within every safety- or security-critical area, such limitations require risk considerations and audits to comply with the prevailing safety and security demands. Unfortunately, existing standards and audit schemes do not completely cover ML-specific issues, leading to challenging or incomplete mappings of ML functionality onto the existing methodologies. We therefore propose a generalized audit framework for ML-based AI applications (GAFAI) as an anticipation of and aid towards auditability. This conceptual, risk- and requirement-driven approach, based on sets of generalized requirements and their corresponding application-specific refinements, contributes to closing the gaps in auditing AI.