Journal article
AutoRAG: Grounding Text and Symbols
Full-text URI
Document type
Text/Journal Article
Additional information
Date
2024
Authors
Journal title
Journal ISSN
Volume title
Publisher
Springer
Abstract
In safety-critical domains such as healthcare, systems for natural language question answering demand special correctness guarantees. Modeling a problem domain formally allows for automatic, transparent reasoning, but handling comprehensive formal models can quickly demand expert knowledge. Ultimately, we need a system that is as easily accessible as large language models while the correctness of its output remains checkable against trusted knowledge. Since words are ambiguous in general but the concepts of a formal model are not, we propose to expand the vocabulary of a language model with the concepts of a knowledge base. Motivated by retrieval-augmented generation, we introduce AutoRAG, which does not retrieve data from external sources; instead, it perceives parts of the knowledge base through special vocabulary, trained by auto-encoding text and concepts. Our AutoRAG implementation for a use case in the field of nosocomial pneumonia describes the concepts it associates with the input and can naturally provide a graphical depiction from the expert-made knowledge base, allowing for feasible sanity checks of the text.
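As a rough illustration of the vocabulary-expansion idea the abstract mentions (a minimal sketch, not the authors' implementation), one could register knowledge-base concepts as additional tokens of a pretrained language model and then train the extended embeddings by auto-encoding paired text and concept sequences. The base model, concept names, and training pair below are assumptions chosen for illustration.

```python
# Illustrative sketch only: expanding a language-model vocabulary with
# knowledge-base concept tokens. Model name, concept IDs, and the training
# setup are assumptions, not the method described in the article.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical concepts drawn from a formal knowledge base; each concept
# becomes one unambiguous token, unlike ordinary (ambiguous) words.
concept_tokens = ["[C:NosocomialPneumonia]", "[C:Antibiotic]", "[C:Pathogen]"]
num_added = tokenizer.add_tokens(concept_tokens, special_tokens=True)

# Grow the embedding matrix so the new concept tokens get trainable vectors.
model.resize_token_embeddings(len(tokenizer))

# Auto-encoding-style training pair (schematic): the model sees text together
# with an associated concept token and learns to reproduce the sequence.
example = "Patient develops pneumonia 72h after admission. [C:NosocomialPneumonia]"
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(num_added, "concept tokens added; loss:", float(outputs.loss))
```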