Leveraging Distillation Techniques for Document Understanding: A Case Study with FLAN-T5
dc.contributor.author | Lamott, Marcel | |
dc.contributor.author | Shakir, Muhammad Armaghan | |
dc.contributor.editor | Klein, Maike | |
dc.contributor.editor | Krupka, Daniel | |
dc.contributor.editor | Winter, Cornelia | |
dc.contributor.editor | Gergeleit, Martin | |
dc.contributor.editor | Martin, Ludger | |
dc.date.accessioned | 2024-10-21T18:24:13Z | |
dc.date.available | 2024-10-21T18:24:13Z | |
dc.date.issued | 2024 | |
dc.description.abstract | The surge of digital documents in various formats, including less standardized documents such as business reports and environmental assessments, underscores the growing importance of Document Understanding. While Large Language Models (LLMs) have showcased prowess across diverse natural language processing tasks, their direct application to Document Understanding remains a challenge. Previous research has demonstrated the utility of LLMs in this domain, yet their significant computational demands make them challenging to deploy effectively. Additionally, proprietary black-box LLMs often outperform their open-source counterparts, posing a barrier to widespread accessibility. In this paper, we delve into the realm of document understanding, leveraging distillation methods to harness the power of large LLMs while accommodating computational limitations. Specifically, we present a novel approach wherein we distill document understanding knowledge from the proprietary LLM ChatGPT into FLAN-T5. Our methodology integrates labeling and curriculum-learning mechanisms to facilitate efficient knowledge transfer. This work contributes to the advancement of document understanding methodologies by offering a scalable solution that bridges the gap between resource-intensive LLMs and practical applications. Our findings underscore the potential of distillation techniques in facilitating the deployment of sophisticated language models in real-world scenarios, thereby fostering advancements in natural language processing and document comprehension domains. | en |
dc.identifier.doi | 10.18420/inf2024_120 | |
dc.identifier.eissn | 2944-7682 | |
dc.identifier.isbn | 978-3-88579-746-3 | |
dc.identifier.issn | 2944-7682 | |
dc.identifier.pissn | 1617-5468 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/45093 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik e.V. | |
dc.relation.ispartof | INFORMATIK 2024 | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-352 | |
dc.subject | Document Understanding | |
dc.subject | Large Language Models | |
dc.subject | Layout Understanding | |
dc.subject | Knowledge Distillation | |
dc.title | Leveraging Distillation Techniques for Document Understanding: A Case Study with FLAN-T5 | en |
dc.type | Text/Conference Paper | |
gi.citation.endPage | 1381 | |
gi.citation.publisherPlace | Bonn | |
gi.citation.startPage | 1371 | |
gi.conference.date | 24.-26. September 2024 | |
gi.conference.location | Wiesbaden | |
gi.conference.sessiontitle | AI@WORK | |
Files
- Name: Lamott_Shakir_Leveraging_Distillation_Techniques.pdf
- Size: 646.29 KB
- Format: Adobe Portable Document Format