Explaining the Output of Natural Language Processing Models for the Fact-Checking Task
Explaining NLP Model Predictions for Fact-Checking Pipeline
Document type
Master thesis
Author
Eliška Kopecká
Supervisor
Drchal Jan
Opponent
Šír Gustav
Field of study
Data Science
Study programme
Open Informatics
Degree-granting institution
Department of Computer Science
Rights
A university thesis is a work protected by the Copyright Act. Extracts, copies and transcripts of the thesis are allowed for personal use only and at one's own expense. The use of the thesis should be in compliance with the Copyright Act http://www.mkcr.cz/assets/autorske-pravo/01-3982006.pdf and the citation ethics http://knihovny.cvut.cz/vychova/vskp.html
Abstract
This thesis explores interpretability methods and the possibilities of their application to natural language processing (NLP) models used within a fact-checking pipeline. More specifically, it focuses on the application of two local, model-agnostic interpretability methods, LIME and SHAP, to natural language inference (NLI) models used to infer a veracity label from a claim and a context. In this work, we modify and apply the SHAP and LIME interpretability methods to the NLI models and develop a text-augmented version of LIME. We then test various parameter settings to find the optimal parametrization for each method, and compare the two methods in a binary forced-choice experiment with human-grounded evaluation. For both datasets used within the project, SHAP is found to produce more helpful explanations.
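The local, model-agnostic perturbation idea behind LIME and SHAP can be illustrated with a minimal sketch: treat the model as a black box, score random subsets of the input tokens, and attribute to each token the difference between the model's average score when the token is present and when it is absent. Everything below is hypothetical illustration — `toy_veracity_model` is a stand-in, not the thesis's NLI model, and this mean-difference scheme is a simplification of the sampling procedures the actual LIME and SHAP libraries use.

```python
import random

def toy_veracity_model(tokens):
    # Hypothetical stand-in for an NLI model's "SUPPORTS" probability:
    # scores the fraction of tokens that overlap a fixed evidence vocabulary.
    evidence = {"paris", "capital", "france"}
    if not tokens:
        return 0.0
    return sum(t.lower() in evidence for t in tokens) / len(tokens)

def perturbation_attributions(tokens, model, n_samples=500, seed=0):
    """Crude local attribution in the spirit of LIME/SHAP sampling:
    for each token, mean model score over random subsets containing it
    minus mean score over subsets omitting it."""
    rng = random.Random(seed)
    n = len(tokens)
    present_sum, present_cnt = [0.0] * n, [0] * n
    absent_sum, absent_cnt = [0.0] * n, [0] * n
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]
        score = model([t for t, keep in zip(tokens, mask) if keep])
        for i, keep in enumerate(mask):
            if keep:
                present_sum[i] += score
                present_cnt[i] += 1
            else:
                absent_sum[i] += score
                absent_cnt[i] += 1
    return [
        present_sum[i] / max(present_cnt[i], 1) - absent_sum[i] / max(absent_cnt[i], 1)
        for i in range(n)
    ]

claim = "Paris is the capital of France".split()
attributions = perturbation_attributions(claim, toy_veracity_model)
```

Under this toy model, evidence-bearing tokens ("Paris", "capital", "France") receive positive attributions while filler words receive negative ones, mirroring the kind of word-level explanation LIME and SHAP produce for an NLI classifier.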
Collections
- Master theses - 13136 [892]