The 10 most recently published documents
(2024)
While AI’s accuracy is impressive, it often operates opaquely, leaving users puzzled by its decisions. Explainable AI (XAI) seeks to demystify these processes, yet it encounters usability hurdles, often favouring developers over end-users. This paper introduces EXPERT-DUO, a flexible framework for Explainable Object Classification. While demonstrated in the domain of surgical tool classification, EXPERT-DUO is a versatile system applicable across domains. Operating as an assistant system for users, the framework accommodates varying levels of domain knowledge and provides understandable decisions through a hierarchical methodology. The pipeline starts by segmenting the object into its parts, recognizing and classifying the parts that make up the main object, progresses to attribute classification, and culminates in the classification of the complete object using an expert decision tree that encodes the domain knowledge. EXPERT-DUO aims to assist users by offering transparent and understandable reasoning for its object classifications. This approach enables users to make rational and informed judgments about how far to trust the model’s decisions. Experimental results within the surgical context demonstrate the effectiveness of the approach and underscore EXPERT-DUO’s potential to enhance user confidence in AI systems across a spectrum of domains, thereby facilitating broader adoption of AI technologies.
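To make the hierarchical methodology more concrete, the following is a minimal Python sketch of a parts-to-attributes-to-object pipeline in the spirit of the abstract. The stage functions, the toy surgical-tool labels, and the hand-coded decision rules are illustrative assumptions only and do not reproduce the paper's implementation.

```python
# Minimal sketch of a hierarchical "parts -> attributes -> object" pipeline in
# the spirit of EXPERT-DUO. All class names, toy labels, and decision rules
# below are hypothetical illustrations, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class PartPrediction:
    part_label: str              # e.g. "tip", "handle" for a surgical tool
    attributes: dict[str, str]   # e.g. {"tip_shape": "curved"}


def segment_and_classify_parts(image) -> list[PartPrediction]:
    """Stages 1-2: segment the object into parts and classify each part.
    A real system would run trained segmentation/classification models here;
    this stub returns fixed toy predictions so the sketch stays executable."""
    return [
        PartPrediction("tip", {"tip_shape": "curved"}),
        PartPrediction("handle", {"lock": "ratchet"}),
    ]


def classify_attributes(parts: list[PartPrediction]) -> dict[str, str]:
    """Stage 3: aggregate per-part attributes into object-level attributes."""
    attrs: dict[str, str] = {}
    for part in parts:
        attrs.update(part.attributes)
    return attrs


def expert_decision_tree(attrs: dict[str, str]) -> tuple[str, list[str]]:
    """Stage 4: map attributes to a final label via hand-encoded expert rules,
    returning the label together with the rules that fired (the explanation)."""
    trace: list[str] = []
    if attrs.get("tip_shape") == "curved":
        trace.append("tip_shape == curved -> dissecting instrument family")
        label = "curved dissector"
    else:
        trace.append("tip_shape != curved -> grasping instrument family")
        label = "grasper"
    return label, trace


def classify(image) -> tuple[str, list[str]]:
    """Full pipeline: parts -> attributes -> expert decision tree."""
    parts = segment_and_classify_parts(image)
    attrs = classify_attributes(parts)
    return expert_decision_tree(attrs)


if __name__ == "__main__":
    label, explanation = classify(image=None)  # image unused by the toy stub
    print(label)
    for step in explanation:
        print(" -", step)
```

The returned trace of fired rules is the kind of human-readable reasoning an assistant-style system could surface alongside the predicted label.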
(2024)
LinPair introduces a new approach to tokenization, tackling challenges in processing textual data for machine learning applications. The method refines WordPiece tokenization by incorporating linguistic insights into token-tag pairs, thereby enriching Large Language Models (LLMs) with syntactic information. LinPair Tokenization comes in two distinct strategies: CompleteLinPair Tokenization, which is tailored to smaller datasets and enriches each token with part-of-speech (POS) information, and SmartLinPair Tokenization, which is designed to mitigate the impact of out-of-vocabulary (OOV) occurrences. The experimental analysis indicates that SmartLinPair Tokenization improves text classification with BERT, increasing the F1-Score by up to 13.9% compared to a WordPiece baseline, whereas CompleteLinPair Tokenization decreases classification performance. It is important to note that these findings are experiment-specific and may not generalize to other contexts.
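As an illustration of the token-tag pairing idea, the following self-contained Python sketch splits words with a toy WordPiece-style vocabulary and attaches a part-of-speech tag to every resulting subword. The tiny lexicon, the `_POS` pairing format, and the fallback tag for unknown words are assumptions for demonstration only; the abstract does not specify how CompleteLinPair or SmartLinPair actually encode the pairs.

```python
# Minimal sketch of a "token-tag pair" tokenizer in the spirit of LinPair:
# each subword token is paired with the POS tag of the word it came from.
# The toy POS lexicon, toy vocabulary, and pairing format are illustrative
# assumptions, not the paper's CompleteLinPair/SmartLinPair schemes.

TOY_POS = {"the": "DET", "robot": "NOUN", "classifies": "VERB", "tools": "NOUN"}
TOY_VOCAB = {"the", "robot", "class", "##ifies", "tool", "##s"}


def toy_wordpiece(word: str) -> list[str]:
    """Greedy longest-match subword split against TOY_VOCAB (WordPiece-like)."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in TOY_VOCAB:
                pieces.append(piece)
                start = end
                break
            end -= 1
        else:
            return ["[UNK]"]  # no subword match: fall back to an unknown token
    return pieces


def linpair_tokenize(sentence: str) -> list[str]:
    """Pair every subword with the POS tag of its originating word."""
    paired = []
    for word in sentence.lower().split():
        pos = TOY_POS.get(word, "X")  # "X" as a catch-all tag for OOV words
        for piece in toy_wordpiece(word):
            paired.append(f"{piece}_{pos}")
    return paired


print(linpair_tokenize("the robot classifies tools"))
# ['the_DET', 'robot_NOUN', 'class_VERB', '##ifies_VERB', 'tool_NOUN', '##s_NOUN']
```

In a real setting the paired tokens would form the vocabulary of the model's tokenizer, so the syntactic tag travels with each subword into the LLM's input embeddings.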