The common corpus-optimization method "stop word removal" is based on the assumption that text tokens with a high occurrence frequency can be removed without affecting classification performance. It ignores linguistic information about sentence structure as well as the preferences of the classification technology. We propose the Weighted Unimportant Part-of-Speech Model (WUP-Model) for token removal during the pre-processing of text corpora. The weighted relevance of a token is determined from its classification relevance and its impact on classification performance. The WUP-Model uses linguistic information (part of speech) as the grouping criterion. Analogous to a stop word list, we provide a set of irrelevant parts of speech (a WUP-Instance) for word removal. As a proof of concept, we created WUP-Instances for several classification algorithms. The evaluation showed significant advantages over classic stop word removal: the tree-based classifier improved its runtime by 65% and its classification performance by 25%. The performance of the other classifiers decreased by between 0.2% and 2.4%, while their runtime improved by between 4.4% and 24.7%. These results demonstrate the beneficial effects of the proposed WUP-Model.
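The core pre-processing step can be sketched as follows: instead of matching tokens against a fixed stop word list, documents are filtered by the part-of-speech tag of each token. This is a minimal illustrative sketch only; the toy tag set, the example documents, and the `UNIMPORTANT_POS` set are placeholders, not the paper's actual WUP-Instances or weighting scheme.

```python
# Sketch of POS-based token removal in the spirit of the WUP-Model.
# Assumes the corpus is already POS-tagged (tagging is out of scope here);
# the tags and the "unimportant" set below are illustrative assumptions.

tagged_docs = [
    [("the", "DET"), ("fast", "ADJ"), ("engine", "NOUN"), ("fails", "VERB")],
    [("an", "DET"), ("engine", "NOUN"), ("runs", "VERB"), ("smoothly", "ADV")],
]

# Hypothetical WUP-Instance for some classifier: POS tags whose tokens
# are assumed to carry little classification-relevant information.
UNIMPORTANT_POS = {"DET", "ADV"}

def wup_filter(doc, unimportant_pos):
    """Drop tokens whose POS tag is in the unimportant-POS set."""
    return [tok for tok, pos in doc if pos not in unimportant_pos]

filtered = [wup_filter(doc, UNIMPORTANT_POS) for doc in tagged_docs]
print(filtered)
# [['fast', 'engine', 'fails'], ['engine', 'runs']]
```

Unlike a stop word list, the removal set here is defined over POS categories rather than surface forms, so one compact WUP-Instance can be tuned per classification algorithm instead of maintaining per-domain word lists.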