Smart Condition Monitoring for Industry 4.0 Manufacturing Processes: An Ontology-Based Approach
(2019)
Combining Chronicle Mining and Semantics for Predictive Maintenance in Manufacturing Processes
(2020)
In the context of Industry 4.0, smart factories use advanced sensing and data analytics technologies to understand and monitor manufacturing processes. To enhance production efficiency and reliability, statistical Artificial Intelligence (AI) technologies such as machine learning and data mining are used to detect and predict potential anomalies within manufacturing processes. However, due to the heterogeneous nature of industrial data, the knowledge extracted from such data is sometimes represented in complex structures. This gives rise to the semantic gap problem, i.e., a lack of interoperability among different manufacturing systems. Furthermore, as Cyber-Physical Systems (CPS) become more knowledge-intensive, a uniform knowledge representation of physical resources and real-time reasoning capabilities for analytic tasks are needed to automate decision-making in these systems. These requirements highlight the potential of using symbolic AI for predictive maintenance.
To automate and facilitate predictive analytics in Industry 4.0, in this paper, we present a novel Knowledge-based System for Predictive Maintenance in Industry 4.0 (KSPMI). KSPMI is built on a novel hybrid approach that leverages both statistical and symbolic AI technologies. Statistical AI technologies such as machine learning and chronicle mining (a special type of sequential pattern mining) are used to extract machine degradation models from industrial data. Symbolic AI technologies, in particular domain ontologies and logic rules, then use the extracted chronicle patterns to query and reason over system input data enriched with domain and contextual knowledge. This hybrid approach uses Semantic Web Rule Language (SWRL) rules generated from chronicle patterns, together with domain ontologies, to perform ontology reasoning, which enables the automatic detection of machinery anomalies and the prediction of future events. KSPMI is evaluated and tested on both real-world and synthetic data sets.
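As a rough illustration of the rule-generation step described in this abstract, the sketch below renders a mined chronicle pattern as a SWRL rule string. The paper does not publish its rule templates, so the ontology vocabulary (Machine, hasEvent, hasTime, FailurePredicted) and the pattern structure are illustrative assumptions:

```python
# Minimal sketch: turning a mined chronicle pattern into a SWRL rule string.
# The ontology vocabulary and pattern format are assumptions, not the
# paper's actual templates.
from dataclasses import dataclass

@dataclass
class Chronicle:
    """A chronicle: ordered event types plus a time-interval constraint."""
    events: list[str]      # e.g. ["TempSpike", "VibrationHigh"]
    max_gap_seconds: int   # upper bound between first and last event

def chronicle_to_swrl(c: Chronicle) -> str:
    """Render the chronicle as a SWRL rule predicting a failure event."""
    atoms = ["Machine(?m)"]
    for i, ev in enumerate(c.events):
        atoms += [f"hasEvent(?m, ?e{i})", f"{ev}(?e{i})", f"hasTime(?e{i}, ?t{i})"]
    last = len(c.events) - 1
    # SWRL built-ins: subtract(result, arg1, arg2), then bound the gap.
    atoms.append(f"swrlb:subtract(?d, ?t{last}, ?t0)")
    atoms.append(f"swrlb:lessThanOrEqual(?d, {c.max_gap_seconds})")
    return " ^ ".join(atoms) + " -> FailurePredicted(?m)"

if __name__ == "__main__":
    pattern = Chronicle(events=["TempSpike", "VibrationHigh"], max_gap_seconds=600)
    print(chronicle_to_swrl(pattern))
```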
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest for visual inspection. Alongside these AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are promising for quality inspection in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focus on visual quality control (VQC) for defect detection. Research addressing broader VQA practices, such as process optimization, predictive maintenance, or root cause analysis, is rarer, and papers that utilize XAI methods are the least common. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industries.
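As one concrete example of the kind of XAI technique surveyed papers apply to CNN-based visual inspection, the sketch below implements Grad-CAM over a pretrained classifier. The survey itself does not prescribe this method; the model and layer choice are illustrative assumptions:

```python
# Hedged sketch: Grad-CAM heatmaps as one XAI technique for CNN-based
# visual inspection. Model and target layer are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last conv block of ResNet-18

feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Return a heatmap (H, W) highlighting regions driving the top class."""
    logits = model(x)
    logits[0, logits.argmax()].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
    return (cam / cam.max()).squeeze()

# Stand-in for a real inspection image; overlay the heatmap to explain a decision.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```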
While the number of devices connected together as the Internet of Things (IoT) grows, the demand for an efficient and secure model of resource discovery in IoT is increasing. An efficient resource discovery model distributes the registration and discovery workload among many nodes and allows resources to be discovered based on their attributes. In most cases, this discovery ability should be restricted to certain clients based on their attributes; otherwise, any client in the system can discover any registered resource. Under a binary discovery policy, any client with the shared secret key can discover and decrypt the address data of a registered resource, regardless of the client's attributes. In this paper, we propose Attred, a decentralized resource discovery model using a Region-based Distributed Hash Table (RDHT) that allows secure and location-aware discovery of resources in an IoT network. Using Attribute-Based Encryption (ABE) and discovery policies predefined by the resources, Attred allows clients to discover resources solely on the basis of their inherent attributes. Attred distributes the workload of key generation and resource registration, and reduces the risk of central authority management. In addition, some of the heavy computations in our proposed model can be securely distributed using secret sharing, which allows more efficient resource registration without affecting the required security properties. The performance analysis results showed that the distributed computation can significantly reduce the computation cost while maintaining the functionality. The performance and security analysis results also showed that our model can efficiently provide the required security properties of discovery correctness, soundness, resource privacy, and client privacy.
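As a minimal sketch of the secret-sharing primitive mentioned in this abstract (the kind Attred uses to distribute heavy computations), the code below implements Shamir's scheme over a prime field. The field size and threshold are illustrative, and this is not the paper's actual protocol:

```python
# Minimal sketch of Shamir secret sharing over GF(PRIME).
# Field size and threshold are demo assumptions, not Attred's parameters.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for demo secrets

def make_shares(secret: int, threshold: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
```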
The Present and Future of a Digital Montenegro: Analysis of C-ITS, Agriculture, and Healthcare
(2023)
Data processed in context is more meaningful, easier to understand, and has higher information content, since it derives its semantic meaning from the surrounding context. This also holds in the field of acoustic signal processing. In this work, a deep-learning-based approach using ensemble neural networks to integrate context into a learning system is presented. For this purpose, different use cases are considered, and the method is demonstrated using acoustic signal processing of machine sound data for valves, pumps, and slide rails. Mel-spectrograms are used to train convolutional neural networks, so that acoustic data can be analysed with image processing techniques.
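A minimal sketch of the pre-processing step this abstract describes, turning machine sound into a log-mel spectrogram suitable as CNN input; the sample rate, mel parameters, and file name are illustrative assumptions rather than the study's exact configuration:

```python
# Hedged sketch: machine sound -> log-mel spectrogram "image" for a CNN.
# Sample rate and mel parameters are illustrative, not the paper's values.
import librosa
import numpy as np

def sound_to_mel_image(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Load a machine-sound clip and return a log-mel spectrogram (n_mels, T)."""
    y, sr = librosa.load(path, sr=sr)                    # mono waveform
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)          # dB scale for CNN input

# e.g. pump_mel = sound_to_mel_image("pump_recording.wav")  # hypothetical file
```

An ensemble in this setting could then train one such CNN per context (valve, pump, slide rail) and combine their outputs.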
Nowadays, machine learning projects have become increasingly relevant to various real-world use cases. Because the success of complex neural network models depends on many factors, the need for structured, machine-learning-centric project development and management arises. Due to the multitude of tools available for different operational phases, responsibilities and requirements become increasingly unclear. In this work, Machine Learning Operations (MLOps) technologies and tools for every part of the overall project pipeline, as well as the roles involved, are examined and clearly defined. Focusing on the interconnectivity of specific tools and comparing them against well-selected MLOps requirements, metrics for model performance, input data, and system quality are briefly discussed. By identifying aspects of machine learning that can be reused from project to project, open-source tools that help in specific parts of the pipeline, and possible combinations, an overview of support in MLOps is given. Deep learning has revolutionized the field of image processing, and building an automated machine learning workflow for object detection is of great interest to many organizations. To this end, a simple MLOps workflow for object detection with images is presented.
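As a hedged illustration of one MLOps building block discussed in this abstract, the sketch below tracks an experiment with MLflow, one of the open-source tools commonly used in such pipelines; the parameter names and metric values are placeholders, not results from the paper:

```python
# Hedged sketch: experiment tracking with MLflow so runs are reproducible
# and comparable. All values below are placeholders.
import mlflow

def train_and_track(lr: float, epochs: int) -> None:
    """Log parameters and a per-epoch metric for a hypothetical detector."""
    with mlflow.start_run(run_name="object-detection-demo"):
        mlflow.log_params({"lr": lr, "epochs": epochs, "model": "yolo-like"})
        for epoch in range(epochs):
            map50 = 0.5 + 0.04 * epoch  # stand-in for a real mAP@0.5 value
            mlflow.log_metric("mAP50", map50, step=epoch)

train_and_track(lr=1e-3, epochs=5)
```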
The common corpus optimization method "stop word removal" is based on the assumption that text tokens with high occurrence frequency can be removed without affecting classification performance. Linguistic information about sentence structure is ignored, as are the preferences of the classification technology. We propose the Weighted Unimportant Part-of-Speech Model (WUP-Model) for token removal in the pre-processing of text corpora. The weighted relevance of a token is determined using classification relevance and classification performance impact. The WUP-Model uses linguistic information (part of speech) as the grouping criterion. Analogous to stop word removal, we provide a set of irrelevant parts of speech (a WUP-Instance) for word removal. In a proof of concept, we created WUP-Instances for several classification algorithms. The evaluation showed significant advantages compared to classic stop word removal: the tree-based classifier's runtime increased by 65% and its performance by 25%, while the performance of the other classifiers decreased between 0.2% and 2.4% and their runtime improved (a change of −4.4% to −24.7%). These results demonstrate the beneficial effects of the proposed WUP-Model.
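A minimal sketch in the spirit of the WUP-Model, using NLTK to drop tokens whose part-of-speech tag falls in an "unimportant" set; the tag set below is an illustrative WUP-Instance, not one evaluated in the paper:

```python
# Hedged sketch: POS-based token removal in the spirit of the WUP-Model.
# The tag set is an illustrative WUP-Instance, not a set from the paper.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

UNIMPORTANT_POS = {"DT", "IN", "CC", "TO"}  # determiners, prepositions, conjunctions, "to"

def wup_filter(text: str) -> list[str]:
    """Tokenize, POS-tag, and remove tokens whose tag is deemed unimportant."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return [tok for tok, tag in tagged if tag not in UNIMPORTANT_POS]

print(wup_filter("The pump in the factory is running and the valve is open"))
# -> ['pump', 'factory', 'is', 'running', 'valve', 'is', 'open']
```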