Health informatics plays a crucial role in modern healthcare provision. Training and continuous education are essential to bolster the healthcare workforce in health informatics. In this work, we present the training events within the EU-funded DigNest project. We describe the aim of the training events, the subjects offered, and the overall evaluation of the results.
The digital transformation of companies is expected to increase the digital interconnection between different companies in order to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable, and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators, and service providers. This paper discusses how the major requirements for building hybrid business models can be tackled by blockchain, for building a chain of trust, and by smart contracts, for digitized contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for the automation of workflows defined in contracts. Furthermore, it is shown that the number of failures is significantly reduced by using these contracts and a blockchain.
Machine learning applications such as machine condition monitoring and predictive maintenance have become state of the art in Industry 4.0. Decision trees are one of the many machine learning algorithms used for decision-making. A new approach for creating distributed decision trees, called node-based parallelization, is presented. It allows data to be classified through a network of industrial devices, where each industrial device is responsible for a single classification rule. Nodes that react incorrectly, for example due to an attack, are also taken into account, using a variety of methods to keep the decision-making process correct and robust.
In Industry 4.0, machine learning approaches are state of the art for predictive maintenance, machine condition monitoring, and other applications. Distributed decision trees are one of the learning algorithms for such applications. A new node-based parallelization approach for their construction is presented, which allows data to be classified through a network of nodes. Attacks on the nodes are discussed based on different attack scenarios, and attack classifications are presented. A thorough analysis of protection measures is given, such that classification is not maliciously modified by an attacker. Different countermeasures are proposed and analyzed. A quorum-based system allows for a good balance between computational overhead and robustness of the algorithm.
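The node-based idea can be sketched as follows. This is a minimal illustration with assumed names, not the authors' implementation: each device evaluates exactly one rule, and a small quorum of replicated rule evaluations stands in for the countermeasures against a corrupted device.

```python
# Sketch of node-based parallelization: each "device" holds a single
# decision rule and forwards the sample to the next node in the network.
# The replica/quorum scheme is an illustrative assumption.
from dataclasses import dataclass
from collections import Counter
from typing import Optional

@dataclass
class RuleNode:
    feature: int          # index of the feature this device tests
    threshold: float      # split value of the rule
    left: Optional[str]   # next node id if feature value <= threshold
    right: Optional[str]  # next node id otherwise
    label: Optional[str] = None  # set on leaf nodes only

def classify(network: dict, sample: list, replicas: int = 3) -> str:
    """Walk the distributed tree; each step is decided by a majority
    vote over `replicas` copies of the same rule evaluation."""
    node_id = "root"
    while True:
        node = network[node_id]
        if node.label is not None:
            return node.label
        # A corrupted replica could vote for the wrong branch, so the
        # majority decision among replicas is taken.
        votes = Counter(
            node.left if sample[node.feature] <= node.threshold else node.right
            for _ in range(replicas)
        )
        node_id = votes.most_common(1)[0][0]

# Toy network: one rule on a temperature feature, two leaf nodes.
network = {
    "root": RuleNode(feature=0, threshold=50.0, left="cool", right="hot"),
    "cool": RuleNode(feature=0, threshold=0.0, left=None, right=None, label="normal"),
    "hot":  RuleNode(feature=0, threshold=0.0, left=None, right=None, label="overheating"),
}
```

Here the quorum is trivially unanimous because all replicas are honest; the point of the sketch is only where the vote sits in the classification walk.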
Ensuring data quality is central to the digital transformation in industry. Business processes such as predictive maintenance or condition monitoring can be implemented or improved based on the available data. To guarantee high data quality, a single data validation system is usually used to validate the production data for further use. However, with a single system, an attacker needs only one successful attack to corrupt the whole pipeline. We present a new approach in which a data validation system using multiple different validators minimizes the attacker's probability of success. The validators are arranged in clusters based on their properties. For each validation process, a challenge is issued that specifies which validators should perform the current validation; validation results from all other validators are dropped. This ensures that anomalies can be detected during the validation process even if more than half of the validators are corrupted.
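The challenge-driven selection of validators can be sketched as follows. The cluster layout, selection rule, and all names are illustrative assumptions, not the paper's protocol:

```python
# Sketch of challenge-based validator selection: validators are grouped
# into clusters by property, a challenge deterministically names the
# active validators, and results from all others are dropped.
import hashlib

# Validator clusters grouped by property (assumed example layout).
clusters = {
    "range-check":  ["v1", "v2", "v3"],
    "schema-check": ["v4", "v5"],
    "stat-check":   ["v6", "v7", "v8"],
}

def select_validators(challenge: bytes) -> set:
    """Pick one validator per cluster from the challenge digest, so an
    attacker cannot know in advance which validators to corrupt."""
    chosen = set()
    for name, members in sorted(clusters.items()):
        digest = hashlib.sha256(challenge + name.encode()).digest()
        chosen.add(members[digest[0] % len(members)])
    return chosen

def validate(challenge: bytes, results: dict) -> bool:
    """Accept only results from the validators named by the challenge;
    results submitted by any other validator are dropped."""
    active = select_validators(challenge)
    accepted = {v: ok for v, ok in results.items() if v in active}
    return set(accepted) == active and all(accepted.values())
```

Because the active set is derived from the challenge, a result injected by a non-selected (possibly corrupted) validator simply never enters the decision.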
Distributed machine learning algorithms that employ Deep Neural Networks (DNNs) are widely used in Industry 4.0 applications, such as smart manufacturing. The layers of a DNN can be mapped onto different nodes located in the cloud, edge and shop floor for preserving privacy. The quality of the data that is fed into and processed through the DNN is of utmost importance for critical tasks, such as inspection and quality control. Distributed Data Validation Networks (DDVNs) are used to validate the quality of the data. However, they are prone to single points of failure when an attack occurs. This paper proposes QUDOS, an approach that enhances the security of a distributed DNN that is supported by DDVNs using quorums. The proposed approach allows individual nodes that are corrupted due to an attack to be detected or excluded when the DNN produces an output. Metrics such as corruption factor and success probability of an attack are considered for evaluating the security aspects of DNNs. A simulation study demonstrates that if the number of corrupted nodes is less than a given threshold for decision-making in a quorum, the QUDOS approach always prevents attacks. Furthermore, the study shows that increasing the size of the quorum has a better impact on security than increasing the number of layers. One merit of QUDOS is that it enhances the security of DNNs without requiring any modifications to the algorithm and can therefore be applied to other classes of problems.
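The quorum effect reported in the simulation study can be illustrated with a toy Monte Carlo sketch. This is our own simplification, not the QUDOS implementation: an attack succeeds only if corrupted nodes reach a majority inside the randomly drawn quorum.

```python
# Toy Monte Carlo estimate of attack success against a majority quorum.
# Parameters and the simulation are illustrative assumptions.
import random

def attack_success_rate(n_nodes, n_corrupted, quorum_size,
                        trials=20000, seed=7):
    """Fraction of random quorums in which corrupted nodes hold a
    strict majority and can therefore flip the decision."""
    rng = random.Random(seed)
    nodes = list(range(n_nodes))
    corrupted = set(nodes[:n_corrupted])
    wins = 0
    for _ in range(trials):
        quorum = rng.sample(nodes, quorum_size)
        if sum(node in corrupted for node in quorum) > quorum_size // 2:
            wins += 1
    return wins / trials
```

Under these assumptions, enlarging the quorum for a fixed minority of corrupted nodes drives the attack success rate down, matching the study's observation that quorum size matters more than layer count.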
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes to create an ML Birth Certificate and an ML Family Tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, providing greater security and traceability for an ML model.
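A minimal sketch of how such a birth certificate and family tree could be chained together, with hash links standing in for the blockchain described above. All field names are illustrative assumptions:

```python
# Sketch of an "ML Birth Certificate" plus "Family Tree" as a
# hash-linked chain of records; tampering with any entry breaks a link.
import hashlib
import json

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def birth_certificate(model_name, dataset_id, hyperparams):
    """First entry of the chain: what the model is and how it was trained."""
    record = {"event": "birth", "model": model_name, "dataset": dataset_id,
              "hyperparams": hyperparams, "prev": None}
    record["hash"] = _digest({k: v for k, v in record.items() if k != "hash"})
    return [record]

def append_retraining(chain, dataset_id, note):
    """Family-tree entry: every retraining links to its predecessor's hash."""
    record = {"event": "retraining", "dataset": dataset_id, "note": note,
              "prev": chain[-1]["hash"]}
    record["hash"] = _digest({k: v for k, v in record.items() if k != "hash"})
    chain.append(record)
    return chain

def verify(chain) -> bool:
    """An unnoticed modification of any entry invalidates the chain."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["hash"] != _digest(body):
            return False
        if i > 0 and record["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

In the proposed system the chain would live on a blockchain rather than in a Python list, which is what makes the records tamper-evident for third parties.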
In recent years, both the Internet of Things (IoT) and blockchain technologies have been highly influential and revolutionary. IoT enables companies to embrace Industry 4.0, the Fourth Industrial Revolution, which benefits from communication and connectivity to reduce cost and to increase productivity through sensor-based autonomy. These automated systems can be further refined with smart contracts that are executed within a blockchain, thereby increasing transparency through continuous and indisputable logging. Ideally, the level of security for these IoT devices should be very high, as they are specifically designed for this autonomous and networked environment. This paper discusses a use case of a company with legacy devices that wants to benefit from the features and functionality of blockchain technology. In particular, the implications of retrofit solutions are analyzed. The use of the BISS:4.0 platform is proposed as the underlying infrastructure; BISS:4.0 is intended to integrate blockchain technologies into existing enterprise environments. Furthermore, a security analysis of IoT and blockchain is presented, and the identified attacks and countermeasures are applied to the mentioned use case.
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Data scientists, researchers, and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks like YOLO use bounding boxes to localize objects in the image.
The principal aim of this paper is to address the lack of an effective metric for evaluating the results of bounding box regression in object detection networks when boxes do not overlap or when one lies completely within the other.
Standard metrics such as IoU fail to differentiate between results that do not overlap but differ in the distance between the predicted bounding box and the label.
To solve this challenge, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of the existing metrics (IoU, GIoU, and DIoU) and extends them with a similarity factor. By assigning a weight to each component of the metric, it allows for a clear differentiation between the three possible cases of box positions (not overlapping, overlapping, and one box inside the other).
The result of this paper is a new metric that outperforms the existing metrics such as IoU, GIoU and DIoU by providing a more understandable measure of the performance of object detection models. This provides researchers and users in the field of explainable AI with a metric that allows the evaluation and comparison of prediction and label bounding boxes in an understandable way.
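For reference, the standard IoU and the distance-aware DIoU that UIoU builds on can be sketched as follows; the UIoU weighting itself is defined in the paper and is not reproduced here. The sketch shows why a distance term is needed: plain IoU is zero for all non-overlapping boxes, however far apart they are.

```python
# IoU and DIoU for axis-aligned boxes given as (x1, y1, x2, y2).
def _area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iou(a, b):
    """Intersection over union; zero whenever the boxes do not overlap."""
    inter = _area((max(a[0], b[0]), max(a[1], b[1]),
                   min(a[2], b[2]), min(a[3], b[3])))
    union = _area(a) + _area(b) - inter
    return inter / union if union else 0.0

def diou(a, b):
    """Distance-IoU: IoU minus the squared centre distance normalised by
    the diagonal of the smallest enclosing box, so non-overlapping boxes
    at different distances get different scores."""
    cx_a, cy_a = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cx_b, cy_b = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2   # enclosing-box diagonal²
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    return iou(a, b) - (d2 / c2 if c2 else 0.0)
```

A nearby non-overlapping box and a distant one both score IoU = 0, while DIoU ranks the nearby box higher, which is the behaviour UIoU unifies with the overlapping and nested cases.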
Artificial intelligence (AI) holds enormous potential in numerous products and services, especially in healthcare and medical technology. Explainability is a central prerequisite for certification procedures around the world and for the fulfilment of transparency obligations. Explainability tools increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but they lack precision.
This paper adapts FastCAM for the domain of detection of medical instruments in endoscopy images. The results show that the Domain Adapted (DA)-FastCAM provides better results for the focus of the model than standard FastCAM weights.
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of the training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model itself. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks.
The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
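How such a TMS could turn ledger-recorded behaviour into an access decision can be sketched as follows. The scoring scheme, decay factor, and threshold are our own illustrative assumptions, not the system evaluated in the paper:

```python
# Sketch of a trust score over a node's recorded interaction history,
# with recent behaviour weighted more heavily than old behaviour.
def trust_score(history, decay=0.8):
    """history: outcomes ordered oldest-first, each 'good' or 'bad'.
    Returns an exponentially decayed score normalised to [-1, 1]."""
    score, weight = 0.0, 1.0
    for outcome in reversed(history):   # newest interaction first
        score += weight * (1.0 if outcome == "good" else -1.0)
        weight *= decay
    total = (1 - decay ** len(history)) / (1 - decay) if history else 1.0
    return score / total

def grant_access(history, threshold=0.2):
    """Access decision the TMS would record on the ledger."""
    return trust_score(history) >= threshold
```

The decay means a node cannot bank goodwill indefinitely: recent malicious behaviour quickly pulls the score below the access threshold.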
Industrial Internet of Things (IIoT) systems are enhancing the delivery of services and boosting productivity in a wide array of industries, from manufacturing to healthcare. However, IIoT devices are susceptible to cyber-threats such as the leaking of important information, products becoming compromised, and damage to industrial controls. Recently, blockchain technology has been used to increase the trust between stakeholders collaborating in the supply chain in order to preserve privacy, ensure the provenance of material, provide machine-led maintenance, etc. In all cases, such industrial blockchains establish a novel foundation of trust for business transactions which could potentially streamline and expedite economic processes to a significant extent. This paper presents an examination of “Schloss”, an industrial blockchain system architecture designed for multi-factory environments. It proposes an innovative solution to increase trust in industrial networks by incorporating a fairness concept as a subsystem of an industrial blockchain. The proposed mechanism leverages the concept of taxes imposed on blockchain nodes to enforce ethical conduct and discipline among participants. In this paper, we propose a game theory-based mechanism to address security and trust difficulties in industrial networks. The mechanism, inspired by the ultimatum game, progressively punishes malicious actors to increase the cost of fraud, improve the compensation system, and utilise the reward reporting capabilities of blockchain technology to further discourage fraudulent activities. Furthermore, the blockchain’s incentive structure is utilised to reduce collusion and speed up the process of reaching equilibrium, thereby promoting a secure and trustworthy environment for industrial collaboration. 
The objective of this paper is to address the lack of trust among industrial partners and to introduce a solution that brings security and trust to the forefront of industrial blockchain applications.
The Industrial Internet of Things (IIoT) holds significant potential for improving efficiency, quality, and flexibility. Centralized, trust-based authentication techniques are unsuitable for distributed networks or subnets, as they introduce a single point of failure. In a decentralized system, however, more emphasis is needed on trust management, which presents significant challenges in ensuring security and trust in industrial devices and applications. To address these issues, industrial blockchain can make use of trustless and transparent technologies for devices, applications, and systems. By using a distributed ledger, blockchains can track devices and their data exchanges, improving relationships between trading partners and providing provenance along the supply chain. In this paper, we propose a model for cross-domain authentication between a blockchain-based infrastructure and industrial centralized networks outside the blockchain to ensure secure communication in industrial environments. Our model enables cross authentication for different sub-networks with different protocols or authentication methods while maintaining the transparency provided by the blockchain. The core concept is to build a bridge of trust that enables secure communication between different domains in the IIoT ecosystem. Our proposed model enables devices and applications in different domains to establish secure and trusted communication channels through the use of blockchain technology, providing an efficient and secure way to exchange data within the IIoT ecosystem. Our study presents a decentralized cross-domain authentication mechanism for field devices, which includes enhancements to the standard authentication system. To validate the feasibility of our approach, we developed a prototype and assessed its performance in a real-world industrial scenario.
By improving security and efficiency in industrial settings, this mechanism has the potential to advance this important area.
A Review on Digital Wallets and Federated Service for Future of Cloud Services Identity Management
(2023)
In today’s technology-driven era, managing digital identities has become a critical concern due to the widespread use of online services and digital devices. This has led to a fragmented landscape of digital identities, burdening individuals with multiple usernames, passwords, and authentication methods. To address this challenge, digital wallets have emerged as a promising solution. These wallets empower users to store, manage, and utilize their digital assets, including personal data, payment information, and credentials. Additionally, federated services have gained prominence, enabling users to access multiple services using a single digital identity. Gaia-X is an example of such a service, aiming to establish a secure and trustworthy data infrastructure. This paper examines digital identity management, focusing on the application of digital wallets and federated services. It explores the categorization of identities needed for different cloud services, considering their unique requirements and characteristics. Furthermore, it discusses the future requirements for digital wallets and federated identity management in the cloud, along with the associated challenges and benefits. The paper also introduces a categorization scheme for cloud services based on security and privacy requirements, demonstrating how different identity types can be mapped to each category.
In this paper, we present a study on the utilization of smart medical wearables and the user manuals of such devices. A total of 342 individuals provided input for 18 questions that address user behavior in the investigated context and the connections between various assessments and preferences. The presented work clusters individuals based on their professional relation to user manuals and analyzes the obtained results separately for these groups.
Digital transformation strengthens the interconnection of companies in order to develop optimized and better-customized, cross-company business models. These models require secure, reliable, and traceable evidence and monitoring of contractually agreed information to build trust between stakeholders. Blockchain technology using smart contracts allows the industry to establish trust and automate cross-company business processes without the risk of losing data control. A typical cross-company industry use case is equipment maintenance: machine manufacturers and service providers offer maintenance for their machines and tools in order to achieve high availability at low cost. The aim of this chapter is to demonstrate how maintenance use cases can be addressed by utilizing Hyperledger Fabric to build a chain of trust through hardened evidence logging of the maintenance process, achieving legal certainty. Contracts are digitized into smart contracts that automate business processes, increasing security and mitigating the error-proneness of those processes.
The common corpus optimization method of stop-word removal is based on the assumption that text tokens with a high occurrence frequency can be removed without affecting classification performance. Linguistic information about sentence structure is ignored, as are the preferences of the classification technology. We propose the Weighted Unimportant Part-of-Speech Model (WUP-Model) for token removal in the pre-processing of text corpora. The weighted relevance of a token is determined using classification relevance and classification performance impact. The WUP-Model uses linguistic information (part of speech) as the grouping criterion. Analogous to stop-word removal, we provide a set of irrelevant parts of speech (a WUP-Instance) for word removal. In a proof of concept, we created WUP-Instances for several classification algorithms. The evaluation showed significant advantages compared to classic stop-word removal: the tree-based classifier gained 25% in performance at the cost of a 65% increase in runtime, while the performance of the other classifiers decreased by between 0.2% and 2.4% as their runtime improved by between 4.4% and 24.7%. These results demonstrate the beneficial effects of the proposed WUP-Model.
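The core pre-processing step can be sketched as follows. The tag set, weights, and cutoff are toy assumptions illustrating the idea of a WUP-Instance, not the values derived in the evaluation:

```python
# Proof-of-concept of POS-based token removal in the spirit of the
# WUP-Model: drop tokens whose part of speech is listed as unimportant
# for a given classifier, optionally refined by per-POS relevance weights.
def wup_filter(tagged_tokens, unimportant_pos, weights=None, cutoff=0.5):
    """tagged_tokens: list of (token, pos) pairs.
    unimportant_pos: a WUP-Instance, i.e. the POS tags to drop outright.
    weights: optional per-POS relevance in [0, 1]; tags scoring below
    `cutoff` are dropped as well."""
    weights = weights or {}
    kept = []
    for token, pos in tagged_tokens:
        if pos in unimportant_pos:
            continue
        if weights.get(pos, 1.0) < cutoff:
            continue
        kept.append(token)
    return kept

# Example sentence with assumed coarse POS tags.
sentence = [("the", "DET"), ("machine", "NOUN"), ("quickly", "ADV"),
            ("overheats", "VERB"), ("and", "CONJ"), ("stops", "VERB")]
```

Unlike a fixed stop-word list, the same WUP-Instance generalises to unseen vocabulary because the decision is made on the part of speech, not the surface token.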
As machine learning becomes increasingly pervasive, its resource demands and financial implications escalate, necessitating energy and cost optimisations to meet stakeholder demands. Quality metrics for predictive machine learning models are abundant, but efficiency metrics remain rare. We propose a framework for efficiency metrics that enables the comparison of distinct efficiency types. A quality-focused efficiency metric is introduced that considers resource consumption, computational effort, and runtime in addition to prediction quality. The metric has been successfully tested for usability, plausibility, and compensation for dataset size and host performance. This framework enables informed decisions to be made about the use and design of machine learning in an environmentally responsible and cost-effective manner.
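A quality-per-effort metric in the spirit described above could look like the following sketch; the actual weighting in the proposed framework is not reproduced here, so the formula, units, and parameter names are our own assumptions.

```python
# Sketch of a quality-focused efficiency metric: prediction quality
# divided by a weighted sum of the resources spent to obtain it.
# The weights make the heterogeneous units (kWh, seconds, EUR)
# commensurable and encode stakeholder priorities.
def efficiency(quality, energy_kwh, runtime_s, cost_eur,
               w_energy=1.0, w_time=0.01, w_cost=1.0):
    """Higher is better: the same prediction quality achieved with
    lower energy, runtime, or cost yields a higher score."""
    effort = (w_energy * energy_kwh
              + w_time * runtime_s
              + w_cost * cost_eur)
    return quality / effort if effort > 0 else float("inf")
```

Such a ratio lets two models be compared on one axis: a model that matches another's accuracy at half the resource bill scores roughly twice as high.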
ARTHUR – Distributed Measuring System for Synchronous Data Acquisition from Different Data Sources
(2023)
In industrial manufacturing lines, different machines are well orchestrated and applied for their well-defined purposes. As each of these machines must be monitored and maintained, there are scenarios in which a data acquisition system brings enormous benefits. Since the cost of such professional systems is often not appropriate or feasible for research projects or prototyping, a proof of concept is often achieved by using end-user hardware. In this work, a distributed measurement system for supporting the collection of data is described with respect to AI-based projects for research and teaching. ARTHUR (meAsuRing sysTem witH distribUted sensoRs) is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are acoustic emission values, force-plate X-Y-Z force values, simple PLC switching signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The overall ARTHUR system is based on Raspberry Pis and consists of a master node, multiple independent measurement worker nodes, a streaming system realized with Redis, and a gateway that stores the data in the cloud. The major objectives of the ARTHUR system are scalability and support for low-cost measuring components while relying solely on open-source software. This work discusses the advantages and disadvantages of the hardware and software of this TCP/IP-based system.
On the way to the smart factory, manufacturing companies investigate the potential of machine learning approaches such as visual quality inspection, process optimisation, maintenance prediction, and more. In order to assess the influence of machine learning based systems on business-relevant key figures, many companies follow a test-before-invest path. This paper describes a novel and inexpensive distributed data acquisition system, ARTHUR (dAta collectoR sysTem witH distribUted sensoRs), which enables the collection of data for AI-based projects in research, education, and industry. ARTHUR is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are acoustic emission values, force-plate X-Y-Z force values, simple PLC signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The ARTHUR system consists of a master node, multiple measurement worker nodes, a local streaming system, and a gateway that stores the data in the cloud. The authors describe the hardware and software of this system and discuss its advantages and disadvantages.
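The master/worker pattern behind such synchronous acquisition can be sketched as follows, with a plain in-memory merge standing in for the Redis-based streaming system. The clock-offset handshake and all names are illustrative assumptions, not ARTHUR's actual protocol:

```python
# Sketch of synchronous multi-sensor acquisition: each worker stamps its
# samples with an offset to the master clock, so samples from different
# sensors can be merged onto one common time axis at the gateway.
import heapq

class Worker:
    def __init__(self, sensor_id, clock_offset_s):
        self.sensor_id = sensor_id
        self.offset = clock_offset_s   # assumed result of a master time sync

    def sample(self, local_time_s, value):
        """Package one reading on the master time axis."""
        return {"sensor": self.sensor_id,
                "t": local_time_s + self.offset,
                "value": value}

def merge_streams(*streams):
    """Gateway step: merge time-ordered per-sensor sample lists into one
    globally time-ordered stream before storing it in the cloud."""
    return list(heapq.merge(*streams, key=lambda s: s["t"]))
```

In the real system the per-sensor streams would flow through Redis over TCP/IP rather than Python lists; the sketch only shows why a shared time axis is the prerequisite for merging them.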