A Fog-Cloud Computing Infrastructure for Condition Monitoring and Distributing Industry 4.0 Services
(2019)
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
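A weighted aggregation of explanation-quality components such as the ones named above (spatial precision, focus overlap, relevance accuracy) could look like the following sketch. This is an illustrative formula only, not the ESA/WESA definition from the paper; component names and the linear weighting are assumptions.

```python
import numpy as np

def wesa(spatial_precision, focus_overlap, relevance_accuracy,
         weights=(1.0, 1.0, 1.0)):
    """Weighted aggregate of three explanation-quality components.

    All components are assumed to lie in [0, 1]; the weights express
    their relative importance. Illustrative only - not the formula
    defined in the paper.
    """
    components = np.array([spatial_precision, focus_overlap, relevance_accuracy])
    w = np.array(weights, dtype=float)
    return float(np.dot(w, components) / w.sum())
```

With equal weights this reduces to a plain mean; unequal weights let a practitioner emphasize, e.g., relevance accuracy over spatial precision.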
A Review on Digital Wallets and Federated Service for Future of Cloud Services Identity Management
(2023)
In today’s technology-driven era, managing digital identities has become a critical concern due to the widespread use of online services and digital devices. This has led to a fragmented landscape of digital identities, burdening individuals with multiple usernames, passwords, and authentication methods. To address this challenge, digital wallets have emerged as a promising solution. These wallets empower users to store, manage, and utilize their digital assets, including personal data, payment information, and credentials. Additionally, federated services have gained prominence, enabling users to access multiple services using a single digital identity. Gaia-X is an example of such a service, aiming to establish a secure and trustworthy data infrastructure. This paper examines digital identity management, focusing on the application of digital wallets and federated services. It explores the categorization of identities needed for different cloud services, considering their unique requirements and characteristics. Furthermore, it discusses the future requirements for digital wallets and federated identity management in the cloud, along with the associated challenges and benefits. The paper also introduces a categorization scheme for cloud services based on security and privacy requirements, demonstrating how different identity types can be mapped to each category.
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
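The gradient-free nature of Eigen-CAM mentioned above can be sketched in a few lines: the heatmap is the projection of a convolutional feature map onto its first principal component. This minimal NumPy version (centering and ReLU-style clipping are common but implementation-dependent choices) is a sketch, not the reference implementation.

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM heatmap from a conv feature map of shape (C, H, W).

    Projects the activations onto their first principal component; no
    gradients are required, so the method applies to YOLO-style
    detectors without new layers. Sketch only.
    """
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T        # (H*W, C) feature rows
    flat = flat - flat.mean(axis=0)               # center the features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vt[0]).reshape(h, w)            # projection onto PC1
    cam = np.maximum(cam, 0)                      # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                     # normalize to [0, 1]
    return cam
```

Grad-CAM, by contrast, weights the same activation channels by pooled gradients of a target score, which is why it needs a backward pass.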
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Alongside these AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are promising for quality inspection in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focus on visual quality control (VQC) for defect detection. Research addressing broader VQA practices, like process optimization, predictive maintenance, or root cause analysis, is rarer, and papers that utilize XAI methods are cited least often. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making across industries.
The digital transformation of companies is expected to increase the digital interconnection between different companies in order to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable, and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators, and service providers. This paper discusses how the major requirements for building hybrid business models can be tackled by a blockchain, for building a chain of trust, and by smart contracts, for digitized contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for automating the workflows defined in contracts. Furthermore, it is shown that the number of failures is significantly reduced by using these contracts and a blockchain.
Evolutionary strategy is increasingly used for optimization in various machine learning problems. It scales very well, even to high-dimensional problems, and its ability to globally self-optimize in flexible ways provides new and exciting opportunities when combined with more recent machine learning methods. This paper describes a novel approach for the optimization of models with a data-driven evolutionary strategy. The optimization can be applied directly as a preprocessing step and is therefore independent of the machine learning algorithm used. The experimental analysis of six different use cases shows that, on average, better results are attained than without the evolutionary strategy. Furthermore, it is shown that the best individual models are also achieved with the help of the evolutionary strategy. The six use cases were of different complexity, which reinforces the idea that the approach is universal and does not depend on a specific use case.
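The core loop of an evolutionary strategy is compact enough to sketch. The following is a generic (mu + lambda) ES minimizing an arbitrary fitness function, not the authors' data-driven variant; population sizes, the fixed mutation step, and the sphere-function example are all illustrative assumptions.

```python
import numpy as np

def evolve(fitness, dim, pop=20, parents=5, gens=50, sigma=0.3, seed=0):
    """Minimal (mu + lambda) evolutionary strategy.

    `fitness` is minimized over real-valued vectors. Elites survive each
    generation, so the best score never worsens. Sketch only.
    """
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(x) for x in population])
        elite = population[np.argsort(scores)[:parents]]       # mu best
        children = np.repeat(elite, pop // parents, axis=0)
        children = children + rng.normal(scale=sigma, size=children.shape)
        population = np.vstack([elite, children])              # mu + lambda
    scores = np.array([fitness(x) for x in population])
    return population[np.argmin(scores)]

# Example: minimize the sphere function, whose optimum is the zero vector.
best = evolve(lambda x: float(np.sum(x * x)), dim=3)
```

Used as a preprocessing step, the vector being evolved would encode, e.g., feature weights or transformation parameters rather than the solution itself.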
Up until now, it has been shown that big parts of so-called Industry 4.0 are impacted by Machine Learning (ML) in some way or another. In many shopfloor situations, different sensors are involved and all data is eventually structured, accumulated, and prepared for application in various ML-based scenarios, e.g., predictive maintenance of a machine, quality monitoring of manufactured workpieces, or handling domain-specific aspects of the respective fabricator or product. As the physical environment of a Cyber Physical System (CPS) can change rapidly, the overall Data Acquisition (DAQ) process and ML training are impacted, too. This work focuses on datasets that consist of small amounts of tabular information and how to utilize them in image-based Neural Networks (NN) with respect to meta learning and multimodal transformations. Therefore, the conceptual utilization of a DAQ system in industrial environments is discussed regarding a variety of techniques for preprocessing and generating visual material from multimodal data. The outcome of such operations is a new dataset, which is then applied in model training. The presented approach is three-fold. First, by analysing the concept of predicting the similarity of structured and numerical data in different datasets, indicators of the feasibility of applying the methodology in related but more sophisticated learning scenarios can be gained. Although ongoing time-series data differs from simple multi-class data in its chronological dimension, basic classification concepts are applied to it and evaluated. In order to extend the similarity prediction with a temporal component, the discussed methods are extended by multimodal transformations and a subsequent utilization in Siamese Neural Networks (SNN). By discussing the concept of applying visual representations of structured time-series data in a meta-learning context, known trends and historic information can be utilized for generating real-world test samples and predicting their validity at inference.
ARTHUR – Distributed Measuring System for Synchronous Data Acquisition from Different Data Sources
(2023)
In industrial manufacturing lines, different machines are well orchestrated and applied for their well-defined purposes. As each of these machines must be monitored and maintained in the first place, there are scenarios in which a Data Acquisition system brings enormous benefits. Since the cost of such professional systems is often not appropriate or feasible for research projects or prototyping, a proof of concept is often achieved with end-user hardware. In this work, a distributed measurement system for supporting the collection of data is described with respect to AI-based projects for research and teaching. ARTHUR (meAsuRing sysTem witH distribUted sensoRs) is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force-plate X-Y-Z force values, simple PLC switching signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The overall ARTHUR system is based on Raspberry Pis and consists of a master node, multiple independent measurement worker nodes, a streaming system realized with Redis, and a gateway that stores the data in the cloud. The major objectives of the ARTHUR system are scalability and support for low-cost measuring components while solely applying open-source software. The work at hand discusses the advantages and disadvantages regarding the hardware and software of this TCP/IP-based system.
On the way to the smart factory, manufacturing companies investigate the potential of Machine Learning approaches like visual quality inspection, process optimisation, maintenance prediction, and more. In order to be able to assess the influence of Machine Learning based systems on business-relevant key figures, many companies go down the path of test before invest. This paper describes a novel and inexpensive distributed Data Acquisition System, ARTHUR (dAta collectoR sysTem witH distribUted sensoRs), to enable the collection of data for AI-based projects for research, education, and industry. ARTHUR is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force-plate X-Y-Z force values, simple PLC signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The ARTHUR system consists of a master node, multiple measurement worker nodes, a local streaming system, and a gateway that stores the data in the cloud. The authors describe the hardware and software of this system and discuss its advantages and disadvantages.
While the number of devices connected together as the Internet of Things (IoT) is growing, the demand for an efficient and secure model of resource discovery in IoT is increasing. An efficient resource discovery model distributes the registration and discovery workload among many nodes and allows resources to be discovered based on their attributes. In most cases this discovery ability should be restricted to a number of clients based on their attributes; otherwise, any client in the system can discover any registered resource. In a binary discovery policy, any client with the shared secret key can discover and decrypt the address data of a registered resource regardless of the attributes of the client. In this paper we propose Attred, a decentralized resource discovery model using the Region-based Distributed Hash Table (RDHT) that allows secure and location-aware discovery of resources in an IoT network. Using Attribute-Based Encryption (ABE) and based on discovery policies predefined by the resources, Attred allows clients, based solely on their inherent attributes, to discover the resources in the network. Attred distributes the workload of key generation and resource registration and reduces the risk of central authority management. In addition, some of the heavy computations in our proposed model can be securely distributed using secret sharing, which allows a more efficient resource registration without affecting the required security properties. The performance analysis results showed that the distributed computation can significantly reduce the computation cost while maintaining the functionality. The performance and security analysis results also showed that our model can efficiently provide the required security properties of discovery correctness, soundness, resource privacy, and client privacy.
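The region-scoped lookup idea behind an RDHT can be sketched with plain consistent hashing: scoping the key hash to a region keeps discovery traffic local. Everything here (node names, the region encoding, the ring walk) is illustrative; Attred additionally encrypts registrations under ABE policies, which is omitted.

```python
import hashlib

def node_for(key, nodes, region):
    """Locate the node responsible for `key` within a region.

    Region-scoped consistent hashing: the key digest includes the region
    identifier, so lookups for the same resource stay inside one region's
    ring. Sketch only - not Attred's actual protocol.
    """
    digest = hashlib.sha256(f"{region}:{key}".encode()).hexdigest()
    ring = sorted(nodes, key=lambda n: hashlib.sha256(n.encode()).hexdigest())
    # walk the ring and pick the first node whose hash follows the key's
    for node in ring:
        if hashlib.sha256(node.encode()).hexdigest() >= digest:
            return node
    return ring[0]  # wrap around the ring
```

Because the mapping is deterministic, any client that knows the region's node set can resolve a resource without a central registry.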
The Industrial Internet of Things (IIoT) holds significant potential for improving efficiency, quality, and flexibility. Decentralized systems cannot rely on trust-based centralized authentication techniques, which are unsuitable for distributed networks or subnets because they introduce a single point of failure. In a decentralized system, more emphasis is therefore needed on trust management, which presents significant challenges in ensuring security and trust in industrial devices and applications. To address these issues, industrial blockchains can provide trustless and transparent technologies for devices, applications, and systems. By using a distributed ledger, blockchains can track devices and their data exchanges, improving relationships between trading partners and making the supply chain verifiable. In this paper, we propose a model for cross-domain authentication between a blockchain-based infrastructure and industrial centralized networks outside the blockchain to ensure secure communication in industrial environments. Our model enables cross-authentication for different sub-networks with different protocols or authentication methods while maintaining the transparency provided by the blockchain. The core concept is to build a bridge of trust that enables secure communication between different domains in the IIoT ecosystem. Our proposed model enables devices and applications in different domains to establish secure and trusted communication channels through the use of blockchain technology, providing an efficient and secure way to exchange data within the IIoT ecosystem. Our study presents a decentralized cross-domain authentication mechanism for field devices, which includes enhancements to the standard authentication system. To validate the feasibility of our approach, we developed a prototype and assessed its performance in a real-world industrial scenario. By improving security and efficiency in industrial settings, this mechanism has the potential to inspire further work in this important area.
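The "bridge of trust" idea can be sketched as a challenge-response check against key material anchored on a ledger. The dictionary standing in for the ledger, the domain names, and the HMAC-based credential are all illustrative assumptions; the paper's mechanism is more elaborate.

```python
import hashlib, hmac, secrets

# Illustrative trust bridge: a ledger (a dict stands in for the chain)
# stores the key fingerprint of each external domain; the gateway checks
# a challenge-response before admitting a device from that domain.
ledger = {}

def register_domain(domain, key):
    ledger[domain] = hashlib.sha256(key).hexdigest()   # on-chain fingerprint

def authenticate(domain, key, challenge, response):
    if ledger.get(domain) != hashlib.sha256(key).hexdigest():
        return False                                    # unknown or changed key
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)      # constant-time compare

key = secrets.token_bytes(32)
register_domain("plant-a", key)
challenge = secrets.token_bytes(16)
response = hmac.new(key, challenge, hashlib.sha256).digest()
```

The ledger makes key registration transparent and tamper-evident, while the per-session challenge prevents replay of old responses.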
Cloud Computing at HFU
(2010)
Cloud Resource Price System
(2014)
Cloud Utility Price Models
(2013)
Combining Chronicle Mining and Semantics for Predictive Maintenance in Manufacturing Processes
(2020)
As machine learning becomes increasingly pervasive, its resource demands and financial implications escalate, necessitating energy and cost optimisations to meet stakeholder demands. Quality metrics for predictive machine learning models are abundant, but efficiency metrics remain rare. We propose a framework for efficiency metrics that enables the comparison of distinct efficiency types. A quality-focused efficiency metric is introduced that considers resource consumption, computational effort, and runtime in addition to prediction quality. The metric has been successfully tested for usability, plausibility, and compensation for dataset size and host performance. This framework enables informed decisions to be made about the use and design of machine learning in an environmentally responsible and cost-effective manner.
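The shape of such a quality-per-resource metric can be sketched as follows. The resource terms, their linear combination, and the parameter names are assumptions for illustration; the paper's metric (including its dataset-size and host-performance compensation) differs.

```python
def efficiency(quality, energy_wh, runtime_s, cost_eur,
               weights=(1.0, 1.0, 1.0)):
    """Illustrative quality-per-resource efficiency score (higher is better).

    Divides prediction quality by a weighted sum of energy, runtime, and
    cost. Sketch only - not the metric defined in the paper.
    """
    we, wt, wc = weights
    resources = we * energy_wh + wt * runtime_s + wc * cost_eur
    return quality / resources if resources > 0 else float("inf")
```

Two models of equal quality are then ranked by their resource footprint, and a slightly weaker but much cheaper model can win, which is exactly the trade-off such metrics are meant to expose.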
Training of neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, and Siemens, provide such infrastructure. However, should one choose a cloud infrastructure or an on-premise system for a neural network application, and how can these systems be compared with one another? This paper investigates seven prominent Machine Learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. The recent popularity and widespread use of Deep Learning in various applications have created a need for benchmarking in this field. This paper shows that these application domains need slightly different resources and argues that there is no standard benchmark suite available that addresses these different application needs. We compare these benchmarks and summarize benchmark-related datasets, domains, and metrics. Finally, a concept of an ideal benchmark is sketched.
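A metric common to several of the surveyed suites is throughput (samples per second) of a training or inference step. A minimal, framework-agnostic measurement could look like this; the warm-up count, iteration count, and the callable interface are illustrative choices, not prescribed by any of the benchmarks named above.

```python
import time

def throughput(step_fn, batch_size, warmup=2, iters=10):
    """Samples-per-second of one training/inference step.

    `step_fn` runs a single step on one batch; warm-up iterations are
    excluded so one-off start-up costs do not distort the measurement.
    """
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed
```

Time-to-quality benchmarks like DAWNBench instead fix a target accuracy and measure wall-clock time to reach it, which is why a single throughput number cannot compare systems across all the suites.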
Comparison of Visual Attention Networks for Semantic Image Segmentation in Reminiscence Therapy
(2022)
Due to the steadily increasing age of the entire population, the number of dementia patients is steadily growing. Reminiscence therapy is an important aspect of dementia care, and it is crucial to include this area in digitization as well. Modern reminiscence sessions consist of digital media content specifically tailored to a patient's biographical needs. To enable an automatic selection of this content, the use of Visual Attention Networks for Semantic Image Segmentation is evaluated in this work. A detailed comparison of various Neural Networks is shown, evaluated with the Metric for Evaluation of Translation with Explicit Ordering (METEOR) in addition to the Bilingual Evaluation Understudy (BLEU) score. The most promising Visual Attention Network consists of an Xception Network as encoder and a Gated Recurrent Unit Network as decoder.
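The BLEU score used for evaluation above can be illustrated in its simplest unigram form. This is a deliberate simplification (full BLEU averages higher-order n-gram precisions and supports multiple references), shown only to make the metric concrete.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU with brevity penalty for a single reference.

    Inputs are whitespace-tokenized sentences. Counter intersection
    implements clipped unigram matching; the brevity penalty punishes
    candidates shorter than the reference.
    """
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped matches
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

METEOR extends this idea with stemming, synonym matching, and an explicit word-order penalty, which is why the paper reports both.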
Container environments permeate all areas of computing, such as HPC, since they are lightweight, efficient, and ease the deployment of software. However, due to the shared host kernel, their isolation is considered to be weak, so additional protection mechanisms are needed. This paper shows that neural networks can be used for anomaly detection by observing the behavior of containers through system call data. In more detail, the detection of anomalies in file and directory paths used by system calls is evaluated to show its advantages and drawbacks.
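The underlying idea (learn which paths a container normally touches, flag deviations) can be shown with a toy rule-based stand-in for the neural detector. The prefix depth and class design are illustrative assumptions, not the paper's model.

```python
class PathAnomalyDetector:
    """Toy stand-in for the neural detector described above: it learns
    the directory prefixes a container touches during normal operation
    and flags system calls on paths outside that set."""

    def __init__(self, depth=2):
        self.depth = depth        # how many leading path components to keep
        self.known = set()

    def _prefix(self, path):
        parts = path.strip("/").split("/")
        return "/" + "/".join(parts[: self.depth])

    def fit(self, paths):
        """Record prefixes observed during benign container behavior."""
        self.known.update(self._prefix(p) for p in paths)

    def is_anomalous(self, path):
        return self._prefix(path) not in self.known
```

A neural model replaces the hard set-membership test with a learned score, which generalizes to unseen but plausible paths instead of flagging every novelty.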
Data processed in context is more meaningful, easier to understand, and has higher information content, since it derives its semantic meaning from the surrounding context; this holds even in the field of acoustic signal processing. In this work, a Deep Learning based approach using Ensemble Neural Networks to integrate context into a learning system is presented. For this purpose, different use cases are considered and the method is demonstrated using acoustic signal processing of machine sound data for valves, pumps, and slide rails. Mel-spectrograms are used to train convolutional neural networks in order to analyse acoustic data using image processing techniques.
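The simplest form of the ensemble integration described above is averaging the class probabilities of several context-specific models. This sketch assumes each model is a callable returning a probability vector; the paper's ensemble design may differ.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class probabilities of several models and pick the
    most likely class. `models` are callables returning probability
    vectors of equal length. Illustrative sketch only."""
    probs = np.mean([m(x) for m in models], axis=0)
    return int(np.argmax(probs)), probs
```

With one CNN per machine type or acoustic context, the averaged prediction softens individual-model errors on sounds near a context boundary.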
Machine learning applications, like machine condition monitoring, predictive maintenance, and others, have become state of the art in Industry 4.0. Decision trees are one of many machine learning algorithms used for the decision-making process. A new approach for creating distributed decision trees, called node-based parallelization, is presented. It allows data to be classified through a network of industrial devices, where each industrial device is responsible for a single classification rule. Nodes that react incorrectly, for example due to an attack, are also taken into account using a variety of methods to keep the decision-making process correct and robust.
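Node-based parallelization can be sketched as a decision tree whose internal nodes live on separate devices, with classification as a walk across them. Device roles, rules, thresholds, and labels below are invented for illustration; the robustness mechanisms against faulty nodes are omitted.

```python
# Each device holds exactly one rule of the tree; classification hops
# from device to device until a class label (a plain string) is reached.
class DeviceNode:
    def __init__(self, rule, if_true, if_false):
        self.rule = rule            # predicate evaluated on this device
        self.if_true = if_true      # next device, or a class label
        self.if_false = if_false

def classify(sample, node):
    while isinstance(node, DeviceNode):
        node = node.if_true if node.rule(sample) else node.if_false
    return node                     # reached a leaf label

# Hypothetical two-rule tree for a machine-wear check.
vibration_node = DeviceNode(lambda s: s["vibration"] > 0.7, "worn", "ok")
root = DeviceNode(lambda s: s["temp"] > 80, vibration_node, "ok")
```

Distributing the rules this way means no single device holds the whole model, which is also why misbehaving nodes must be detected and compensated for.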
The rise of digital twins in the manufacturing industry is accompanied by new possibilities: process automation, condition monitoring, real-time simulation, and quality and maintenance prediction are just a few of the advantages that can be realized. This paper takes a novel approach by extracting the fundamental knowledge of a data set from a production process and mapping it to an expert fuzzy rule set. Afterwards, new fundamental augmented data is generated by exploring the feature space of the previously generated fuzzy rule set. At the same time, a high number of artificial neural network (ANN) models with different hyperparameter configurations are created. The best models are chosen, in line with the idea of survival of the fittest, and improved with the additional training data sets generated by the fuzzy rule simulation. It is shown that ANN models can be improved by adding fundamental knowledge represented by the discovered fuzzy rules. These models can represent digitized machines as digital twins. The architecture and effectiveness of the digital twin are evaluated within an Industry 4.0 use case.
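Generating augmented training data from a fuzzy rule can be sketched as sampling the feature space and labelling each point by its rule membership. The triangular membership function, the "high temperature implies scrap" rule, and all numbers are hypothetical; real rule sets would be mined from the production data as described above.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function rising from a to its peak at b
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def augment(n, seed=0):
    """Generate n labelled samples from a hypothetical expert fuzzy rule:
    'high temperature implies scrap'. Sketch only."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        temp = rng.uniform(0, 100)
        scrap = tri(temp, 60, 85, 100)      # membership of 'high temp'
        data.append((temp, 1 if scrap > 0.5 else 0))
    return data
```

The augmented samples cover regions of the feature space the original data set may have missed, which is what lets the fuzzy knowledge improve the ANN models.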
Delegated Audit of Cloud Provider Chains Using Provider Provisioned Mobile Evidence Collection
(2017)
Nowadays, machine learning projects have become more and more relevant to various real-world use cases. The success of complex Neural Network models depends upon many factors, so the requirement for structured and machine learning-centric project development management arises. Due to the multitude of tools available for different operational phases, responsibilities and requirements become more and more unclear. In this work, Machine Learning Operations (MLOps) technologies and tools for every part of the overall project pipeline, as well as the roles involved, are examined and clearly defined. Focusing on the interconnectivity of specific tools and their comparison against well-selected MLOps requirements, model performance, input data, and system quality metrics are briefly discussed. By identifying aspects of machine learning that can be reused from project to project, open-source tools that help in specific parts of the pipeline, and possible combinations, an overview of support in MLOps is given. Deep learning has revolutionized the field of image processing, and building an automated machine learning workflow for object detection is of great interest for many organizations. For this, a simple MLOps workflow for object detection with images is portrayed.
Supervised object detection models are trained to recognize certain objects. These models are classified into two types: single-stage detectors and two-stage detectors. Single-stage detectors need just one pass through the model to predict all the bounding boxes, whereas two-stage detectors must first estimate the image regions where an object could be located. Due to their speed and simplicity, single-stage anchor-based models are used in many industrial settings. Training such models requires bounding boxes that describe the spatial location of an object, which are usually drawn by an expert. However, the question remains: how much area should be considered when drawing the bounding boxes? In this paper, we demonstrate the effects that the size and placement of a rectangular bounding box can have on the performance of anchor-based models. For this, we first perform experiments on a synthetically generated binary dataset and then on a real-world object detection dataset. Our results show that fixing the size of the bounding boxes can help improve the performance of the model in the case of single-class object detection (approximately 50% improvement in mAP@[.5:.95] for the real-world dataset). Furthermore, we also demonstrate how freely available tools can be combined to obtain the best possible semi-automated object labeling pipeline.
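The fixed-size-box experiment described above amounts to replacing each hand-drawn box with a box of constant size sharing its centre, clamped to the image. This sketch uses an (x_min, y_min, x_max, y_max) convention; the paper's exact box sizes are dataset-specific.

```python
def fix_box_size(box, size, img_w, img_h):
    """Replace a variable-sized box with a fixed-size square box that
    shares its centre, shifted as needed to stay inside the image.

    `box` is (x_min, y_min, x_max, y_max). Illustrative sketch of the
    experiment described above.
    """
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    half = size / 2
    x0 = min(max(cx - half, 0), img_w - size)   # clamp to image width
    y0 = min(max(cy - half, 0), img_h - size)   # clamp to image height
    return (x0, y0, x0 + size, y0 + size)
```

Constant box sizes make the regression targets easier for anchor-based heads, which matches the single-class improvement the paper reports.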
Enormous potential of artificial intelligence (AI) exists in numerous products and services, especially in healthcare and medical technology. Explainability is a central prerequisite for certification procedures around the world and for the fulfilment of transparency obligations. Explainability tools increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but lack precision.
This paper adapts FastCAM for the domain of detection of medical instruments in endoscopy images. The results show that the Domain Adapted (DA)-FastCAM provides better results for the focus of the model than standard FastCAM weights.
Operations within a Cyber Physical System (CPS) environment are naturally diverse, and the resulting data sets include complex relations between the sensors of the shopfloor device setup and their configuration. As Machine Learning (ML) can increase the success of industrial plants in a variety of cases, like smart controlling, intrusion detection, or predictive maintenance, clarifying responsibilities and operations for the whole lifecycle supports evaluating the potentially feasible scenarios. In this work, the need for highly configurable and flexible modules is demonstrated by depicting the complex possibilities of extending simple Machine Learning Operations (MLOps) pipelines with additional data sources, e.g., sensors. In addition to a module's core functionality, arbitrary evaluation logic or data-structure-specific anomaly detection can be integrated into the pipeline. With the creation of audit trails for all operational modules, automated reports can be generated to increase the accountability of the different physical devices and the related data processing. The concept is evaluated in the context of the project Collaborative Smart Contracting Platform for digital value-added Networks (KOSMoS), where a sensor is part of an ML pipeline and audit trails are realized using Blockchain (BC) technology.
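The tamper-evidence property that the blockchain contributes to the audit trails can be illustrated with a hash chain: each entry commits to its predecessor, so altering any record invalidates everything after it. This in-memory sketch omits distribution and consensus, which is exactly what the blockchain adds in KOSMoS.

```python
import hashlib, json

class AuditTrail:
    """Hash-chained audit log for pipeline modules (illustrative sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, module, event):
        """Append an entry that commits to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"module": module, "event": event, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Re-derive every hash; any edited entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("module", "event", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

Anchoring the latest chain head on a ledger lets an external auditor check the whole trail without trusting the machine that produced it.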