Nowadays, machine learning projects have become increasingly relevant to a variety of real-world use cases. Since the success of complex neural network models depends on many factors, a need for structured, machine-learning-centric project management arises. Due to the multitude of tools available for the different operational phases, responsibilities and requirements become increasingly unclear. In this work, Machine Learning Operations (MLOps) technologies and tools for every part of the overall project pipeline, as well as the roles involved, are examined and clearly defined. With a focus on the interconnectivity of specific tools and a comparison against well-selected MLOps requirements, model performance, input data, and system quality metrics are briefly discussed. By identifying aspects of machine learning that can be reused from project to project, open-source tools that help in specific parts of the pipeline, and possible combinations thereof, an overview of MLOps support is given. Deep learning has revolutionized the field of image processing, and building an automated machine learning workflow for object detection is of great interest to many organizations. To this end, a simple MLOps workflow for object detection with images is presented.
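To illustrate the kind of staged pipeline such an MLOps workflow formalizes, the following is a minimal, self-contained sketch. The stage names, the dictionary-based run log (standing in for an experiment tracker such as MLflow), and the quality-gate threshold are illustrative assumptions, not taken from the paper.

```python
# Hypothetical MLOps pipeline skeleton: each stage records its metadata so
# runs stay reproducible and comparable across projects.

def run_pipeline(raw_images, labels, quality_gate=0.8):
    run_log = {}  # stand-in for an experiment tracker such as MLflow

    # 1. Data validation: drop samples with missing labels.
    data = [(x, y) for x, y in zip(raw_images, labels) if y is not None]
    run_log["n_samples"] = len(data)

    # 2. Training: placeholder "model" that predicts the majority class.
    ys = [y for _, y in data]
    majority = max(set(ys), key=ys.count)
    model = lambda x: majority
    run_log["model"] = f"majority={majority}"

    # 3. Evaluation: accuracy on the training data (illustrative only).
    accuracy = sum(model(x) == y for x, y in data) / len(data)
    run_log["accuracy"] = accuracy

    # 4. Deployment gate: only promote models that pass the threshold.
    run_log["deployed"] = accuracy >= quality_gate
    return run_log
```

In a real object-detection setting each stage would be backed by a dedicated tool; the point here is only the fixed stage contract that makes the pipeline reusable from project to project.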
While the number of devices connected together as the Internet of Things (IoT) is growing, the demand for an efficient and secure model of resource discovery in IoT is increasing. An efficient resource discovery model distributes the registration and discovery workload among many nodes and allows resources to be discovered based on their attributes. In most cases this discovery ability should be restricted to a number of clients based on their attributes; otherwise, any client in the system can discover any registered resource. In a binary discovery policy, any client with the shared secret key can discover and decrypt the address data of a registered resource regardless of the client's attributes. In this paper we propose Attred, a decentralized resource discovery model using the Region-based Distributed Hash Table (RDHT) that allows secure and location-aware discovery of resources in an IoT network. Using Attribute-Based Encryption (ABE) and based on discovery policies predefined by the resources, Attred allows clients, solely by their inherent attributes, to discover the resources in the network. Attred distributes the workload of key generation and resource registration and reduces the risk of central authority management. In addition, some of the heavy computations in our proposed model can be securely distributed using secret sharing, which allows a more efficient resource registration without affecting the required security properties. The performance analysis results showed that the distributed computation can significantly reduce the computation cost while maintaining the functionality. The performance and security analysis results also showed that our model can efficiently provide the required security properties of discovery correctness, soundness, resource privacy, and client privacy.
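The secret-sharing idea used to distribute the heavy computations can be illustrated with a textbook Shamir scheme: a secret is split into shares so that any `threshold` of them reconstruct it, while fewer reveal nothing. This is a generic sketch, not the specific construction from the paper; the prime and parameters are illustrative.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for demo secrets

def split_secret(secret, n_shares, threshold):
    """Shamir's scheme: encode `secret` as the constant term of a random
    polynomial of degree threshold-1, evaluated at x = 1..n_shares."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any threshold-sized subset of nodes can jointly complete a registration computation without any single node (or a central authority) ever holding the full secret.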
Cylindrical grinding is an important process in the manufacturing industry. During this process, grinding burn may occur, which can render the workpiece worthless. In this work, a machine learning neural network approach is used to predict grinding burn based on the process parameters in order to prevent damage. A small dataset of 21 samples was gathered on a specific machine, always grinding the same element type with different process parameters. Each workpiece was labeled from 0 to 3 after the process, indicating the severity of grinding burn. To obtain a robust neural network model, the dataset was expanded through augmentation controlled by grinding experts to generate more samples for training. As a result, the model is able to predict the severity of grinding burn as a multiclass classification, and it turned out that even with little data, the model performed well.
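The expert-controlled augmentation step can be sketched as follows. The noise level, the 1-nearest-neighbour classifier standing in for the paper's neural network, and the feature values are all illustrative assumptions.

```python
import random

def augment(samples, n_copies=10, noise=0.02, seed=0):
    """Expert-controlled augmentation (sketch): jitter each process-parameter
    vector by a small relative noise while keeping its burn label (0-3).
    The 2% noise bound is an assumption, not a value from the paper."""
    rng = random.Random(seed)
    out = []
    for features, label in samples:
        out.append((features, label))
        for _ in range(n_copies):
            jittered = [f * (1 + rng.uniform(-noise, noise)) for f in features]
            out.append((jittered, label))
    return out

def predict(train, x):
    """1-nearest-neighbour stand-in for the paper's neural network model."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda s: dist(s[0], x))[1]
```

Starting from 21 labeled workpieces, even a modest 10x jitter yields a few hundred training samples while the expert-chosen noise bound keeps the labels physically plausible.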
Cloud Computing an der HFU
(2010)
Understanding Cloud Audits
(2012)
Cloud Resource Price System
(2014)
Towards an Ontological Representation of Condition Monitoring Knowledge in the Manufacturing Domain
(2018)
Machine learning applications, such as machine condition monitoring and predictive maintenance, have become state of the art in Industry 4.0. Decision trees are one of many machine learning algorithms used in the decision-making process. A new approach for creating distributed decision trees, called node-based parallelization, is presented. It allows data to be classified through a network of industrial devices, with each industrial device responsible for a single classification rule. Nodes that react incorrectly, for example due to an attack, are also taken into account using a variety of methods to keep the decision-making process correct and robust.
In Industry 4.0, machine learning approaches are state of the art for predictive maintenance, machine condition monitoring, and other applications. Distributed decision trees are one of the learning algorithms used for such tasks. A new node-based parallelization approach for their construction is presented, which allows data to be classified through a network of nodes. Attacks on the nodes are discussed based on different attack scenarios, and attack classifications are presented. A thorough analysis of protection measures is given so that the classification cannot be maliciously modified by an attacker. Different countermeasures are proposed and analyzed. A quorum-based system provides a good balance between computational overhead and the robustness of the algorithm.
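The core idea of node-based parallelization, one classification rule per device, can be sketched as a chain of hand-offs. The rule thresholds, feature names, and class labels below are illustrative, not taken from the papers.

```python
# Minimal sketch: each "industrial device" holds exactly one decision rule
# and forwards the sample to the next device until a leaf (class label)
# is reached.

class DeviceNode:
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right  # next device or class label

    def classify(self, sample):
        branch = self.left if sample[self.feature] <= self.threshold else self.right
        if isinstance(branch, DeviceNode):
            return branch.classify(sample)  # hand over to the next device
        return branch  # leaf: final class label

# Example network: two devices classifying machine condition.
device2 = DeviceNode("vibration", 0.7, "warning", "failure")
device1 = DeviceNode("temperature", 80.0, "ok", device2)
```

Because each rule lives on a separate device, a corrupted node affects only its own branch, which is what makes the quorum-based countermeasures discussed above applicable per node.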
Towards a Domain Specific Security Policy Language for Automatic Audit of Virtual Machine Images
(2012)
Cloud Utility Price Models
(2013)
The rise of digital twins in the manufacturing industry is accompanied by new possibilities: process automation, condition monitoring, real-time simulations, and quality and maintenance prediction are just a few of the advantages that can be realized. This paper takes a novel approach by extracting the fundamental knowledge of a data set from a production process and mapping it to an expert fuzzy rule set. Afterwards, new fundamental augmented data is generated by exploring the feature space of the previously generated fuzzy rule set. At the same time, a large number of artificial neural network (ANN) models with different hyperparameter configurations are created.
The best models are chosen, in line with the idea of survival of the fittest, and improved with the additional training data sets generated by the fuzzy rule simulation. It is shown that ANN models can be improved by adding fundamental knowledge represented by the discovered fuzzy rules. These models can represent digitized machines as digital twins. The architecture and effectiveness of the digital twin are evaluated within an Industry 4.0 use case.
Ensuring data quality is central to the digital transformation in industry. Business processes such as predictive maintenance or condition monitoring can be implemented or improved based on the available data. To guarantee high data quality, a single data validation system is usually used to validate the production data for further use. However, using a single system means an attacker needs only one successful attack to corrupt the whole system. We present a new approach in which a data validation system using multiple different validators minimizes the attacker's probability of success. The validators are arranged in clusters based on their properties. For each validation process, a challenge is issued that specifies which validators should perform the current validation. Validation results from the other validators are dropped. This ensures that anomalies can be detected during the validation process even if more than half of the validators are corrupted.
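The challenge-based selection plus majority vote can be sketched as follows. The hash-based selection rule and the simple boolean verdicts are assumptions for illustration; the paper's actual selection and clustering scheme may differ.

```python
import hashlib

def select_validators(challenge, validators, k):
    """Deterministically pick k validators for this challenge by ranking
    a hash of challenge + validator id (a stand-in for the paper's
    challenge mechanism)."""
    ranked = sorted(
        validators,
        key=lambda v: hashlib.sha256(f"{challenge}:{v}".encode()).hexdigest())
    return ranked[:k]

def validate(challenge, validators, verdicts, k=3):
    """Majority vote among the selected validators; verdicts from
    non-selected validators are dropped."""
    chosen = select_validators(challenge, validators, k)
    votes = [verdicts[v] for v in chosen]
    return votes.count(True) > len(votes) / 2
```

Because the attacker cannot predict which validators a future challenge will select, corrupting any fixed subset of them no longer guarantees a corrupted validation result.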
Container environments permeate all areas of computing, such as HPC, since they are lightweight and efficient and ease the deployment of software. However, due to the shared host kernel, their isolation is considered weak, so additional protection mechanisms are needed. This paper shows that neural networks can be used for anomaly detection by observing the behavior of containers through system call data. In more detail, the detection of anomalies in the file and directory paths used by system calls is evaluated to show its advantages and drawbacks.
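The paper trains neural networks on system-call data; as a much simpler illustration of the same idea, the sketch below learns the directory prefixes a container touches during normal operation and flags paths outside that profile. The prefix depth and the example paths are assumptions.

```python
def train_profile(normal_paths, depth=2):
    """Collect directory prefixes (up to `depth` components) observed
    while the container behaves normally."""
    profile = set()
    for path in normal_paths:
        parts = [p for p in path.split("/") if p]
        profile.add("/" + "/".join(parts[:depth]))
    return profile

def is_anomalous(profile, path, depth=2):
    """Flag a system-call path whose prefix was never seen in training."""
    parts = [p for p in path.split("/") if p]
    return ("/" + "/".join(parts[:depth])) not in profile
```

A neural network generalizes beyond such exact prefix matching, which is where the advantages and drawbacks evaluated in the paper come in, but the detection target (unusual file and directory paths in system calls) is the same.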
Combining Chronicle Mining and Semantics for Predictive Maintenance in Manufacturing Processes
(2020)
Distributed machine learning algorithms that employ Deep Neural Networks (DNNs) are widely used in Industry 4.0 applications, such as smart manufacturing. The layers of a DNN can be mapped onto different nodes located in the cloud, edge and shop floor for preserving privacy. The quality of the data that is fed into and processed through the DNN is of utmost importance for critical tasks, such as inspection and quality control. Distributed Data Validation Networks (DDVNs) are used to validate the quality of the data. However, they are prone to single points of failure when an attack occurs. This paper proposes QUDOS, an approach that enhances the security of a distributed DNN that is supported by DDVNs using quorums. The proposed approach allows individual nodes that are corrupted due to an attack to be detected or excluded when the DNN produces an output. Metrics such as corruption factor and success probability of an attack are considered for evaluating the security aspects of DNNs. A simulation study demonstrates that if the number of corrupted nodes is less than a given threshold for decision-making in a quorum, the QUDOS approach always prevents attacks. Furthermore, the study shows that increasing the size of the quorum has a better impact on security than increasing the number of layers. One merit of QUDOS is that it enhances the security of DNNs without requiring any modifications to the algorithm and can therefore be applied to other classes of problems.
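The security metric described above, the success probability of an attack given a quorum threshold, can be sketched as a hypergeometric tail probability: the chance that a randomly drawn quorum contains enough corrupted nodes to swing the decision. This mirrors the kind of analysis QUDOS performs; the exact formulas in the paper may differ.

```python
from math import comb

def attack_success_probability(n_nodes, n_corrupted, quorum_size, threshold):
    """Probability that a uniformly random quorum of `quorum_size` nodes
    contains at least `threshold` corrupted ones (hypergeometric tail)."""
    total = comb(n_nodes, quorum_size)
    bad = sum(
        comb(n_corrupted, k) * comb(n_nodes - n_corrupted, quorum_size - k)
        for k in range(threshold, min(n_corrupted, quorum_size) + 1))
    return bad / total
```

With this model one can compare the study's two levers directly: for a fixed corruption factor, growing `quorum_size` (with a proportional `threshold`) drives the tail probability down, which matches the finding that enlarging the quorum helps security more than adding layers.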
Training neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, and Siemens, offer such infrastructure. However, should one choose a cloud infrastructure or an on-premise system for a neural network application, and how can these systems be compared with one another? The recent popularity and widespread use of deep learning in various applications have created a need for benchmarking in this field. This paper investigates seven prominent machine learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. It shows that different application domains need slightly different resources and argues that there is no standard benchmark suite available that addresses these different application needs. We compare these benchmarks and summarize benchmark-related datasets, domains, and metrics. Finally, a concept of an ideal benchmark is sketched.
Potentials of Semantic Image Segmentation Using Visual Attention Networks for People with Dementia
(2021)
Due to the increasing number of dementia patients, it is time to include the care sector in digitization as well. Digital media, for example on tablets, can be used in memory care and has considerable potential for reminiscence therapy for people with dementia. The time-consuming assembly of digital media content has to be automated for the caregivers.
This work analyzes the potential of semantic image segmentation with Visual Attention Networks for reminiscence therapy sessions. These approaches enable the selection of digital images that match the patients' individual experiences and biographies. A detailed comparison of various Visual Attention Networks, evaluated by the BLEU score, is presented. The most promising networks for semantic image segmentation are VGG16 and VGG19.
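The BLEU metric used to compare the networks scores a generated description against a reference. As a simplified illustration, the sketch below computes unigram BLEU with a brevity penalty; real evaluations usually combine up to 4-gram precisions, and this is not the paper's exact evaluation code.

```python
from collections import Counter
from math import exp

def bleu1(candidate, reference):
    """Unigram BLEU: clipped word-overlap precision times a brevity
    penalty that punishes candidates shorter than the reference."""
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped counts
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / len(cand))
    return bp * precision
```

A network whose generated image description shares more words with the human reference, at a comparable length, scores closer to 1.0.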
In edge/fog computing infrastructures, resources and services are offloaded to the edge, and computations are distributed among different nodes instead of being transmitted to a centralized entity. Distributed Hash Table (DHT) systems provide a solution for organizing and distributing computation and storage without involving a trusted third party. However, the physical locations of nodes are not considered during the creation of the overlay, which causes some efficiency issues. In this paper, the Locality-aware Distributed Addressing (LADA) model is proposed, which can be adopted in distributed infrastructures to create an overlay that considers the physical locations of participating nodes. LADA aims to address the efficiency issues during the store and lookup processes in the DHT overlay. Additionally, it addresses the privacy issue in similar proposals and removes any possible set of fixed entities. Our studies showed that the proposed model is efficient and robust and is able to protect the privacy of the participating nodes' locations.
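One common way to make a DHT overlay locality-aware is to reserve the top bits of each node ID for a region code, so that physically nearby nodes occupy a contiguous range of the keyspace. The sketch below illustrates that general idea; the bit widths, layout, and function names are assumptions and not LADA's actual addressing scheme.

```python
import hashlib

def locality_aware_id(region_code, node_name, region_bits=8, id_bits=32):
    """Overlay ID sketch: the top `region_bits` encode the node's physical
    region, the remaining bits come from a hash of its name, so nodes in
    the same region are adjacent in the keyspace."""
    suffix_bits = id_bits - region_bits
    h = int(hashlib.sha256(node_name.encode()).hexdigest(), 16) % (1 << suffix_bits)
    return (region_code << suffix_bits) | h
```

Lookups for keys in a region can then be routed toward that region's ID range, cutting down on long physical detours; hiding which region a node is actually in is the additional privacy problem LADA addresses.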
Testing applications for SmartHome environments is quite complicated, since a real environment is not accessible, or its conditions are not controllable, during development time. The need to set up the entire hardware environment increases the complexity of these systems enormously. It is therefore helpful to simulate the SmartHome hardware components and environmental conditions (e.g. rain, heat, etc.). This paper presents an approach to improve the testing and demonstration of Internet of Things scenarios. A prototype (ScnSim: Scenario Simulator) was developed to set up such scenarios. Users of ScnSim can create their own scenarios using items (sensors/actuators) and rules, which control the sensors and actuators that make up the IoT environment. The simulator is meant to support users in testing IoT applications or configurations of SmartHome platforms like openHAB. In addition, ScnSim is meant to help demonstrate showcases, for example at trade fairs or as a proof of concept for a customer.
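The item-plus-rule idea behind such a scenario simulator can be sketched as a tiny rule engine: scenario rules map simulated environmental conditions onto actuator commands. The rule and item names below are hypothetical and not taken from ScnSim.

```python
# Illustrative rule engine: each rule is (condition, actuator, command).
# Conditions read the simulated environment, e.g. rain or temperature.

def apply_rules(environment, rules):
    """Evaluate every rule against the simulated environment and return
    the resulting actuator commands."""
    commands = {}
    for condition, actuator, command in rules:
        if condition(environment):
            commands[actuator] = command
    return commands

# Example scenario: close the shutters when it rains, heat when cold.
rules = [
    (lambda env: env["rain"], "shutter", "close"),
    (lambda env: env["temperature"] < 18, "heating", "on"),
]
```

Driving a platform like openHAB with such simulated readings lets a developer step through rain or heat scenarios on demand instead of waiting for the real conditions to occur.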