Document type
- Conference Proceeding (135)
Keywords
- Cloud computing (22)
- Security (18)
- Industry 4.0 (12)
- Privacy (8)
- Monitoring (7)
- Blockchain (6)
- Audit (5)
- Cloud Computing (5)
- OSGi (5)
- AAL (4)
A Fog-Cloud Computing Infrastructure for Condition Monitoring and Distributing Industry 4.0 Services
(2019)
Cloud Resource Price System
(2014)
Cloud Utility Price Models
(2013)
Towards an Ontological Representation of Condition Monitoring Knowledge in the Manufacturing Domain
(2018)
Towards a Domain Specific Security Policy Language for Automatic Audit of Virtual Machine Images
(2012)
Container environments permeate all areas of computing, including HPC, since they are lightweight, efficient, and ease the deployment of software. However, due to the shared host kernel, their isolation is considered weak, so additional protection mechanisms are needed. This paper shows that neural networks can be used for anomaly detection by observing the behavior of containers through system call data. In more detail, the detection of anomalies in the file and directory paths used by system calls is evaluated to show its advantages and drawbacks.
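The idea described above can be illustrated with a deliberately simple sketch: instead of a neural network, a plain character n-gram frequency profile of file paths seen during normal container operation is built, and unfamiliar paths receive a higher anomaly score. All paths below are invented for illustration; this is not the paper's model.

```python
import math
from collections import Counter

def ngrams(path, n=3):
    """Character n-grams of a file path, used as simple features."""
    return [path[i:i + n] for i in range(len(path) - n + 1)]

def train_profile(paths, n=3):
    """Build a frequency profile of n-grams seen during normal operation."""
    counts = Counter(g for p in paths for g in ngrams(p, n))
    return counts, sum(counts.values())

def anomaly_score(path, profile, n=3):
    """Average negative log-probability of the path's n-grams;
    unseen n-grams receive an add-one-style penalty."""
    counts, total = profile
    grams = ngrams(path, n)
    if not grams:
        return 0.0
    score = 0.0
    for g in grams:
        p = (counts.get(g, 0) + 1) / (total + len(counts) + 1)
        score += -math.log(p)
    return score / len(grams)

normal = ["/var/lib/app/data.db", "/var/lib/app/cache/page1",
          "/etc/app/config.yaml", "/var/lib/app/cache/page2"]
profile = train_profile(normal)
benign = anomaly_score("/var/lib/app/cache/page3", profile)
suspicious = anomaly_score("/proc/self/mem", profile)
# A path unlike anything seen during training scores noticeably higher.
```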
In edge/fog computing infrastructures, resources and services are offloaded to the edge, and computations are distributed among different nodes instead of being transmitted to a centralized entity. Distributed Hash Table (DHT) systems provide a way to organize and distribute the computations and storage without involving a trusted third party. However, the physical locations of nodes are not considered during the creation of the overlay, which causes efficiency issues. In this paper, a Locality-aware Distributed Addressing (LADA) model is proposed that can be adopted in distributed infrastructures to create an overlay that considers the physical locations of participating nodes. LADA aims to address the efficiency issues in the store and lookup processes of the DHT overlay. Additionally, it addresses the privacy issue found in similar proposals and removes any possible set of fixed entities. Our studies showed that the proposed model is efficient, robust, and able to protect the privacy of the locations of the participating nodes.
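For orientation, a minimal Chord-style DHT ring (not LADA itself) can be sketched as follows: each key is stored on its successor node on a hash ring. Note that placement depends only on hashes, not on physical location, which is exactly the efficiency gap a locality-aware scheme targets. Node names are invented.

```python
import hashlib
from bisect import bisect_left

def ring_id(name, bits=16):
    """Hash a node or key name onto a small identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class Ring:
    """Minimal Chord-style ring: a key is stored on its successor node."""
    def __init__(self, nodes):
        self.ids = sorted(ring_id(n) for n in nodes)
        self.by_id = {ring_id(n): n for n in nodes}

    def successor(self, key):
        """First node whose id is >= the key's id, wrapping around."""
        i = bisect_left(self.ids, ring_id(key))
        return self.by_id[self.ids[i % len(self.ids)]]

nodes = ["edge-berlin", "edge-paris", "edge-tokyo"]
ring = Ring(nodes)
owner = ring.successor("sensor-42/readings")
# 'owner' is chosen purely by hash order, regardless of where the
# requesting node physically sits -- the issue LADA addresses.
```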
Software Defined Privacy
(2017)
Software Defined Privacy
(2016)
Real time In-Situ Quality Monitoring of Grinding Process using Microtechnology based Sensor Fusion
(2020)
In situ Qualitätsbeurteilung von Schleifprozessen mittels Mikrosystemtechnik basierter Sensorfusion (In-situ quality assessment of grinding processes using microsystem-technology-based sensor fusion)
(2020)
AAL applications are designed for elderly people and collect personally identifiable information (PII), e.g. health data. During normal operation, these data should be kept private, but during emergencies, the information is critical for helpers and emergency doctors. This paper discusses the results of a survey conducted on PII in AAL and proves the requirement for special access control rules for systems in emergency situations.
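The access-control requirement identified by the survey can be illustrated with a toy "break-glass" policy sketch. Roles and resources below are invented, and a real system would also log and review every emergency access.

```python
def can_access(role, resource, emergency=False):
    """Toy break-glass policy: health PII is private during normal
    operation, but emergency responders gain access once an
    emergency has been declared."""
    normal_grants = {("patient", "health_record"),
                     ("physician", "health_record")}
    breakglass_grants = {("responder", "health_record")}
    if (role, resource) in normal_grants:
        return True
    return emergency and (role, resource) in breakglass_grants

# During normal operation the responder is denied ...
denied = can_access("responder", "health_record")
# ... but gains access when an emergency is declared.
granted = can_access("responder", "health_record", emergency=True)
```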
The rise of digital twins in the manufacturing industry is accompanied by new possibilities: process automation, condition monitoring, real-time simulation, and quality and maintenance prediction are just a few of the advantages that can be realized. This paper takes a novel approach by extracting the fundamental knowledge of a data set from a production process and mapping it to an expert fuzzy rule set. Afterwards, new augmented data is generated by exploring the feature space of the previously generated fuzzy rule set. At the same time, a large number of artificial neural network (ANN) models with different hyperparameter configurations are created.
The best models are chosen, in line with the idea of survival of the fittest, and improved with the additional training data sets generated by the fuzzy rule simulation. It is shown that ANN models can be improved by adding fundamental knowledge represented by the discovered fuzzy rules. These models can represent digitized machines as digital twins. The architecture and effectiveness of the digital twin are evaluated within an Industry 4.0 use case.
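The fuzzy-rule-based data generation could be sketched, in a heavily simplified form, as follows: a single expert rule with triangular memberships is evaluated over a grid of the feature space to produce synthetic labeled samples. The rule, feature names, and ranges are invented for illustration.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_quality(speed, temp):
    """Toy expert rule: quality is high when speed is 'medium'
    AND temperature is 'low' (min acts as the fuzzy AND)."""
    return min(triangular(speed, 20, 50, 80), triangular(temp, 0, 10, 40))

# Explore the rule's feature space to generate labeled synthetic samples,
# which could then serve as additional ANN training data.
augmented = [((s, t), rule_quality(s, t))
             for s in range(0, 101, 10) for t in range(0, 51, 10)]

good = rule_quality(50, 10)   # at the peak of both memberships
bad = rule_quality(100, 45)   # outside both supports
```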
Evolutionary strategy is increasingly used for optimization in various machine learning problems. It scales very well, even to high-dimensional problems, and its ability to globally self-optimize in flexible ways provides new and exciting opportunities when combined with more recent machine learning methods. This paper describes a novel approach to the optimization of models with a data-driven evolutionary strategy. The optimization can be applied directly as a preprocessing step and is therefore independent of the machine learning algorithm used. The experimental analysis of six different use cases shows that, on average, better results are attained than without the evolutionary strategy. Furthermore, it is shown that the best individual models are also achieved with the help of the evolutionary strategy. The six use cases were of different complexity, which reinforces the idea that the approach is universal and does not depend on specific use cases.
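As an illustrative toy, not the paper's implementation, a minimal (1+λ) evolution strategy might look like this, here minimizing a stand-in objective in place of a model's validation loss:

```python
import random

def evolution_strategy(f, x0, sigma=0.5, pop=20, gens=60, seed=1):
    """Minimal (1+lambda) evolution strategy: mutate the current best
    solution with Gaussian noise, keep improvements, and gradually
    shrink the mutation strength."""
    rng = random.Random(seed)
    best, best_val = list(x0), f(x0)
    for _ in range(gens):
        for _ in range(pop):
            cand = [x + rng.gauss(0, sigma) for x in best]
            val = f(cand)
            if val < best_val:
                best, best_val = cand, val
        sigma *= 0.97  # step-size decay for finer search late on
    return best, best_val

# Toy objective standing in for a learning problem's loss surface.
sphere = lambda x: sum(xi * xi for xi in x)
solution, loss = evolution_strategy(sphere, [3.0, -2.0])
# The strategy drives the loss well below its starting value of 13.0.
```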
Training neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, and Siemens, provide such infrastructure. However, should one choose a cloud infrastructure or an on-premise system for a neural network application, and how can these systems be compared with one another? The recent popularity and widespread use of Deep Learning in various applications have created a need for benchmarking in this field. This paper investigates seven prominent Machine Learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. It shows that different application domains need slightly different resources and argues that there is no standard benchmark suite available that addresses these different application needs. We compare these benchmarks and summarize benchmark-related datasets, domains, and metrics. Finally, a concept of an ideal benchmark is sketched.
Supervised object detection models are trained to recognize certain objects. These models fall into two types: single-stage detectors and two-stage detectors. Single-stage detectors need only one pass through the model to predict all bounding boxes, whereas two-stage detectors first have to estimate the image regions where an object could be located. Due to their speed and simplicity, single-stage anchor-based models are used in many industrial settings. Training such models requires bounding boxes that describe the spatial location of an object, which are usually drawn by an expert. However, the question remains: how much area should be considered when drawing the bounding boxes? In this paper, we demonstrate the effects that the size and placement of a rectangular bounding box can have on the performance of anchor-based models. For this, we first perform experiments on a synthetically generated binary dataset and then on a real-world object detection dataset. Our results show that fixing the size of the bounding boxes can help improve the performance of the model in the case of single-class object detection (approximately 50% improvement in mAP@[.5:.95] for the real-world dataset). Furthermore, we demonstrate how freely available tools can be combined into the best possible semi-automated object-labeling pipeline.
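The effect studied above can be made concrete with a small sketch: a fixed-size box is centered on an object and compared against an expert-drawn box via intersection-over-union (IoU), the quantity underlying mAP. All coordinates are invented for illustration.

```python
def fixed_box(cx, cy, size, img_w, img_h):
    """Center a fixed-size square box on an object, clipped to the image."""
    half = size / 2
    return (max(0, cx - half), max(0, cy - half),
            min(img_w, cx + half), min(img_h, cy + half))

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

hand_drawn = (40, 40, 110, 100)            # expert-drawn box
fixed = fixed_box(75, 70, 80, 640, 480)    # fixed 80x80 box, same center
overlap = iou(hand_drawn, fixed)           # -> 0.65625
```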
Potentials of Semantic Image Segmentation Using Visual Attention Networks for People with Dementia
(2021)
Due to the increasing number of dementia patients, it is time to include the care sector in digitization as well. Digital media can, for example, be used on tablets in memory care and have considerable potential for reminiscence therapy for people with dementia. The time-consuming assembly of digital media content has to be automated for the caregivers.
This work analyzes the potential of semantic image segmentation with Visual Attention Networks for reminiscence therapy sessions. These approaches enable the selection of digital images matched to the patient's individual experiences and biography. A detailed comparison of various Visual Attention Networks, evaluated by the BLEU score, is shown. The most promising networks for semantic image segmentation are VGG16 and VGG19.
Comparison of Visual Attention Networks for Semantic Image Segmentation in Reminiscence Therapy
(2022)
Due to the steadily increasing age of the entire population, the number of dementia patients is growing. Reminiscence therapy is an important aspect of dementia care, and it is crucial to include this area in digitization as well. Modern reminiscence sessions consist of digital media content specifically tailored to a patient's biographical needs. To enable an automatic selection of this content, the use of Visual Attention Networks for semantic image segmentation is evaluated in this work. A detailed comparison of various neural networks is shown, evaluated by the Metric for Evaluation of Translation with Explicit Ordering (METEOR) in addition to the Bilingual Evaluation Understudy (BLEU) score. The most promising Visual Attention Network consists of an Xception network as encoder and a Gated Recurrent Unit network as decoder.
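For orientation, the unigram core of the BLEU metric used in both comparisons can be sketched as follows. This is a toy version: real caption evaluation uses higher-order n-grams and typically multiple references, and the captions below are invented.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """BLEU-1: clipped unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    # Clip each candidate word's count by its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("a woman baking bread in a kitchen",
              "a woman is baking bread in the kitchen")
# clipped precision 6/7, brevity penalty exp(-1/7) -> about 0.743
```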
Retinopathy of Prematurity (ROP) is the leading cause of childhood blindness globally, with babies born preterm having a higher probability of contracting the disease. The disease diagnosis remains an economic burden for many countries, and a lack of ophthalmologists for disease diagnosis, coupled with non-existent national screening guidelines, remains a challenge. To diagnose the disease, fundus photography is conducted, and printout images are analyzed to determine the presence or absence of the disease. With the increasing availability of smartphones with advanced image capturing and processing features, the use of smartphones to capture retina images for disease diagnosis is becoming a common trend. For regions where ophthalmologists are few and/or for low-resource regions with little or no retina capturing equipment, using smartphones to capture retina images for retina diseases is an effective method. This, however, comes with challenges: different smartphones produce images of different resolutions, and some images are darker, others lighter. Smartphone retina image capturing has a smaller field of view, ranging between 45° and 90°, which is a major limitation. A lens supporting a bigger view can be combined with this approach to provide a wide view of 130°. This enlargement, however, distorts the image quality and may result in the loss of some image features. To overcome these challenges, this work develops an improved U-Net model to preprocess images captured using smartphones for ROP disease diagnosis. Our focus is to determine the presence or absence of the disease from smartphone-captured images. Because the images are captured under a smaller field of view (FOV), we develop an improved U-Net model by adding patches to enhance the image circumference, extract all features from the image, and use the extracted features to train a U-Net model for disease diagnosis. The model's results outperform similar recent developments.
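The patch-based preprocessing can be sketched, in simplified form, as sliding-window extraction of overlapping tiles from an image, a common way to feed small-FOV images to a U-Net. The image here is a plain 2D list standing in for real pixel data.

```python
def extract_patches(image, patch, stride):
    """Split a 2D image (list of rows) into overlapping square patches."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append([row[x:x + patch] for row in image[y:y + patch]])
    return patches

# 8x8 toy 'image' with distinct pixel values.
img = [[y * 8 + x for x in range(8)] for y in range(8)]
tiles = extract_patches(img, patch=4, stride=2)
# 3 x 3 = 9 overlapping 4x4 tiles, each a candidate U-Net input.
```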
This poster presents the Montenegrin Digital Academic Innovation Hub, aimed at supporting education, innovation, and academia-business cooperation in medical informatics (as one of four priority areas) at the national level in Montenegro. The Hub topology is organised in the form of two main nodes, with services established within key pillars: digital education; digital business support; innovation and cooperation with industry; and employment support.
Delegated Audit of Cloud Provider Chains Using Provider Provisioned Mobile Evidence Collection
(2017)
Cylindrical grinding is an important process in the manufacturing industry. During this process, grinding burn may appear, which can render the workpiece worthless. In this work, a machine learning neural network approach is used to predict grinding burn from the process parameters in order to prevent damage. A small dataset of 21 samples was gathered on a specific machine, always grinding the same element type with different process parameters. After the process, each workpiece received a label from 0 to 3, indicating the severity of grinding burn. To obtain a robust neural network model, the dataset was scaled by augmentation, controlled by grinding experts, to generate more samples for training. As a result, the model is able to predict the severity of grinding burn in a multiclass classification, and it turned out that even with little data, the model performed well.
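The expert-controlled augmentation of such a small dataset could be sketched as follows, assuming simple relative Gaussian jitter on the process parameters. The feature names and noise level are invented; in the paper, the augmentation rules were defined by grinding experts.

```python
import random

def augment(samples, copies=10, noise=0.05, seed=0):
    """Expand a small labeled tabular dataset by adding relative
    Gaussian jitter to the features; labels are kept unchanged."""
    rng = random.Random(seed)
    out = list(samples)
    for features, label in samples:
        for _ in range(copies):
            jittered = [v * (1 + rng.gauss(0, noise)) for v in features]
            out.append((jittered, label))
    return out

# Hypothetical process parameters: cutting speed, feed rate, spindle rpm.
raw = [([30.0, 0.2, 1200.0], 0),
       ([45.0, 0.4, 1800.0], 3)]
data = augment(raw)
# 2 originals + 2 x 10 jittered copies = 22 training samples.
```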
Up until now, it has been shown that large parts of the so-called Industry 4.0 are impacted by Machine Learning (ML) in some way or another. In many shopfloor situations, different sensors are involved, and all data is eventually structured, accumulated, and prepared for application in various ML-based scenarios, e.g., predictive maintenance of a machine, quality monitoring of manufactured workpieces, or handling domain-specific aspects of the respective fabricator or product. As the physical environment of a Cyber Physical System (CPS) can change rapidly, the overall Data Acquisition (DAQ) process and ML training are impacted, too. This work focuses on datasets consisting of small amounts of tabular information and how to utilize them in image-based Neural Networks (NN) with respect to meta learning and multimodal transformations. The conceptual utilization of a DAQ system in industrial environments is discussed with regard to a variety of techniques for preprocessing and generating visual material from multimodal data. The outcome of such operations is a new dataset, which is then applied in model training. The presented approach is three-fold. By first analysing the concept of predicting the similarity of structured and numerical data in different datasets, indicators of the feasibility of applying the methodology in related but more sophisticated learning scenarios can be gained. Although ongoing time-series data differs from simple multi-class data in its chronological dimension, basic classification concepts are applied to it and evaluated. In order to extend the similarity prediction with a temporal component, the discussed methods are extended by multimodal transformations and a subsequent utilization in Siamese Neural Networks (SNN).
By discussing the concept of applying visual representations of structured time-series data in a meta-learning context, known trends and historic information can be utilized for generating real-world test samples and predicting their validity at inference time.
Operations within a Cyber Physical System (CPS) environment are naturally diverse, and the resulting data sets include complex relations between the sensors of the shopfloor device setup and their configuration. As Machine Learning (ML) can increase the success of industrial plants in a variety of cases, like smart controlling, intrusion detection, or predictive maintenance, clarifying responsibilities and operations for the whole lifecycle supports evaluating the potentially feasible scenarios. In this work, the need for highly configurable and flexible modules is demonstrated by depicting the complex possibilities of extending simple Machine Learning Operations (MLOps) pipelines with additional data sources, e.g., sensors. In addition to each module's core functionality, arbitrary evaluation logic or data-structure-specific anomaly detection can be integrated into the pipeline. With the creation of audit trails for all operational modules, automated reports can be generated, increasing the accountability of the different physical devices and the related data processing. The concept is evaluated in the context of the project Collaborative Smart Contracting Platform for digital value-added Networks (KOSMoS), where a sensor is part of an ML pipeline and audit trails are realized using Blockchain (BC) technology.
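The audit-trail idea can be sketched without a full blockchain: a hash-chained log in which each entry commits to its predecessor, so later tampering with any record breaks the chain. This is a simplified stand-in for the BC-based trails used in KOSMoS, with invented record fields.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(trail, record):
    """Append a record to a hash-chained audit trail: each entry stores
    the hash of its predecessor."""
    prev = trail[-1]["hash"] if trail else GENESIS
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    trail.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(trail):
    """Recompute every hash; returns False if any entry was altered."""
    prev = GENESIS
    for entry in trail:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, {"sensor": "s1", "step": "preprocess", "ok": True})
append_entry(trail, {"sensor": "s1", "step": "train", "ok": True})
intact = verify(trail)
trail[0]["record"]["ok"] = False        # tamper with an early record
tampered_detected = not verify(trail)   # the chain no longer verifies
```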
In modern industrial production lines, the integration and interconnection of various manufacturing components, like robots, laser cutting machines, milling machines, and CNC machines, allows for a higher degree of autonomous production on the shop floor. Manufacturers of these increasingly complex machines are beginning to equip their business models with bidirectional data flows to other factories. This is creating a digital, cross-company shop floor infrastructure where the transfer of information is controlled by digital contracts. To establish a trusted ecosystem, the new technology "blockchain" and a variety of technology stacks must be combined while ensuring security. Such blockchain-based frameworks enable bidirectional trust across all contract partners. Essential data flows are defined by a specific technical representation of contract agreements and executed through smart contracts. This work describes a platform for rapid cross-company business model instantiation based on blockchain for establishing trust between the enterprises. It focuses on selected security aspects of the deployment and configuration processes applied by the industrial ecosystem. A threat analysis of the platform shows the critical security risks. Based on an industrial dynamic machine-leasing use case, a risk assessment and security analysis of the key platform components is carried out.