A Fog-Cloud Computing Infrastructure for Condition Monitoring and Distributing Industry 4.0 Services
(2019)
Cloud Resource Price System
(2014)
Cloud Utility Price Models
(2013)
Smart Condition Monitoring for Industry 4.0 Manufacturing Processes: An Ontology-Based Approach
(2019)
Combining Chronicle Mining and Semantics for Predictive Maintenance in Manufacturing Processes
(2020)
In the context of Industry 4.0, smart factories use advanced sensing and data analytics to understand and monitor manufacturing processes. To enhance production efficiency and reliability, statistical Artificial Intelligence (AI) technologies such as machine learning and data mining are used to detect and predict potential anomalies within manufacturing processes. However, due to the heterogeneous nature of industrial data, the knowledge extracted from it is often presented in complex structures. This creates a semantic gap, i.e. a lack of interoperability among different manufacturing systems. Furthermore, as Cyber-Physical Systems (CPS) become more knowledge-intensive, uniform knowledge representation of physical resources and real-time reasoning capabilities for analytic tasks are needed to automate decision-making in these systems. These requirements highlight the potential of symbolic AI for predictive maintenance.
To automate and facilitate predictive analytics in Industry 4.0, this paper presents a novel Knowledge-based System for Predictive Maintenance in Industry 4.0 (KSPMI). KSPMI is built on a novel hybrid approach that leverages both statistical and symbolic AI technologies. Statistical AI technologies such as machine learning and chronicle mining (a special type of sequential pattern mining) extract machine degradation models from industrial data. Symbolic AI technologies, especially domain ontologies and logic rules, then use the extracted chronicle patterns to query and reason over system input data with rich domain and contextual knowledge. Semantic Web Rule Language (SWRL) rules generated from the chronicle patterns, together with the domain ontologies, drive ontology reasoning that enables the automatic detection of machinery anomalies and the prediction of future events. KSPMI is evaluated and tested on both real-world and synthetic data sets.
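The core idea of matching a mined chronicle pattern against an event stream can be sketched in a few lines. This is an illustrative simplification, not KSPMI's actual SWRL/ontology machinery: the `Chronicle` class, event names, and time window below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Chronicle:
    """A toy chronicle: an ordered pair of event types plus a time window.

    Hypothetical simplification - real mined chronicles may relate many
    events with multiple interval constraints."""
    first: str
    second: str
    min_gap: float  # minimum seconds between the two events
    max_gap: float  # maximum seconds between the two events

def matches(chronicle, log):
    """Return True if the time-sorted log of (timestamp, event) tuples
    contains an occurrence of the chronicle."""
    for i, (t1, e1) in enumerate(log):
        if e1 != chronicle.first:
            continue
        for t2, e2 in log[i + 1:]:
            if e2 == chronicle.second and chronicle.min_gap <= t2 - t1 <= chronicle.max_gap:
                return True
    return False

# Illustrative degradation pattern: a vibration spike followed by a
# temperature rise within 10-60 s signals a potential fault.
pattern = Chronicle("vibration_spike", "temp_rise", 10.0, 60.0)
log = [(0.0, "start"), (5.0, "vibration_spike"), (40.0, "temp_rise")]
print(matches(pattern, log))  # True: the 35 s gap lies inside [10, 60]
```

In KSPMI, such patterns are instead translated into SWRL rules so that an ontology reasoner can combine them with domain knowledge.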
Understanding Cloud Audits
(2012)
Towards a Domain Specific Security Policy Language for Automatic Audit of Virtual Machine Images
(2012)
Container environments permeate all areas of computing, including HPC, since they are lightweight, efficient, and ease the deployment of software. However, due to the shared host kernel, their isolation is considered weak, so additional protection mechanisms are needed. This paper shows that neural networks can perform anomaly detection by observing the behavior of containers through system call data. In more detail, the detection of anomalies in the file and directory paths used by system calls is evaluated to show its advantages and drawbacks.
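The underlying idea, flagging file paths that deviate from a container's normal behavior, can be illustrated with a deliberately simple frequency model. This is a stand-in for the paper's neural approach, not its implementation; the paths and scoring rule are illustrative.

```python
from collections import Counter

def train_path_model(normal_paths):
    """Count directory components observed during normal container operation."""
    counts = Counter()
    for path in normal_paths:
        counts.update(p for p in path.split("/") if p)
    return counts

def anomaly_score(model, path):
    """Fraction of path components never seen during training.

    The paper trains neural networks on system-call data; this unigram
    frequency model is a deliberately simple stand-in for the same idea."""
    parts = [p for p in path.split("/") if p]
    if not parts:
        return 0.0
    unseen = sum(1 for p in parts if model[p] == 0)
    return unseen / len(parts)

model = train_path_model(["/usr/bin/python", "/etc/passwd", "/usr/lib/libc.so"])
print(anomaly_score(model, "/usr/bin/python"))    # 0.0 - fully known path
print(anomaly_score(model, "/tmp/evil/payload"))  # 1.0 - entirely novel path
```

A neural model replaces the counting step with learned representations of system-call sequences, but the detection goal is the same.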
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest for visual inspection. Alongside AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are promising for quality inspection in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focus on visual quality control (VQC) for defect detection. Research addressing broader VQA practices, such as process optimization, predictive maintenance, or root cause analysis, is rarer, and papers that utilize XAI methods are cited least often. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industries.
Testing applications for SmartHome environments is complicated, since a real environment is often not accessible and its conditions are not controllable during development. The need to set up the whole hardware environment increases the complexity of these systems enormously. It is therefore helpful to simulate the SmartHome hardware components and environmental conditions (e.g. rain, heat). This paper presents an approach to improve the test and demonstration process for Internet of Things scenarios. A prototype (ScnSim: Scenario Simulator) was developed to set up such scenarios. Users of ScnSim can create their own scenario from items (sensors/actuators) and rules, which control the sensors and actuators that make up the IoT environment. The simulator is intended to support users testing IoT applications or configurations of SmartHome platforms like openHAB. In addition, ScnSim is meant to help demonstrate showcases, for example at a trade fair or as a proof of concept for a customer.
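A scenario of sensors driven by rules can be sketched as a tiny rule engine. This is a generic illustration of the sensor/rule concept, not ScnSim's actual rule format, which the abstract does not specify; the sensor names and rules are invented for the example.

```python
def run_scenario(sensors, rules, steps):
    """Advance a simulated SmartHome scenario step by step.

    `sensors` maps names to values; each rule is a (condition, action)
    pair of functions over the sensor state. Illustrative sketch only."""
    actuators = {}
    for _ in range(steps):
        for condition, action in rules:
            if condition(sensors):
                action(sensors, actuators)
    return actuators

# Example rules: close the window when it rains; lower blinds when hot.
rules = [
    (lambda s: s["rain"], lambda s, a: a.__setitem__("window", "closed")),
    (lambda s: s["temperature"] > 28, lambda s, a: a.__setitem__("blinds", "down")),
]
state = {"rain": True, "temperature": 21.0}
print(run_scenario(state, rules, steps=1))  # {'window': 'closed'}
```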
In edge/fog computing infrastructures, resources and services are offloaded to the edge, and computations are distributed among different nodes instead of being transmitted to a centralized entity. Distributed Hash Table (DHT) systems provide a way to organize and distribute computation and storage without involving a trusted third party. However, the physical locations of nodes are not considered during the creation of the overlay, which causes efficiency issues. In this paper, a Locality-aware Distributed Addressing (LADA) model is proposed that can be adopted in distributed infrastructures to create an overlay that considers the physical locations of participating nodes. LADA aims to address the efficiency issues of store and lookup operations in the DHT overlay. Additionally, it addresses a privacy issue in similar proposals and removes any dependence on a fixed set of entities. Our studies show that the proposed model is efficient and robust and protects the privacy of the participating nodes' locations.
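The contrast between plain hash-based lookup and locality-aware selection can be sketched as follows. This is a toy illustration of the goal, not LADA's overlay construction; the node names and coordinates are invented.

```python
import hashlib
import math

def node_for_key(key, nodes):
    """Plain consistent-hashing lookup: first node clockwise from the key's
    hash on a 160-bit identifier ring, ignoring physical location."""
    def h(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)
    key_id = h(key)
    return min(nodes, key=lambda n: (h(n["name"]) - key_id) % (1 << 160))

def nearest_replica(client_pos, replicas):
    """Among replica nodes holding a key, prefer the physically closest one -
    a toy version of locality-aware lookup."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(replicas, key=lambda n: dist(client_pos, n["pos"]))

replicas = [{"name": "n1", "pos": (0.0, 0.0)}, {"name": "n2", "pos": (5.0, 5.0)}]
print(node_for_key("sensor-42", replicas)["name"])    # hash-determined node
print(nearest_replica((4.0, 4.0), replicas)["name"])  # n2, the closer node
```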
While the number of devices connected as the Internet of Things (IoT) is growing, the demand for an efficient and secure model of resource discovery in IoT is increasing. An efficient resource discovery model distributes the registration and discovery workload among many nodes and allows resources to be discovered by their attributes. In most cases, this discovery ability should be restricted to certain clients based on their attributes; otherwise, any client in the system can discover any registered resource. In a binary discovery policy, any client with the shared secret key can discover and decrypt the address data of a registered resource, regardless of the client's attributes. In this paper, we propose Attred, a decentralized resource discovery model using a Region-based Distributed Hash Table (RDHT) that allows secure and location-aware discovery of resources in an IoT network. Using Attribute-Based Encryption (ABE) and discovery policies predefined by the resources, Attred allows clients to discover resources based only on their inherent attributes. Attred distributes the workload of key generation and resource registration and reduces the risk of centralized authority management. In addition, some of the heavy computations in our model can be securely distributed using secret sharing, which allows more efficient resource registration without affecting the required security properties. The performance analysis showed that distributed computation can significantly reduce the computation cost while maintaining functionality. The performance and security analysis also showed that our model can efficiently provide the required security properties of discovery correctness, soundness, resource privacy, and client privacy.
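The secret-sharing building block mentioned above can be illustrated with classic Shamir sharing over a prime field: any threshold number of shares recovers the secret, fewer reveal nothing. This shows the general technique, not Attred's specific scheme; the prime and secret are arbitrary.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

def make_shares(secret, threshold, n):
    """Split `secret` into n Shamir shares; any `threshold` of them recover it.
    The shares are points on a random polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, threshold=3, n=5)
print(recover(shares[:3]))  # 123456789 - any 3 of the 5 shares suffice
```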
The Present and Future of a Digital Montenegro: Analysis of C-ITS, Agriculture, and Healthcare
(2023)
Software Defined Privacy
(2017)
Software Defined Privacy
(2016)
Real time In-Situ Quality Monitoring of Grinding Process using Microtechnology based Sensor Fusion
(2020)
AAL applications are designed for elderly people and collect personally identifiable information (PII), e.g. health data. During normal operation, these data should be kept private, but in emergency situations the information is critical for helpers and emergency doctors. This paper discusses the results of a survey on PII in AAL and demonstrates the need for special access control rules in emergency situations.
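The kind of "break-glass" access rule the paper argues for can be sketched as a small policy function. The roles and data classes here are invented for illustration and are not taken from the survey.

```python
def can_access(requester_role, emergency_active, data_class):
    """Break-glass policy sketch: PII stays private in normal operation but
    becomes readable to medical responders during an emergency.

    Roles and data classes are hypothetical examples."""
    if data_class != "pii":
        return True  # non-sensitive data is unrestricted in this sketch
    if requester_role == "resident":
        return True  # the data subject can always read their own PII
    if emergency_active and requester_role in ("emergency_doctor", "helper"):
        return True  # emergency overrides the default privacy rule
    return False

print(can_access("helper", emergency_active=False, data_class="pii"))  # False
print(can_access("helper", emergency_active=True, data_class="pii"))   # True
```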
The rise of digital twins in the manufacturing industry is accompanied by new possibilities: process automation, condition monitoring, real-time simulation, and quality and maintenance prediction are just a few of the advantages that can be realized. This paper takes a novel approach by extracting the fundamental knowledge of a data set from a production process and mapping it to an expert fuzzy rule set. Afterwards, new fundamental augmented data is generated by exploring the feature space of the previously generated fuzzy rule set. At the same time, a large number of artificial neural network (ANN) models with different hyperparameter configurations are created.
The best models are chosen, in line with the idea of survival of the fittest, and improved with the additional training data sets generated by the fuzzy rule simulation. It is shown that ANN models can be improved by adding fundamental knowledge represented by the discovered fuzzy rules. Such models can represent digitized machines as digital twins. The architecture and effectiveness of the digital twin are evaluated in an Industry 4.0 use case.
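The augmentation step, sampling the feature space of a fuzzy rule to generate extra training data, can be sketched minimally. The membership function and the "critical temperature" rule below are invented examples, not the paper's expert rule set.

```python
import random

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def augment(rule, n, lo, hi, seed=0):
    """Sample the feature space and keep points the fuzzy rule activates.

    `rule` maps a feature value to a membership degree; points with high
    membership become synthetic training examples. Illustrative only -
    the paper's rule set covers many features."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(lo, hi)
        degree = rule(x)
        if degree > 0.5:
            data.append((x, degree))
    return data

# Hypothetical expert rule: "temperature around 70 is critical".
critical = lambda x: triangular(x, 60.0, 70.0, 80.0)
samples = augment(critical, n=1000, lo=0.0, hi=100.0)
print(all(65.0 < x < 75.0 for x, _ in samples))  # True: degree > 0.5 bounds x
```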
Evolutionary strategies are increasingly used for optimization in various machine learning problems. They scale very well, even to high-dimensional problems, and their ability to self-optimize globally in flexible ways provides new and exciting opportunities when combined with more recent machine learning methods. This paper describes a novel approach to model optimization with a data-driven evolutionary strategy. The optimization can be applied directly as a preprocessing step and is therefore independent of the machine learning algorithm used. The experimental analysis of six different use cases shows that, on average, better results are attained than without the evolutionary strategy. Furthermore, the best individual models are also achieved with the help of the evolutionary strategy. The six use cases were of different complexity, which reinforces the idea that the approach is universal and does not depend on specific use cases.
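For readers unfamiliar with the optimizer family, a minimal (mu + lambda) evolution strategy looks like this. It is a generic sketch of the technique the paper builds on, not the data-driven variant it proposes; the test function and hyperparameters are arbitrary.

```python
import random

def evolve(fitness, dim, generations=100, mu=5, lam=20, sigma=0.5, seed=1):
    """Minimal (mu + lambda) evolution strategy minimizing `fitness`.

    Each generation, lam offspring are created by Gaussian mutation of
    random parents; the best mu of parents plus offspring survive."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            offspring.append([x + rng.gauss(0, sigma) for x in parent])
        # elitist selection: the best individual is never lost
        pop = sorted(pop + offspring, key=fitness)[:mu]
    return pop[0]

sphere = lambda v: sum(x * x for x in v)  # global minimum at the origin
best = evolve(sphere, dim=3)
print(sphere(best))  # small positive value near the optimum
```

The elitist selection guarantees that the best fitness found never worsens from one generation to the next.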
Training neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, and Siemens, offer such infrastructure. But if one must choose between a cloud infrastructure and an on-premise system for a neural network application, how can these systems be compared with one another? This paper investigates seven prominent machine learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. The recent popularity and widespread use of deep learning in various applications have created a need for benchmarking in this field. This paper shows that these application domains need slightly different resources and argues that no available standard benchmark suite addresses these different application needs. We compare the benchmarks and summarize benchmark-related datasets, domains, and metrics. Finally, a concept for an ideal benchmark is sketched.
Supervised object detection models are trained to recognize certain objects. These models fall into two types: single-stage and two-stage detectors. Single-stage detectors need just one pass through the model to predict all bounding boxes, whereas two-stage detectors must first estimate the image regions where an object could be located. Due to their speed and simplicity, single-stage anchor-based models are used in many industrial settings. Training such models requires bounding boxes that describe the spatial location of an object, which are usually drawn by an expert. However, the question remains: how much area should be covered when drawing the bounding boxes? In this paper, we demonstrate the effects that the size and placement of a rectangular bounding box can have on the performance of anchor-based models. We first perform experiments on a synthetically generated binary dataset and then on a real-world object detection dataset. Our results show that fixing the size of the bounding boxes can help improve performance in single-class object detection (approximately 50% improvement in mAP@[.5:.95] for the real-world dataset). Furthermore, we demonstrate how freely available tools can be combined into an effective semi-automated object labeling pipeline.
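The effect of normalizing annotation extents can be made concrete with the standard intersection-over-union (IoU) measure that mAP is built on. The box coordinates and the fixed 20x20 size are example values, not the paper's settings.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fix_size(box, w, h):
    """Replace a hand-drawn box with a fixed-size box sharing its center,
    mirroring the idea of normalizing annotation extents."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

loose = (10, 10, 50, 50)         # a generously drawn expert annotation
tight = fix_size(loose, 20, 20)  # normalized 20x20 box, same center
print(tight)              # (20.0, 20.0, 40.0, 40.0)
print(iou(loose, tight))  # 0.25 = 400 / 1600: how much the two labels differ
```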
Potentials of Semantic Image Segmentation Using Visual Attention Networks for People with Dementia
(2021)
Due to the increasing number of dementia patients, it is time to include the care sector in digitization as well. Digital media, for example on tablets, can be used in memory care and has considerable potential for reminiscence therapy for people with dementia. The time-consuming assembly of digital media content has to be automated for the caregivers.
This work analyzes the potential of semantic image segmentation with Visual Attention Networks for reminiscence therapy sessions. These approaches enable the selection of digital images that match a patient's individual experiences and biography. A detailed comparison of various Visual Attention Networks, evaluated by the BLEU score, is presented. The most promising networks for semantic image segmentation are VGG16 and VGG19.
Comparison of Visual Attention Networks for Semantic Image Segmentation in Reminiscence Therapy
(2022)
Due to the steadily increasing age of the population, the number of dementia patients is growing. Reminiscence therapy is an important aspect of dementia care, and it is crucial to include this area in digitization as well. Modern reminiscence sessions consist of digital media content specifically tailored to a patient's biographical needs. To enable an automatic selection of this content, this work evaluates the use of Visual Attention Networks for semantic image segmentation. A detailed comparison of various neural networks is shown, evaluated by the Metric for Evaluation of Translation with Explicit ORdering (METEOR) in addition to the Bilingual Evaluation Understudy (BLEU) score. The most promising Visual Attention Network consists of an Xception network as encoder and a Gated Recurrent Unit network as decoder.
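As a reminder of how the BLEU metric mentioned above works, here is a minimal single-reference BLEU-1 (unigram precision with brevity penalty). Real evaluations use corpus-level BLEU with higher-order n-grams; the example sentences are invented.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """BLEU-1: clipped unigram precision times the brevity penalty,
    for a single candidate/reference pair."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # clip each candidate word's count at its count in the reference
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("a dog on the beach", "a dog runs on the beach")
print(round(score, 3))  # 0.819: perfect precision, short-candidate penalty
```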
Data processed in context is more meaningful, easier to understand, and has higher information content, since it derives its semantic meaning from the surrounding context; this holds for acoustic signal processing as well. In this work, a deep learning approach using ensemble neural networks to integrate context into a learning system is presented. Different use cases are considered, and the method is demonstrated using acoustic signal processing of machine sound data for valves, pumps, and slide rails. Mel spectrograms are used to train convolutional neural networks, so that acoustic data can be analysed with image processing techniques.
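The first step of such a pipeline, turning sound into a time-frequency image, can be sketched with a plain windowed DFT. This is a simplified magnitude spectrogram, not the mel-scaled version the system uses; the frame sizes and test tone are arbitrary.

```python
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a windowed DFT, in pure Python for clarity.

    Returns one row of |spectrum| values per frame. A real pipeline would
    apply a mel filter bank on top of this before feeding a CNN."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        # Hann window to reduce spectral leakage at the frame edges
        frame = [x * (0.5 - 0.5 * math.cos(2 * math.pi * i / (frame_len - 1)))
                 for i, x in enumerate(signal[start:start + frame_len])]
        row = []
        for k in range(frame_len // 2 + 1):  # one-sided spectrum
            re = sum(x * math.cos(2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            im = -sum(x * math.sin(2 * math.pi * k * n / frame_len)
                      for n, x in enumerate(frame))
            row.append(math.hypot(re, im))
        frames.append(row)
    return frames

# A pure tone with 8 cycles per 64-sample frame concentrates energy in bin 8.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(tone)
print(all(max(range(33), key=lambda k: row[k]) == 8 for row in spec))  # True
```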
Retinopathy of Prematurity (ROP) is the leading cause of childhood blindness globally, with babies born preterm having a higher probability of contracting the disease. Its diagnosis remains an economic burden for many countries; a lack of ophthalmologists, coupled with non-existent national screening guidelines, is still a challenge. To diagnose the disease, fundus photography is conducted and printout images are analyzed to determine the presence or absence of the disease. With the development of smartphones with advanced image capturing and processing features, using smartphones to capture retina images for disease diagnosis is becoming a common trend. For regions with few ophthalmologists and/or low-resource regions with little or no retina capturing equipment, using smartphones to capture retina images is an effective method. This, however, has challenges: different smartphones produce images of different resolutions, and some images are darker, others lighter. Smartphone retina image capturing has a smaller field of view, ranging between 45° and 90°, which is a major limitation. A lens supporting a bigger view can be combined with this approach to provide a wide view of 130°. This enlargement, however, distorts the image quality and may result in the loss of some image features. To overcome these challenges, this work develops an improved U-Net model to preprocess images captured with smartphones for ROP diagnosis. Our focus is to determine the presence or absence of the disease from smartphone-captured images. Because the images are captured with a smaller field of view (FOV), we develop an improved U-Net model that adds patches to enhance the image circumference, extracts all features from the image, and uses the extracted features to train a U-Net model for disease diagnosis. The model outperformed similar recent developments.
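The patching idea, splitting a narrow-FOV image into overlapping pieces so all regions contribute features, can be sketched as follows. The patch and stride sizes are example values, not those used for the retina images.

```python
def extract_patches(image, patch, stride):
    """Split a 2D image (list of rows) into overlapping square patches.

    Illustrates the patching step of the improved U-Net pipeline; a real
    implementation would operate on multi-channel image arrays."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append([row[x:x + patch] for row in image[y:y + patch]])
    return patches

image = [[y * 8 + x for x in range(8)] for y in range(8)]  # toy 8x8 "retina"
patches = extract_patches(image, patch=4, stride=2)
print(len(patches))   # 9 patches: 3 window positions per axis
print(patches[0][0])  # [0, 1, 2, 3] - top-left corner of the image
```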
This poster presents the Montenegrin Digital Academic Innovation Hub, which aims to support education, innovation, and academia-business cooperation in medical informatics (one of four priority areas) at the national level in Montenegro. The Hub is organised as two main nodes, with services established within key pillars: Digital Education; Digital Business Support; Innovation and Cooperation with Industry; and Employment Support.