As machine learning becomes increasingly pervasive, its resource demands and financial implications escalate, necessitating energy and cost optimisations to meet stakeholder demands. Quality metrics for predictive machine learning models are abundant, but efficiency metrics remain rare. We propose a framework for efficiency metrics that enables the comparison of distinct efficiency types. A quality-focused efficiency metric is introduced that considers resource consumption, computational effort, and runtime in addition to prediction quality. The metric has been successfully tested for usability, plausibility, and its compensation for dataset size and host performance. This framework enables informed decisions about the use and design of machine learning in an environmentally responsible and cost-effective manner.
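The abstract does not give the metric's formula, but the general idea of trading prediction quality against resource cost can be sketched as follows. This is purely illustrative: the function name, the linear cost model, and the weights are assumptions, not the paper's actual definition.

```python
def efficiency_score(quality, energy_wh, runtime_s, w_energy=1.0, w_time=1.0):
    """Hypothetical quality-per-cost efficiency score (NOT the paper's metric).

    quality   : prediction quality in [0, 1], e.g. accuracy or F1
    energy_wh : measured energy consumption in watt-hours
    runtime_s : wall-clock time in seconds
    Higher is better: quality divided by a weighted resource cost.
    """
    cost = w_energy * energy_wh + w_time * runtime_s
    if cost <= 0:
        raise ValueError("resource cost must be positive")
    return quality / cost

# Two models with equal quality: the one using half the energy scores higher.
a = efficiency_score(quality=0.90, energy_wh=50.0, runtime_s=600.0)
b = efficiency_score(quality=0.90, energy_wh=25.0, runtime_s=600.0)
```

Any real instantiation would also need the compensation terms for dataset size and host performance that the abstract mentions.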
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems, but evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric, Explanation Significance Assessment (ESA), and its extension, Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability: by evaluating the performance of XAI methods and the underlying AI models, they contribute to trustworthy AI systems and bridge the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
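The weighted-assessment idea behind WESA can be illustrated with a generic weighted aggregate over per-criterion scores. The criterion names, weights, and aggregation rule below are assumptions for illustration only; ESA/WESA's actual formulas are defined in the paper and may differ.

```python
def weighted_assessment(components, weights):
    """Illustrative weighted aggregate of per-criterion scores in [0, 1].

    `components` maps criterion name -> score; `weights` assigns each
    criterion a non-negative importance. This mirrors only the general
    shape of a weighted assessment, not WESA's exact definition.
    """
    total_w = sum(weights[k] for k in components)
    return sum(components[k] * weights[k] for k in components) / total_w

score = weighted_assessment(
    {"spatial_precision": 0.8, "focus_overlap": 0.6, "relevance_accuracy": 0.9},
    {"spatial_precision": 2.0, "focus_overlap": 1.0, "relevance_accuracy": 1.0},
)
```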
Up until now, it has been shown that large parts of the so-called Industry 4.0 are impacted by Machine Learning (ML) in one way or another. In many shopfloor situations, different sensors are involved, and all data is eventually structured, accumulated, and prepared for application in various ML-based scenarios, e.g., predictive maintenance of a machine, quality monitoring of manufactured workpieces, or handling domain-specific aspects of the respective fabricator or product. As the physical environment of a Cyber Physical System (CPS) can change rapidly, the overall Data Acquisition (DAQ) process and ML training are impacted, too. This work focuses on datasets consisting of small amounts of tabular information and how to utilize them in image-based Neural Networks (NN) with respect to meta learning and multimodal transformations. To this end, the conceptual utilization of a DAQ system in industrial environments is discussed regarding a variety of techniques for preprocessing and generating visual material from multimodal data. The outcome of such operations is a new dataset which is then applied in model training. The presented approach is therefore three-fold. First, analysing the concept of predicting the similarity of structured and numerical data in different datasets yields indicators of the feasibility of applying the methodology in related but more sophisticated learning scenarios. Although ongoing time-series data differs from simple multi-class data in terms of a chronological dimension, basic classification concepts are applied to it and evaluated. In order to extend the similarity prediction with a temporal component, the discussed methods are extended by multimodal transformations and a subsequent utilization in Siamese Neural Networks (SNN).
By discussing the concept of applying visual representations of structured time-series data in a meta-learning context, known trends and historic information can be utilized for generating real-world test samples and predicting their validity at inference.
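One hypothetical multimodal transformation of the kind discussed — turning a 1-D tabular sensor series into a small image an image-based NN can consume — can be sketched like this. The normalisation and column-wise rendering are assumptions; the paper's actual encodings may differ.

```python
import numpy as np

def series_to_image(series, size=32):
    """Render a 1-D sensor series as a square grayscale image.

    Values are min-max normalised, resampled to `size` columns, and the
    row is replicated vertically to form a column-wise intensity map.
    A hypothetical encoding for feeding tabular data to a CNN/SNN."""
    s = np.asarray(series, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalise to [0, 1]
    idx = np.linspace(0, len(s) - 1, size)
    resampled = np.interp(idx, np.arange(len(s)), s)  # resample to width
    img = np.tile(resampled, (size, 1))               # replicate rows
    return (img * 255).astype(np.uint8)

img = series_to_image([0.0, 1.0, 0.5, 0.25], size=8)
```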
Operations within a Cyber Physical System (CPS) environment are naturally diverse, and the resulting data sets include complex relations between the sensors of the shopfloor device setup and their respective configuration. As Machine Learning (ML) can increase the success of industrial plants in a variety of cases, like smart controlling, intrusion detection, or predictive maintenance, clarifying responsibilities and operations for the whole lifecycle supports evaluating the potentially feasible scenarios. In this work, the need for highly configurable and flexible modules is demonstrated by depicting the complex possibilities of extending simple Machine Learning Operations (MLOps) pipelines with additional data sources, e.g., sensors. In addition to each module's core functionality, arbitrary evaluation logic or data-structure-specific anomaly detection can be integrated into the pipeline. With the creation of audit trails for all operational modules, automated reports can be generated to increase the accountability of the different physical devices and the data-related processing. The concept is evaluated in the context of the project Collaborative Smart Contracting Platform for digital value-added Networks (KOSMoS), where a sensor is part of an ML pipeline and audit trails are realized using Blockchain (BC) technology.
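The audit-trail idea can be sketched as a pipeline module that hash-chains one record per run, so tampering with any earlier record invalidates the chain. In KOSMoS the trail is anchored on a blockchain; the in-memory chain, class name, and record layout below are illustrative assumptions only.

```python
import hashlib
import json

class AuditedStep:
    """Sketch of an MLOps pipeline module that appends a hash-chained
    audit record for every invocation (tamper-evidence idea only;
    the real system persists such trails on a blockchain)."""

    def __init__(self, name, fn):
        self.name, self.fn, self.trail = name, fn, []

    def __call__(self, data):
        result = self.fn(data)
        prev = self.trail[-1]["hash"] if self.trail else "0" * 64
        record = {"step": self.name, "prev": prev,
                  "payload": hashlib.sha256(repr(result).encode()).hexdigest()}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.trail.append(record)
        return result

step = AuditedStep("scale", lambda xs: [x * 2 for x in xs])
out = step([1, 2, 3])
```

Each record's `prev` field links to the previous record's hash, so a verifier can replay the chain and detect any modified entry.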
Retinopathy of Prematurity (ROP) is the leading cause of childhood blindness globally, with babies born preterm having a higher probability of contracting the disease. Diagnosis remains an economic burden for many countries; a lack of ophthalmologists, coupled with non-existent national screening guidelines, is still a challenge. To diagnose the disease, fundus photography is conducted and printout images are analyzed to determine the presence or absence of the disease. With the increase in the development of smartphones with advanced image capturing and processing features, using smartphones to capture retina images for disease diagnosis is becoming a common trend. For regions where ophthalmologists are few and/or for low-resource regions with little or no retina capturing equipment, using smartphones to capture retina images is an effective method. This approach, however, faces challenges: different smartphones produce images of different resolutions, and some images are darker than others. Smartphone retina image capturing also has a smaller field of view, ranging between 45°–90°, which is a major limitation. A lens supporting a bigger view can be combined with this approach to provide a wide view of 130°. This enlargement, however, distorts image quality and may result in the loss of some image features. To overcome these challenges, this work develops an improved U-Net model to preprocess images captured using smartphones for ROP diagnosis. Our focus is to determine the presence or absence of the disease from smartphone-captured images. Because the images are captured with a smaller field of view (FOV), we develop an improved U-Net model by adding patches to enhance the image circumference, extract all features from the image, and use the extracted features to train a U-Net model for disease diagnosis. The model results outperformed similar recent developments.
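Patch-based preprocessing for a U-Net is commonly implemented by tiling the image into fixed-size squares. The sketch below shows one such tiling with zero-padded borders; it illustrates the general technique, and the paper's exact patching scheme may differ.

```python
import numpy as np

def extract_patches(image, patch=64):
    """Tile a 2-D image into non-overlapping `patch` x `patch` squares,
    zero-padding the right/bottom borders so every pixel is covered.
    Returns an array of shape (n_patches, patch, patch)."""
    h, w = image.shape
    pad_h, pad_w = -h % patch, -w % patch
    padded = np.pad(image, ((0, pad_h), (0, pad_w)))
    return (padded
            .reshape(padded.shape[0] // patch, patch,
                     padded.shape[1] // patch, patch)
            .swapaxes(1, 2)
            .reshape(-1, patch, patch))

# A 100x130 image pads to 128x192 and yields a 2x3 grid of patches.
patches = extract_patches(np.zeros((100, 130)), patch=64)
```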
Evolutionary strategy is increasingly used for optimization in various machine learning problems. It scales very well, even to high-dimensional problems, and its ability to globally self-optimize in flexible ways provides new and exciting opportunities when combined with more recent machine learning methods. This paper describes a novel approach for the optimization of models with a data-driven evolutionary strategy. The optimization can be applied directly as a preprocessing step and is therefore independent of the machine learning algorithm used. The experimental analysis of six different use cases shows that, on average, better results are attained than without the evolutionary strategy. Furthermore, it is shown that the best individual models are also achieved with the help of the evolutionary strategy. The six use cases were of different complexity, which reinforces the idea that the approach is universal and does not depend on specific use cases.
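A minimal elitist evolutionary strategy — select the best individuals, mutate them with Gaussian noise, repeat — can be sketched as follows. This shows only the generic optimiser the paper builds on; the paper's data-driven variant adds machinery not reproduced here.

```python
import random

def evolve(fitness, dim, pop=20, elite=5, sigma=0.2, generations=60, seed=0):
    """Minimal elitist evolutionary strategy (minimisation).

    Keeps the `elite` best individuals each generation and fills the
    rest of the population with Gaussian mutations of them."""
    rng = random.Random(seed)
    parents = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        best = sorted(parents, key=fitness)[:elite]           # select elite
        parents = [[g + rng.gauss(0, sigma) for g in rng.choice(best)]
                   for _ in range(pop)]                       # mutate
        parents[:elite] = best                                # keep elites
    return min(parents, key=fitness)

# Minimise the sphere function; its optimum is the origin.
best = evolve(lambda x: sum(g * g for g in x), dim=3)
```

Because elites survive unchanged, the best fitness is monotonically non-increasing across generations.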
Data scientists, researchers, and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks like YOLO use bounding boxes to localize objects in an image.
The principal aim of this paper is to address the lack of an effective metric for evaluating the results of bounding box regression in object detection networks when boxes do not overlap or lie completely within each other.
The standard metrics, like IoU, fail to differentiate between results that do not overlap but differ in the distance between the predicted bounding box and the label.
To solve this challenge, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of existing metrics (IoU, GIoU, and DIoU) and extends them with a similarity factor. By assigning a weight to each component of the metric, it allows for a clear differentiation between the three possible cases of box positions (not overlapping, overlapping, one box inside the other).
The result of this paper is a new metric that outperforms the existing metrics such as IoU, GIoU and DIoU by providing a more understandable measure of the performance of object detection models. This provides researchers and users in the field of explainable AI with a metric that allows the evaluation and comparison of prediction and label bounding boxes in an understandable way.
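The blind spot of the baseline metric is easy to demonstrate in code: standard IoU is zero for every pair of disjoint boxes, no matter how far apart they are. The sketch below implements plain IoU (a well-known formula, not the paper's UIoU) to make that failure mode concrete.

```python
def iou(box_a, box_b):
    """Standard Intersection over Union for axis-aligned boxes given as
    (x1, y1, x2, y2). This is the baseline metric that UIoU extends;
    UIoU itself is defined in the paper and not reproduced here."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Disjoint boxes score 0 regardless of distance -- the gap UIoU targets:
near = iou((0, 0, 1, 1), (2, 0, 3, 1))
far = iou((0, 0, 1, 1), (50, 0, 51, 1))
```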
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks.
The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
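The core loop of such a trust management system — accumulate interaction outcomes per node, gate access on a trust threshold — can be sketched without any blockchain machinery. The class, the exponential-moving-average update, and all parameter values below are hypothetical stand-ins, not the paper's design.

```python
class TrustManager:
    """Toy trust evaluator: per-node exponential moving average of
    interaction outcomes, with a threshold gate for access decisions.
    A hypothetical stand-in for the paper's blockchain-backed TMS."""

    def __init__(self, alpha=0.3, threshold=0.5, initial=0.5):
        self.alpha, self.threshold, self.initial = alpha, threshold, initial
        self.scores = {}

    def report(self, node, positive):
        """Fold one observed interaction (good or bad) into the score."""
        old = self.scores.get(node, self.initial)
        outcome = 1.0 if positive else 0.0
        self.scores[node] = (1 - self.alpha) * old + self.alpha * outcome

    def allow(self, node):
        """Access decision: trust score must meet the threshold."""
        return self.scores.get(node, self.initial) >= self.threshold

tms = TrustManager()
for _ in range(5):
    tms.report("sensor-7", positive=False)   # repeated bad behaviour
```

After a run of negative reports the node's score decays below the threshold and access is denied, while unknown nodes start at the neutral initial score.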
A Review on Digital Wallets and Federated Service for Future of Cloud Services Identity Management
(2023)
In today’s technology-driven era, managing digital identities has become a critical concern due to the widespread use of online services and digital devices. This has led to a fragmented landscape of digital identities, burdening individuals with multiple usernames, passwords, and authentication methods. To address this challenge, digital wallets have emerged as a promising solution. These wallets empower users to store, manage, and utilize their digital assets, including personal data, payment information, and credentials. Additionally, federated services have gained prominence, enabling users to access multiple services using a single digital identity. Gaia-X is an example of such a service, aiming to establish a secure and trustworthy data infrastructure. This paper examines digital identity management, focusing on the application of digital wallets and federated services. It explores the categorization of identities needed for different cloud services, considering their unique requirements and characteristics. Furthermore, it discusses the future requirements for digital wallets and federated identity management in the cloud, along with the associated challenges and benefits. The paper also introduces a categorization scheme for cloud services based on security and privacy requirements, demonstrating how different identity types can be mapped to each category.
On the way to the smart factory, manufacturing companies investigate the potential of Machine Learning approaches like visual quality inspection, process optimisation, maintenance prediction, and more. In order to assess the influence of Machine Learning based systems on business-relevant key figures, many companies go down the path of "test before invest". This paper describes a novel and inexpensive distributed Data Acquisition System, ARTHUR (dAta collectoR sysTem witH distribUted sensoRs), to enable the collection of data for AI-based projects in research, education, and industry. ARTHUR is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force plate X-Y-Z force values, simple PLC signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The ARTHUR system consists of a master node, multiple measurement worker nodes, a local streaming system, and a gateway that stores the data in the cloud. The authors describe the hardware and software of this system and discuss its advantages and disadvantages.
ARTHUR – Distributed Measuring System for Synchronous Data Acquisition from Different Data Sources
(2023)
In industrial manufacturing lines, different machines are well orchestrated and applied for their well-defined purpose. As each of these machines must be monitored and maintained in the first place, there are scenarios in which a Data Acquisition system brings enormous benefits. Since the cost of such professional systems is often not appropriate or feasible for research projects or prototyping, a proof of concept is often achieved using end-user hardware. In this work, a distributed measurement system for supporting the collection of data is described with respect to AI-based projects for research and teaching. ARTHUR (meAsuRing sysTem witH distribUted sensoRs) is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force plate X-Y-Z force values, simple PLC switching signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The overall ARTHUR system is based on Raspberry Pis and consists of a master node, multiple independent measurement worker nodes, a streaming system realized with Redis, and a gateway that stores the data in the cloud. The major objectives of the ARTHUR system are scalability and support for low-cost measuring components while exclusively applying open-source software. The work at hand discusses the advantages and disadvantages regarding the hardware and software of this TCP/IP-based system.
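Synchronous acquisition across distributed workers hinges on every sample carrying a timestamp relative to a shared reference, so the master can align streams from different sensors. The record layout below is a hypothetical wire format for illustration; ARTHUR's actual message schema (streamed via Redis) is not specified in the abstract.

```python
import json
import time

def make_sample(worker_id, channel, value, t0):
    """Serialize one measurement as a JSON record with a timestamp
    relative to a shared sync point t0 (hypothetical format; the real
    ARTHUR schema may differ)."""
    return json.dumps({
        "worker": worker_id,
        "channel": channel,
        "t_rel": round(time.monotonic() - t0, 6),  # seconds since sync
        "value": value,
    }, sort_keys=True)

t0 = time.monotonic()                    # master broadcasts the sync point
rec = json.loads(make_sample("worker-1", "force_z", 12.7, t0))
```

With a common `t0`, samples from independent workers can be merged into one time-ordered stream at the gateway, regardless of when each worker flushes its buffer.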
Supervised object detection models are trained to recognize certain objects. These models fall into two types: single-stage detectors and two-stage detectors. Single-stage detectors need just one pass through the model to predict all the bounding boxes, whereas two-stage detectors must first estimate the image regions where an object could be located. Due to their speed and simplicity, single-stage anchor-based models are used in many industrial settings. Training such models requires bounding boxes that describe the spatial location of an object, which are usually drawn by an expert. However, the question remains: how much area should be considered when drawing the bounding boxes? In this paper, we demonstrate the effects that the size and placement of a rectangular bounding box can have on the performance of anchor-based models. For this, we first perform experiments on a synthetically generated binary dataset and then on a real-world object detection dataset. Our results show that fixing the size of the bounding boxes can help improve the performance of the model in the case of single-class object detection (approximately 50% improvement in mAP@[.5:.95] for the real-world dataset). Furthermore, we also demonstrate how freely available tools can be combined to obtain the best possible semi-automated object labeling pipeline.
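Fixing the size of the bounding boxes, as evaluated above, amounts to replacing each annotated box with a constant-size box around the same centre. The helper below sketches that normalisation; whether the paper fixes boxes exactly this way (centre-preserving, square) is an assumption.

```python
def fix_box_size(box, size):
    """Replace an (x1, y1, x2, y2) box with a `size` x `size` box around
    the same centre -- one plausible reading of the fixed-size label
    normalisation whose effect on anchor-based detectors is measured."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # keep the centre
    half = size / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

# A 20x40 expert-drawn box becomes a 20x20 box around the same centre.
fixed = fix_box_size((10, 10, 30, 50), size=20)
```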