Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems, but evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric, the Explanation Significance Assessment (ESA), and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification: by evaluating the performance of XAI methods and the underlying AI models, they help ensure both accuracy and explainability, bridge the gap between the two, and thereby contribute to trustworthy and interpretable AI systems.
Retinopathy of Prematurity (ROP) is the leading cause of childhood blindness globally, with babies born preterm having a higher probability of contracting the disease. Diagnosis remains an economic burden for many countries; a shortage of ophthalmologists, coupled with non-existent national screening guidelines, is still a challenge. To diagnose the disease, fundus photography is conducted and the printed images are analyzed to determine the presence or absence of the disease. With the growing availability of smartphones with advanced image-capturing and processing features, using smartphones to capture retina images for disease diagnosis is becoming a common trend. For regions with few ophthalmologists and/or low-resource regions with little or no retina-capturing equipment, capturing retina images with smartphones is an effective method. This approach, however, faces challenges: different smartphones produce images of different resolutions, and some images come out darker and others lighter. Smartphone retina capture also has a small field of view (FOV), ranging between 45°–90°, which is a major limitation. A lens supporting a bigger view can be combined with this approach to provide a wide view of 130°; this enlargement, however, distorts image quality and may lose some image features. To overcome these challenges, this work develops an improved U-Net model to preprocess images captured with smartphones for ROP diagnosis. Our focus is to determine the presence or absence of the disease from smartphone-captured images. Because the images are captured under a small FOV, we improve the U-Net model by adding patches to enhance the covered image circumference, extract all features from the image, and use the extracted features to train the U-Net for diagnosis. The model's results outperformed similar recent developments.
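The patch-based preprocessing idea can be sketched roughly as follows. This is an illustrative outline only, not the paper's implementation: the function name, patch size, and stride are our own, and a real pipeline would work on pixel arrays rather than toy lists.

```python
def extract_patches(image, patch, stride):
    """Split a 2-D image (given as a list of rows) into overlapping
    square patches. Overlapping patches let a segmentation model see
    every region of a small-FOV retina image more than once, so
    features near patch borders are not lost."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            # Crop a patch x patch window at (top, left).
            patches.append([row[left:left + patch] for row in image[top:top + patch]])
    return patches

# A toy 4x4 "image": with patch=2 and stride=1 (50% overlap per step),
# a 3x3 grid of windows fits, yielding 9 patches.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = extract_patches(img, patch=2, stride=1)
```

Choosing a stride smaller than the patch size trades extra computation for redundancy: each pixel appears in several patches, which smooths predictions when the patch outputs are later stitched back together.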
Evolutionary strategies are increasingly used for optimization in various machine learning problems. They scale very well, even to high-dimensional problems, and their ability to self-optimize globally in flexible ways provides new and exciting opportunities when combined with more recent machine learning methods. This paper describes a novel approach to model optimization with a data-driven evolutionary strategy. The optimization can be applied directly as a preprocessing step and is therefore independent of the machine learning algorithm used. The experimental analysis of six different use cases shows that, on average, better results are attained than without the evolutionary strategy. Furthermore, it is shown that the best individual models are also achieved with the help of the evolutionary strategy. The six use cases were of different complexity, which reinforces the idea that the approach is universal rather than dependent on specific use cases.
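As an illustration of the generic loop such an approach builds on, a minimal (μ, λ) evolution strategy on a toy objective might look like the sketch below. It is not the paper's data-driven variant; all names and parameter values are illustrative.

```python
import random

def evolution_strategy(objective, dim, mu=5, lam=20, sigma=0.5, generations=50, seed=0):
    """Minimal (mu, lambda) evolution strategy minimizing `objective`."""
    rng = random.Random(seed)
    # Start from a random parent population.
    parents = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent with Gaussian noise.
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append([x + rng.gauss(0, sigma) for x in p])
        # Comma selection: the next parents are the best offspring only.
        offspring.sort(key=objective)
        parents = offspring[:mu]
        sigma *= 0.95  # simple step-size decay instead of self-adaptation
    return min(parents, key=objective)

# Example: minimize the 3-D sphere function.
best = evolution_strategy(lambda x: sum(v * v for v in x), dim=3)
```

Because the loop only ever calls `objective` as a black box, it is agnostic to what is being optimized — which is exactly the property that lets such a strategy sit in front of an arbitrary machine learning algorithm as a preprocessing step.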
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks.
The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of nodes, including devices and users, and to make access decisions accordingly. While the approach is demonstrated on a blockchain, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation that demonstrates the behavior of the TMS and provides insights into its effectiveness.
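The simulation details are not reproduced here, but the core mechanism — a per-node trust score that access decisions are thresholded on — can be conveyed with a toy update rule. The exponential smoothing, the alpha value, and the threshold are our own assumptions, not the paper's model.

```python
def update_trust(score, outcome, alpha=0.2):
    """Exponentially weighted trust update: a good interaction pulls the
    score toward 1, a bad one pulls it toward 0."""
    target = 1.0 if outcome else 0.0
    return (1 - alpha) * score + alpha * target

def access_granted(score, threshold=0.6):
    """Grant access only to nodes whose trust score clears the threshold."""
    return score >= threshold

# A node starting at neutral trust, observed over five interactions.
score = 0.5
for outcome in [True, True, True, False, True]:
    score = update_trust(score, outcome)
```

A single bad interaction dents the score without immediately revoking access, while a run of bad behavior drives it below the threshold — the kind of gradual, evidence-based decision a TMS is meant to make.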
AAL applications are designed for elderly people and collect personally identifiable information (PII), e.g. health data. During normal operation, these data should be kept private; during emergencies, however, the information is critical for helpers and emergency doctors. This paper discusses the results of a survey on PII in AAL and demonstrates the need for special access control rules for such systems in emergency situations.
In edge/fog computing infrastructures, resources and services are offloaded to the edge, and computations are distributed among different nodes instead of being transmitted to a centralized entity. Distributed Hash Table (DHT) systems provide a way to organize and distribute computation and storage without involving a trusted third party. However, the physical locations of nodes are not considered during the creation of the overlay, which causes efficiency issues. In this paper, a Locality-aware Distributed Addressing (LADA) model is proposed that can be adopted in distributed infrastructures to create an overlay that takes the physical locations of participating nodes into account. LADA aims to address the efficiency issues of the store and lookup processes in a DHT overlay. Additionally, it addresses the privacy issue found in similar proposals and removes any fixed set of entities. Our studies showed that the proposed model is efficient and robust, and protects the privacy of the locations of the participating nodes.
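As a rough illustration of what "locality aware" means in a DHT, the sketch below first finds the replica candidates for a key on a plain hash ring and then prefers the physically closest one. This is a simplified stand-in, not LADA itself: the node names, the successor count, and the latency table are invented for the example.

```python
import hashlib

RING_BITS = 16

def ring_id(name):
    # Position a node or key on the ring by hashing its identifier.
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % (1 << RING_BITS)

def successors(key, nodes, r=3):
    # The r nodes clockwise from the key's ring position (Chord-style):
    # these are the candidate replicas responsible for the key.
    return sorted(nodes, key=lambda n: (ring_id(n) - ring_id(key)) % (1 << RING_BITS))[:r]

def locality_aware_lookup(key, nodes, latency_ms):
    # Among the replica candidates, contact the lowest-latency node first.
    return min(successors(key, nodes), key=lambda n: latency_ms[n])

nodes = ["berlin", "paris", "tokyo", "sydney", "lima"]
latency_ms = {"berlin": 12, "paris": 18, "tokyo": 210, "sydney": 260, "lima": 150}
best = locality_aware_lookup("sensor-42", nodes, latency_ms)
```

Note the split of concerns: correctness (which nodes are responsible for the key) is still decided purely by the hash ring, while locality only influences which of those responsible nodes is contacted — a plain overlay would ignore latency entirely.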
In the context of Industry 4.0, smart factories use advanced sensing and data analytics technologies to understand and monitor manufacturing processes. To enhance production efficiency and reliability, statistical Artificial Intelligence (AI) technologies such as machine learning and data mining are used to detect and predict potential anomalies within manufacturing processes. However, due to the heterogeneous nature of industrial data, the knowledge extracted from it is sometimes presented in a complex structure. This creates a semantic gap, i.e., a lack of interoperability among different manufacturing systems. Furthermore, as Cyber-Physical Systems (CPS) become more knowledge-intensive, a uniform knowledge representation of physical resources and real-time reasoning capabilities for analytic tasks are needed to automate the decision-making processes of these systems. These requirements highlight the potential of using symbolic AI for predictive maintenance.
To automate and facilitate predictive analytics in Industry 4.0, this paper presents a novel Knowledge-based System for Predictive Maintenance in Industry 4.0 (KSPMI). KSPMI is built on a novel hybrid approach that leverages both statistical and symbolic AI technologies. Statistical AI technologies such as machine learning and chronicle mining (a special type of sequential pattern mining) are used to extract machine degradation models from industrial data. Symbolic AI technologies, especially domain ontologies and logic rules, then use the extracted chronicle patterns to query and reason over system input data with rich domain and contextual knowledge. Concretely, Semantic Web Rule Language (SWRL) rules generated from chronicle patterns are combined with domain ontologies to perform ontology reasoning, which enables the automatic detection of machinery anomalies and the prediction of future events. KSPMI is evaluated and tested on both real-world and synthetic data sets.
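A chronicle pattern is, in essence, an ordered set of event types with time constraints between them. A much-simplified matcher conveys the idea; KSPMI's actual mining and SWRL-based reasoning are far richer, and the event names and gap bound below are invented for illustration.

```python
def matches_chronicle(events, pattern, max_gap):
    """Check whether the ordered event types in `pattern` occur in
    `events` (a list of (timestamp, type) pairs) with at most `max_gap`
    time units between consecutive matched events."""
    idx, last_t = 0, None
    for t, etype in sorted(events):  # process the log in time order
        if etype == pattern[idx] and (last_t is None or t - last_t <= max_gap):
            idx, last_t = idx + 1, t
            if idx == len(pattern):
                return True  # every pattern step was observed in order
    return False

# A toy sensor log: the full degradation sequence occurs within the gap bound.
log = [(1, "vibration_high"), (3, "temp_spike"), (4, "pressure_drop")]
alarm = matches_chronicle(log, ["vibration_high", "temp_spike", "pressure_drop"], max_gap=5)
```

In a hybrid system, a match like this would not raise the alarm directly; instead it would assert a fact (e.g., that a degradation pattern was observed on a machine) into the ontology, where rules combine it with domain knowledge to decide on maintenance actions.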