Health informatics plays a crucial role in modern healthcare provision, and training and continuous education are essential to strengthen the healthcare workforce's competence in health informatics. In this work, we present the training events within the EU-funded DigNest project, describing their aims, the subjects offered, and the overall evaluation of the results.
The digital transformation of companies is expected to increase the digital interconnection between different companies to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable, and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators, and service providers. This paper discusses how the major requirements for building hybrid business models can be met by using a blockchain to build a chain of trust and smart contracts for digitized contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for automating the workflows defined in contracts. Furthermore, it is shown that the number of failures is significantly reduced by using these contracts and a blockchain.
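The chain-of-trust idea above can be sketched as a minimal hash-linked maintenance log. This is an illustrative toy, not the paper's implementation; the entry fields and events are hypothetical stand-ins for the contractually agreed information sharing the abstract describes.

```python
import hashlib
import json
import time

def make_block(prev_hash, payload):
    """Create one tamper-evident log entry linked to its predecessor."""
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "payload": payload,
    }
    # Hash is computed over the entry body, then attached to it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and check the links between entries."""
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if curr["hash"] != expected or curr["prev_hash"] != prev["hash"]:
            return False
    return True

# Hypothetical log shared between machine tool and service provider.
genesis = make_block("0" * 64, {"event": "maintenance contract signed"})
entry = make_block(genesis["hash"], {"event": "maintenance performed"})
chain = [genesis, entry]
assert verify_chain(chain)
```

Any later modification of a logged event changes its recomputed hash and breaks verification, which is the traceability property the blockchain provides here.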
Machine learning applications such as machine condition monitoring and predictive maintenance have become state of the art in Industry 4.0. Decision trees are one of many machine learning algorithms used in decision-making processes. A new approach for creating distributed decision trees, called node-based parallelization, is presented. It allows data to be classified through a network of industrial devices, with each industrial device responsible for a single classification rule. Nodes that react incorrectly, for example due to an attack, are also taken into account using a variety of methods to keep the decision-making process correct and robust.
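A minimal sketch of the node-based idea, assuming each device evaluates one threshold rule and routes the sample to the next node. The features, thresholds, and labels are hypothetical; the paper's robustness methods against misbehaving nodes are not modeled here.

```python
class RuleNode:
    """One industrial device holding a single classification rule."""
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # index of the feature this node tests
        self.threshold = threshold
        self.left = left            # next node or class label if value <= threshold
        self.right = right          # next node or class label otherwise

def classify(node, sample):
    """Route a sample through the network of rule nodes to a leaf label."""
    while isinstance(node, RuleNode):
        node = node.left if sample[node.feature] <= node.threshold else node.right
    return node

# Hypothetical two-rule network: temperature first, then vibration.
tree = RuleNode(0, 70.0,
                "ok",
                RuleNode(1, 0.5, "warn", "fail"))
assert classify(tree, [65.0, 0.9]) == "ok"
assert classify(tree, [80.0, 0.9]) == "fail"
```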
Ensuring data quality is central to the digital transformation in industry. Business processes such as predictive maintenance or condition monitoring can be implemented or improved based on the available data. To guarantee high data quality, a single data validation system is usually used to validate the production data for further use. However, with a single system an attacker needs only one successful attack to corrupt the whole system. We present a new approach in which a data validation system using multiple different validators minimizes the attacker's probability of success. The validators are arranged in clusters based on their properties. For each validation process, a challenge specifies which validators should perform the current validation; results from other validators are dropped. This ensures that anomalies can be detected during the validation process even if more than half of the validators are corrupted.
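The challenge mechanism might be sketched as follows. The selection scheme (hashing the challenge to seed a PRNG) and the majority vote are illustrative assumptions, not the paper's protocol; the validator functions are hypothetical.

```python
import hashlib
import random

def select_validators(challenge, validators, k):
    """Deterministically pick k validator names from a challenge string.

    Hypothetical scheme: the challenge seeds a PRNG, so all parties
    agree on the same subset without central coordination.
    """
    seed = int(hashlib.sha256(challenge.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return rng.sample(sorted(validators), k)

def validate(challenge, validators, k, datum):
    """Majority vote over the selected validators; others are ignored."""
    chosen = select_validators(challenge, validators, k)
    votes = [validators[name](datum) for name in chosen]
    return sum(votes) > k // 2

# Two honest range-checking validators outvote one corrupted validator
# that accepts everything.
validators = {
    "v1": lambda x: 0 <= x <= 100,   # honest
    "v2": lambda x: 0 <= x <= 100,   # honest
    "v3": lambda x: True,            # corrupted: accepts any value
}
assert validate("batch-42", validators, 3, 50)        # valid datum accepted
assert not validate("batch-42", validators, 3, 999)   # anomaly still caught
```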
Distributed machine learning algorithms that employ Deep Neural Networks (DNNs) are widely used in Industry 4.0 applications, such as smart manufacturing. The layers of a DNN can be mapped onto different nodes located in the cloud, at the edge, and on the shop floor to preserve privacy. The quality of the data that is fed into and processed through the DNN is of utmost importance for critical tasks, such as inspection and quality control. Distributed Data Validation Networks (DDVNs) are used to validate the quality of the data; however, they are prone to single points of failure when an attack occurs. This paper proposes QUDOS, an approach that uses quorums to enhance the security of a distributed DNN supported by DDVNs. The proposed approach allows individual nodes that are corrupted by an attack to be detected or excluded when the DNN produces an output. Metrics such as the corruption factor and the success probability of an attack are considered for evaluating the security of DNNs. A simulation study demonstrates that if the number of corrupted nodes is less than the threshold for decision-making in a quorum, the QUDOS approach always prevents attacks. Furthermore, the study shows that increasing the size of the quorum has a greater impact on security than increasing the number of layers. One merit of QUDOS is that it enhances the security of DNNs without requiring any modifications to the algorithm, and it can therefore be applied to other classes of problems.
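The threshold property reported in the simulation study can be illustrated with a simple hypergeometric model of quorum corruption. This is a simplifying assumption for illustration, not the paper's exact analysis: it asks how likely a randomly drawn quorum is to contain enough corrupted members to sway a decision.

```python
import math

def attack_success_probability(n, c, q, t):
    """Probability that a random quorum of size q, drawn from n nodes
    of which c are corrupted, contains at least t corrupted members.

    Hypergeometric tail -- a simplified model of a quorum decision,
    not the paper's exact formula.
    """
    total = math.comb(n, q)
    return sum(math.comb(c, k) * math.comb(n - c, q - k)
               for k in range(t, min(c, q) + 1)) / total

# With fewer corrupted nodes than the decision threshold, no quorum
# can be swayed: the attack never succeeds.
assert attack_success_probability(n=10, c=2, q=5, t=3) == 0.0
```

Once the corrupted population reaches the threshold, the probability becomes positive, which mirrors the "always prevents attacks below the threshold" result stated in the abstract.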
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes an ML Birth Certificate and ML Family Tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
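A minimal sketch of what such a certificate chain might look like, with hypothetical fields; the abstract's proposal stores these records on a blockchain rather than in memory, and the real certificate contents are defined in the paper.

```python
import hashlib
import json

def fingerprint(model_bytes):
    """Identity of a model: hash of its serialized weights."""
    return hashlib.sha256(model_bytes).hexdigest()

def birth_certificate(model_bytes, metadata, parent_cert=None):
    """Hypothetical ML Birth Certificate entry.

    Linking each (re)trained model to its parent forms the
    'ML Family Tree' described in the abstract.
    """
    cert = {
        "model_hash": fingerprint(model_bytes),
        "metadata": metadata,
        "parent": parent_cert["model_hash"] if parent_cert else None,
    }
    cert["cert_hash"] = hashlib.sha256(
        json.dumps(cert, sort_keys=True).encode()
    ).hexdigest()
    return cert

# A retrained model's certificate points back at the original model,
# so unnoticed modification of the weights changes the fingerprint.
base = birth_certificate(b"weights-v1", {"dataset": "d1", "epochs": 10})
retrained = birth_certificate(b"weights-v2", {"dataset": "d2"}, base)
assert retrained["parent"] == base["model_hash"]
```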
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification: by evaluating the performance of XAI methods and the underlying AI models, they ensure both accuracy and explainability and thus contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability.
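As an illustration only, a weighted aggregation over the three named components might look like the sketch below. The function name, the linear combination, and the default weights are assumptions; the actual ESA/WESA definitions are given in the paper and are not reproduced here.

```python
def wesa(spatial_precision, focus_overlap, relevance_accuracy,
         weights=(1 / 3, 1 / 3, 1 / 3)):
    """Hypothetical weighted aggregation of the three components the
    abstract names (spatial precision, focus overlap, relevance
    accuracy), each assumed to lie in [0, 1]."""
    components = (spatial_precision, focus_overlap, relevance_accuracy)
    return sum(w * c for w, c in zip(weights, components))

# Equal weights reduce to a plain average of the three components.
assert abs(wesa(0.9, 0.6, 0.9) - 0.8) < 1e-9
```

Making the weights explicit is what distinguishes a WESA-style score from an unweighted ESA-style one: a certifier could, for instance, weight relevance accuracy more heavily for medical use cases.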
Data scientists, researchers, and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks like Yolo localize an object in the image by predicting bounding boxes.
The principal aim of this paper is to address the lack of an effective metric for evaluating the results of bounding box regression in object detection networks when boxes do not overlap or lie completely within each other.
Standard metrics such as IoU fail to differentiate between results that do not overlap but differ in the distance between the predicted bounding box and the label.
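This failure case can be demonstrated with the standard IoU and DIoU formulas (DIoU adds a normalized center-distance penalty); this contrast illustrates the kind of distance sensitivity the problem calls for, not the proposed metric itself.

```python
def iou(a, b):
    """Standard Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def diou(a, b):
    """Distance-IoU: IoU minus the squared center distance normalized by
    the squared diagonal of the smallest enclosing box."""
    center = lambda r: ((r[0] + r[2]) / 2, (r[1] + r[3]) / 2)
    (ax, ay), (bx, by) = center(a), center(b)
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    dist2 = (ax - bx) ** 2 + (ay - by) ** 2
    return iou(a, b) - dist2 / diag2

# IoU cannot tell these two misses apart; a distance-aware metric can.
gt = (0, 0, 10, 10)
near, far = (11, 0, 21, 10), (50, 0, 60, 10)
assert iou(gt, near) == iou(gt, far) == 0.0
assert diou(gt, near) > diou(gt, far)
```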
To solve this challenge, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of existing metrics (IoU, GIoU, and DIoU) and extends them with a similarity factor. By assigning a weight to each component of the metric, it allows a clear differentiation between the three possible cases of box positions: not overlapping, overlapping, and boxes inside each other.
The result of this paper is a new metric that outperforms existing metrics such as IoU, GIoU, and DIoU by providing a more understandable measure of the performance of object detection models. This gives researchers and users in the field of explainable AI a metric for evaluating and comparing predicted and label bounding boxes in an understandable way.
Artificial intelligence (AI) holds enormous potential in numerous products and services, especially in healthcare and medical technology. Explainability is a central prerequisite for certification procedures around the world and for fulfilling transparency obligations. Explainability tools increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but they lack precision.
This paper adapts FastCAM to the domain of medical instrument detection in endoscopy images. The results show that the Domain-Adapted (DA) FastCAM provides better results for the model's focus than standard FastCAM weights.