Defining tasks and activities for academic nursing in community and long-term care arrangements
(2023)
Transcultural Student Research on SDGs - A Higher Education Project for Sustainable Development
(2023)
The charge response of a force applied to piezoelectric stack actuators was characterized in the range of 0 N to 20 N for application in piezoelectric self-sensing. Results show linear behavior between applied force and collected charge for both actuators tested in this study. One actuator exhibits a 3.55 times higher sensitivity slope than the other, attributable to its larger capacitance. An error analysis reveals a reduction of the relative error in charge measurement with rising forces applied to the actuators.
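As a rough illustration of how such a sensitivity slope can be extracted from measurements, the sketch below fits a least-squares line to paired force/charge samples. The function name and the synthetic 2.5 charge-units-per-newton slope are illustrative assumptions, not the study's measured values.

```python
def sensitivity_slope(forces_n, charges_nc):
    """Least-squares slope (charge per unit force) of charge vs. applied force."""
    n = len(forces_n)
    mean_f = sum(forces_n) / n
    mean_q = sum(charges_nc) / n
    num = sum((f - mean_f) * (q - mean_q) for f, q in zip(forces_n, charges_nc))
    den = sum((f - mean_f) ** 2 for f in forces_n)
    return num / den

# Synthetic, perfectly linear response of 2.5 charge units per newton:
forces = [0.0, 5.0, 10.0, 15.0, 20.0]
charges = [2.5 * f for f in forces]
```

With real, noisy samples the same fit yields the sensitivity estimate, and the residuals give a handle on the relative measurement error discussed above.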
In this paper, we present a study on the utilization of smart medical wearables and the user manuals of such devices. A total of 342 individuals provided input for 18 questions that address user behavior in the investigated context and the connections between various assessments and preferences. The presented work clusters individuals based on their professional relation to user manuals and analyzes the obtained results separately for these groups.
Health informatics plays a crucial role in modern healthcare provision. Training and continuing education are essential to strengthen the healthcare workforce in health informatics. In this work, we present the training events within the EU-funded DigNest project. The aims of the training events, the subjects offered, and the overall evaluation of the results are described in this paper.
This poster presents the Montenegrin Digital Academic Innovation Hub, which aims to support education, innovation, and academia-business cooperation in medical informatics (one of four priority areas) at the national level in Montenegro. The poster describes the Hub topology and its organisation into two main nodes, with services established within key pillars: Digital Education; Digital Business Support; Innovation and Cooperation with Industry; and Employment Support.
Supervised object detection models are trained to recognize certain objects. These models fall into two types: single-stage detectors and two-stage detectors. Single-stage detectors need only one pass through the model to predict all bounding boxes, whereas two-stage detectors must first estimate the image regions where an object could be located. Due to their speed and simplicity, single-stage anchor-based models are used in many industrial settings. Training such models requires bounding boxes that describe the spatial location of an object, which are usually drawn by an expert. However, the question remains: how much area should be considered when drawing the bounding boxes? In this paper, we demonstrate the effects that the size and placement of a rectangular bounding box can have on the performance of anchor-based models. For this, we first perform experiments on a synthetically generated binary dataset and then on a real-world object detection dataset. Our results show that fixing the size of the bounding boxes can help improve the performance of the model in the case of single-class object detection (approximately 50% improvement in mAP@[.5:.95] for the real-world dataset). Furthermore, we also demonstrate how freely available tools can be combined to obtain the best possible semi-automated object labeling pipeline.
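One way to read "fixing the size of the bounding boxes" is to replace every annotated box with a box of constant size centred on the original annotation. The minimal sketch below illustrates that preprocessing idea; the function name and the `(x1, y1, x2, y2)` box format are assumptions, not the authors' tooling.

```python
def fix_box_size(box, size):
    """Replace an (x1, y1, x2, y2) box with a size-by-size box on the same centre."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    half = size / 2
    return (cx - half, cy - half, cx + half, cy + half)
```

Applied to a whole label set, this removes box-size variance as a factor the anchor-based model has to learn.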
Grain Vision
(2023)
Microfiltration: Holistic Solutions for Sustainable and Economical Production Processes
(2023)
Prophylaxis in pink: Susceptibility of human oral bacteria to roseoflavin, a vitamin B2 analogue
(2023)
ARTHUR – Distributed Measuring System for Synchronous Data Acquisition from Different Data Sources
(2023)
In industrial manufacturing lines, different machines are well orchestrated and applied for their well-defined purpose. As each of these machines must be monitored and maintained, there are scenarios in which a Data Acquisition system brings enormous benefits. Since the cost of such professional systems is often not appropriate or feasible for research projects or prototyping, a proof of concept is often achieved with end-user hardware. In this work, a distributed measurement system for supporting the collection of data is described with respect to AI-based projects for research and teaching. ARTHUR (meAsuRing sysTem witH distribUted sensoRs) is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force-plate X-Y-Z force values, simple PLC switching signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The overall ARTHUR system is based on Raspberry Pis and consists of a master node, multiple independent measurement worker nodes, a streaming system realized with Redis, as well as a gateway that stores the data in the cloud. The major objectives of the ARTHUR system are scalability and the support for low-cost measuring components while relying solely on open-source software. The work at hand discusses the advantages and disadvantages regarding the hardware and software of this TCP/IP-based system.
On the way to the smart factory, manufacturing companies are investigating the potential of Machine Learning approaches such as visual quality inspection, process optimisation, maintenance prediction and more. In order to assess the influence of Machine Learning based systems on business-relevant key figures, many companies follow a "test before invest" path. This paper describes a novel and inexpensive distributed Data Acquisition System, ARTHUR (dAta collectoR sysTem witH distribUted sensoRs), to enable the collection of data for AI-based projects for research, education and industry. ARTHUR is arbitrarily expandable and has so far been used in the field of data acquisition on machine tools. Typical measured values are Acoustic Emission values, force-plate X-Y-Z force values, simple PLC signals, OPC-UA machine parameters, etc., which were recorded by a wide variety of sensors. The ARTHUR system consists of a master node, multiple measurement worker nodes, a local streaming system and a gateway that stores the data in the cloud. The authors describe the hardware and software of this system and discuss its advantages and disadvantages.
It has been shown that large parts of so-called Industry 4.0 are impacted by Machine Learning (ML) in some way or another. In many shopfloor situations, different sensors are involved and all data is eventually structured, accumulated and prepared for application in various ML-based scenarios, e.g., predictive maintenance of a machine, quality monitoring of manufactured workpieces or handling domain-specific aspects of the respective fabricator or product. As the physical environment of a Cyber-Physical System (CPS) can change rapidly, the overall Data Acquisition (DAQ) process and ML training are impacted, too. This work focuses on datasets which consist of small amounts of tabular information and how to utilize them in image-based Neural Networks (NN) with respect to meta learning and multimodal transformations. Therefore, the conceptual utilization of a DAQ system in industrial environments is discussed regarding a variety of techniques for preprocessing and generating visual material from multimodal data. The outcome of such operations is a new dataset which is then applied in model training. The presented approach is three-fold. First, by analysing the concept of predicting the similarity of structured and numerical data in different datasets, indicators of the feasibility of applying the methodology in related but more sophisticated learning scenarios can be gained. Second, although ongoing time-series data differs from simple multi-class data in terms of a chronological dimension, basic classification concepts are applied to it and evaluated. Third, in order to extend the similarity prediction with a temporal component, the discussed methods are extended by multimodal transformations and a subsequent utilization in Siamese Neural Networks (SNN).
By discussing the concept of applying visual representations of structured time-series data in a meta-learning context, known trends and historic information can be utilized for generating real-world test samples and predicting their validity at inference time.
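A deliberately simple form of such a multimodal transformation is to reshape a normalized feature row into a square grayscale grid before feeding it to an image-based network. The sketch below (names assumed) only illustrates the idea, not the specific transformations used in the work above.

```python
def row_to_image(row, size):
    """Arrange a flat list of normalized features into a size-by-size grid,
    zero-padding when the row is shorter than size * size pixels."""
    padded = list(row) + [0.0] * (size * size - len(row))
    return [padded[i * size:(i + 1) * size] for i in range(size)]
```

The resulting grids can be stacked and compared pairwise, which is the input shape a Siamese network expects for similarity prediction.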
XAI for Semantic Dependency: How to understand the impact of higher-level concepts on AI results
(2023)
Proactive behaviour of in-vehicle voice assistants is seen as key to developing increasingly intelligent and interactive systems. One of the main questions for proactive voice assistants is when opportune moments for engaging the user occur. We present a driving simulator study (N = 32) investigating different situations of proactive interaction during an automated ride. Based on previous findings for opportune moments of interaction during manual driving, the study's focus is on evaluating the influence of driving situations and the performance of a non-driving related activity (NDRA) on the opportuneness of a proactive interaction. The quantitative and qualitative findings show that most situations do not impact the opportuneness of a proactive interaction during an automated ride. However, an extreme traffic situation with an approaching emergency vehicle is considered inopportune. Travel time and the current state of the user should also be considered for the selection of an opportune moment. A validation of the results in a real road driving study is planned.
Prototype of a Controller and a Simulation Environment for VR-Based Laparoscopic Training
(2023)
Are Muscles In Musculoskeletal Pain Syndromes Objectively Stiffer Than Normal? - An Evidence Map
(2023)
Potential of a Real-Time Patient Monitoring System to Support Needs-Based Care
(2023)
Use of an Activity Table in Acute Care: Experiences and Results from Practice
(2023)
Retinopathy of Prematurity (ROP) is the leading cause of childhood blindness globally, with babies born preterm having a higher probability of contracting the disease. Diagnosis remains an economic burden for many countries; the lack of enough ophthalmologists, coupled with non-existent national screening guidelines, remains a challenge. To diagnose the disease, fundus photography is conducted and printout images are analyzed to determine the presence or absence of the disease. With smartphones increasingly offering advanced image capturing and processing features, their use to capture retina images for disease diagnosis is becoming a common trend. For regions where ophthalmologists are few and/or for low-resource regions with little or no retina capturing equipment, using smartphones to capture retina images is an effective method. This, however, has challenges: different smartphones produce images of different resolutions, and some images are darker, others lighter. Smartphone retina image capturing has a smaller field of view, ranging between 45°–90°, which is a major limitation. A lens can be combined with this approach to provide a wider view of 130°. This enlargement, however, distorts the image quality and may result in losing some image features. To overcome these challenges, this work develops an improved U-Net model to preprocess images captured using smartphones for ROP disease diagnosis. Our focus is to determine the presence or absence of the disease from smartphone-captured images. Because the images are captured under a smaller field of view (FOV), we develop an improved U-Net model by adding patches to enhance the image circumference and extract all features from the image, and use the extracted features to train a U-Net model for the disease diagnosis. The model's results outperformed similar recent developments.
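The improved U-Net itself is not reproduced here, but the patching step such a pipeline relies on can be illustrated with non-overlapping tiles. This is a sketch under the assumption of a 2-D image stored as a list of rows; real pipelines typically operate on multi-channel arrays.

```python
def extract_patches(image, patch_h, patch_w):
    """Split a 2-D image (list of rows) into non-overlapping patch_h-by-patch_w
    tiles, discarding any border that does not fill a whole patch."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_h + 1, patch_h):
        for left in range(0, w - patch_w + 1, patch_w):
            patches.append([row[left:left + patch_w]
                            for row in image[top:top + patch_h]])
    return patches
```

Training on patches rather than whole frames lets a model see every region of a small-FOV capture at full resolution.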
Changes of human trunk circumferences during different breathing styles in different positions
(2023)
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Data scientists, researchers and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks like YOLO output bounding boxes to localize objects in the image.
The principal aim of this paper is to address the lack of an effective metric for evaluating the results of bounding box regression in object detection networks when boxes do not overlap or lie completely within each other.
Standard metrics such as IoU fail to differentiate between results that do not overlap but differ in the distance between the predicted bounding box and the label.
To solve this challenge, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of existing metrics (IoU, GIoU and DIoU) and extends them with a similarity factor. Assigning a weight to each component of the metric allows a clear differentiation between the three possible cases of box position (not overlapping, overlapping, one box inside the other).
The result of this paper is a new metric that outperforms the existing metrics such as IoU, GIoU and DIoU by providing a more understandable measure of the performance of object detection models. This provides researchers and users in the field of explainable AI with a metric that allows the evaluation and comparison of prediction and label bounding boxes in an understandable way.
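UIoU itself is defined in the paper; the published baselines it builds on can be sketched to show the failure mode it addresses: IoU is zero for any pair of disjoint boxes, while DIoU (IoU minus the squared centre distance normalized by the squared diagonal of the enclosing box) still ranks a near miss above a distant one. Boxes are assumed to be `(x1, y1, x2, y2)` tuples.

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def diou(a, b):
    """DIoU = IoU - (centre distance)^2 / (enclosing-box diagonal)^2."""
    cxa, cya = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cxb, cyb = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    d2 = (cxa - cxb) ** 2 + (cya - cyb) ** 2
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou(a, b) - d2 / c2
```

For a label at (0, 0, 2, 2), both (3, 0, 5, 2) and (8, 0, 10, 2) score an IoU of exactly zero, whereas DIoU assigns the nearer prediction a higher value.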
In-situ SEM analysis tool for stretchable metal-elastomer-laminate-membranes for flexible sensors
(2023)
Evaluation of high compliant elastomer balloons for the identification of artery biomechanics
(2023)
When developing artificial intelligence (AI) applications in a clinical setting, it is particularly important to involve future users. Clinical staff in particular need a deeper understanding of the AI models and their working principles, both to accept future AI applications and to use them effectively. Accordingly, a participatory approach was chosen in the KIDELIR project, whose goal is to develop an AI support system for delirium prediction.
To achieve a practical design and development of the AI system, nurses and physicians are continuously involved in the research and development process. In the following, we report on how the participation process in the KIDELIR project has been designed so far and which further steps are planned. The focus lies on a reflection of the methods used to assess needs regarding delirium care from a nursing perspective and the benefits of an AI system for delirium prediction. Specifically, the combination of case vignettes, health information mapping, techno-mimesis, cognitive-affective mapping, and a group discussion is examined. In the further course of the project, in-depth guided interviews with the participants as well as physicians are planned.
In addition, further participatory formats are planned, including joint formats with physicians and nurses.
Separation of ventilation and cardiac activity on recorded voltages before EIT image reconstruction
(2023)
Digital transformation is now reaching into topics like End-of-life Care, Funeral Culture, and Coping with Grief. Those developments are inevitably accompanied by the growing challenge to design IT systems that are appropriate and helpful for the stakeholders involved. Our aim in this paper is to further introduce the rather new combined research field of Socioinformatics and Thanatology (the scientific study of death and dying) and to present it with the first results on which requirements to consider for the design of digital tools within ‘Thanatopractice’. By using Participatory Design and the Sustainability Awareness Framework (SusAF) in the context of three workshops on socio-technical systems (Online Pastoral Care, Virtual Graveyards, and AI Memory Avatars), we want to sensitize software practitioners to the multidimensional impacts of their products and services in a field, which the participants in the workshops often described as “highly sensitive”.
Year after year, software engineers celebrate new achievements in the field of AI. At the same time, the question about the impacts of AI on society remains insufficiently answered in terms of a comprehensive technology assessment. This article aims to provide software practitioners with a theoretically grounded and practically tested approach that enables an initial understanding of the potential multidimensional impacts. Subsequently, the results form the basis for discussions on AI software requirements. The approach is based on the Sustainability Awareness Framework (SusAF) and Participatory Design. We conducted three workshops on different AI topics: 1. Autonomous Driving, 2. Music Composition, and 3. Memory Avatars. Based on the results of the workshops we conclude that a two-level approach should be adopted: First, a broad one that includes a diverse selection of stakeholders and overall impact analysis. Then, in a second step, specific approaches narrowing down the stakeholders and focusing on one or few impact areas.
Companies are confronted with the challenge of having to transfer more and more knowledge in a shorter time to fewer available employees. At the same time, opportunities based on digitalization and smart services are rising. Digital training offers advantages such as flexibility, accessibility, interactivity and cost savings. This explorative paper investigates the framework conditions, requirements and opportunities for new approaches to providing knowledge via smart services for professional users. Further, the paper investigates how smart service approaches for knowledge provision can be transferred and integrated into product-service systems and suitable business models.
As machine learning becomes increasingly pervasive, its resource demands and financial implications escalate, necessitating energy and cost optimisations to meet stakeholder demands. Quality metrics for predictive machine learning models are abundant, but efficiency metrics remain rare. We propose a framework for efficiency metrics that enables the comparison of distinct efficiency types. A quality-focused efficiency metric is introduced that considers resource consumption, computational effort, and runtime in addition to prediction quality. The metric has been successfully tested for usability, plausibility, and compensation for dataset size and host performance. This framework enables informed decisions about the use and design of machine learning in an environmentally responsible and cost-effective manner.
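The abstract does not specify the metric's formula, so the sketch below is only a generic illustration of the underlying idea of a quality-focused efficiency score: dividing prediction quality by a weighted cost of energy and runtime. All names, parameters, and weights here are assumptions, not the paper's definition.

```python
def efficiency_score(quality, energy_wh, runtime_s, w_energy=0.5, w_time=0.5):
    """Illustrative quality-per-cost ratio: higher prediction quality at
    lower resource cost yields a larger score."""
    cost = w_energy * energy_wh + w_time * runtime_s
    return quality / cost
```

A score of this shape makes two models comparable when one trades a little accuracy for a large saving in energy or time.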
3D Computer Vision for the Industrial Metaverse - On the potentials of Neural Radiance Fields
(2023)
The industrial metaverse refers to the use of virtual reality (VR) and augmented reality (AR) technologies in the context of industry and manufacturing. It is envisioned as a shared, immersive digital space where people can interact with and manipulate virtual representations of physical objects and processes. The industrial metaverse has the potential to transform the way products are designed, manufactured, and maintained, enabling new levels of collaboration, automation, and innovation.
It further includes virtual representations of humans, also known as avatars. These avatars can be used to enable remote collaboration and communication between people in the virtual space. In this way, the industrial metaverse can facilitate virtual meetings, trainings, and other interactive experiences that involve human participants.
Neural Radiance Fields (NeRFs) are a powerful tool for synthesizing photorealistic images of 3D objects, including virtual representations of humans known as avatars. In this talk, we will discuss the potential applications of NeRFs in generating high-fidelity objects and avatars for use in the industrial metaverse.
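As one concrete building block, the frequency encoding from the original NeRF formulation maps each input coordinate to sines and cosines at doubling frequencies before it enters the MLP, which is what lets the network represent high-frequency detail. A minimal sketch for a single scalar coordinate:

```python
import math

def positional_encoding(p, n_freqs=4):
    """NeRF-style encoding of a scalar coordinate p:
    [sin(2^k * pi * p), cos(2^k * pi * p)] for k = 0 .. n_freqs - 1."""
    feats = []
    for k in range(n_freqs):
        freq = (2 ** k) * math.pi
        feats.append(math.sin(freq * p))
        feats.append(math.cos(freq * p))
    return feats
```

In a full NeRF, this encoding is applied per spatial and viewing-direction component, and the MLP regresses density and color from the concatenated features.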