apl. Prof. Dr.-Ing. habil. Ayoub Al-Hamadi

Department of Neuro-Information Technology
Current projects
ORAKEL: AI-supported early warning systems for relapse prediction in depressive disorders
Duration: 01.07.2024 to 31.12.2027
The ORAKEL project aims to use artificial intelligence (AI) to precisely identify early warning signs of depressive relapses, thus enabling timely intervention. By collecting multimodal data (video and audio recordings) and developing specific deep learning models, the project will analyze patients' behavioral patterns and emotional states. These innovative technologies will support clinical use via a user-friendly graphical interface and help to treat depressive episodes at an early stage.
The focus is on improving relapse prevention, optimizing psychiatric care and relieving the burden on medical staff by means of intelligent assistance systems. The project combines expertise in psychiatry and AI development to create personalized approaches to depression treatment and to advance research in key digital technologies.
Better prediction of relapse in depressive disorders by detecting early warning signs using AI (ORAKEL)
Duration: 01.05.2024 to 31.12.2027
Recent advances in AI and machine learning offer promising opportunities to improve the early detection of worsening depressive symptoms. Preliminary studies suggest that AI can analyze subtle cues in speech patterns, facial expressions and gestures to detect depressed mood and suicidal crises. For example, depressed people may exhibit changes in speech prosody, reduced facial expressions and fewer spontaneous gestures. There is also evidence that vital signs such as heart rate variability and sleep patterns are indicative of a person's mental state. In our project, we will directly compare how accurately the patient's state of illness, or their risk of relapse, can be assessed:
a) through the medical consultation (as has been common up to now),
b) through standardized ratings or interviews (as is currently common in psychiatric research),
c) by predicting relapses in depressive disorders through AI-based detection of early warning signs (the new approach of our project),
d) by combining the aforementioned approaches.
In this way, we will not only determine whether AI is in principle capable of detecting early warning signs of depression in a clinical context, but also whether it works better than conventional methods. Camera-based monitoring and AI-driven analyses could then provide real-time feedback for healthcare providers and thus enable earlier interventions. The detection of early warning signs of a relapse using artificial intelligence therefore offers considerable potential for improving the care of patients with depressive disorders. The further development of such technologies can also be a helpful addition, particularly given the limited time resources in outpatient care caused by a shortage of doctors. Adding AI-based analysis of speech, facial expressions, gestures and vital signs to the assessment of the course of the disease could help to better manage the outpatient treatment of depressive disorders and sustainably improve the quality of life of those affected.
This text was translated with DeepL
Resilient human-robot collaboration in a mixed-skill environment (ENABLING)
Duration: 01.01.2024 to 31.12.2027
Collaborative robot systems are a key technology in flexible, intelligent production, logistics and medicine: they make it possible to link the complementary skills of humans and robots in closely interlinked, potential-oriented collaboration, but also to substitute tasks and skills. The ENABLING project addresses the development of AI methods that complement the skills of robots and humans. This will enable innovations at the intersection of information technology and key enabling technologies and create the basis for future applications in mixed-skill environments in the lead markets. ENABLING will fundamentally change collaboration in mixed-skill work environments by enabling humans and robots to understand each other's processes, actions and intentions. ENABLING not only increases efficiency in production and logistics across the complete information processing chain, it also minimizes hazards in the work process.
Multimodal AI-based pain measurement in intermediate care patients in the postoperative phase
Duration: 01.06.2024 to 31.05.2027
The project deals with artificial intelligence methods for the automated, multimodal and continuous measurement of pain intensity in a post-operative environment on an intermediate care ward after major surgical procedures. In the long term, the technology should enable better treatment of pain and its causes for patients with limited communication skills by supporting and relieving medical staff in pain assessment through automated real-time pain monitoring and enabling more precise, individual and situation-specific analgesia. The technology could also be further developed, validated and used for other patient groups (e.g. children and dementia patients) in future projects.
A robust, reliable and multimodal AI system for pain quantification
Duration: 01.12.2023 to 30.11.2026
In Germany, more than 1.7 million people suffer from dementia. As they are affected by cognitive impairment, external assessment tools should be used for pain detection, since self-report does not provide reliable information for this patient group. Pain recognition in dementia is therefore a major challenge for clinical monitoring and will remain so for the foreseeable future. Thus, the development of a system for pain detection and quantification that fulfills the requirements for robustness and reliability is of great relevance for numerous applications in the clinical environment. For example, such technical support from an AI system would be desirable in emergency and acute medicine when making a diagnosis. The project will address the development of a robust, reliable and multimodal AI system for pain detection and quantification. It will firstly address the research goal of using deep neural networks and transfer learning with extensive existing in-the-wild databases to train various facial expression features and to increase robustness against variances (appearance, illumination, partial occlusion, etc.) that are underrepresented in available pain datasets. The aim is to create the basis for a technology suitable for future use in the clinical environment, with a focus on application in dementia patients, particularly for post-operative monitoring in recovery rooms.
Establishment of the "RoboLab" innovation laboratory
Duration: 01.01.2024 to 31.10.2026
The "RoboLab" project ensures the sustainable development and application of powerful and innovative methods for generating intuitive and productive interaction processes between humans and robots. These range from basic, generalizable deep-learning-driven AI modules to multi-modular robot demonstrators that can be flexibly adapted and used for both specialized and generalized processes in medicine, production and logistics in the lead market of Smart Production/Industry 4.0. The innovative human-centric system solutions created on the basis of the "RoboLab" project will be integrated into the acquired and modernized robots in order to expand them into complex dynamic systems and enable intuitive human-robot and robot-robot interaction. The focus here is on the development of adaptive and scalable systems whose capabilities can be flexibly modified depending on the current requirements and complexity of the scenario, so that they can act autonomously and interact intuitively in their field of application. The interaction of the NIT working group's areas of expertise in artificial intelligence, cognitive systems and robotics, flanked by the know-how of the cooperation partners, is an optimal prerequisite for achieving these research goals on the basis of state-of-the-art technology.
Need for assistance in human-robot collaboration
Duration: 01.08.2023 to 31.07.2026
The scientific objectives include the research and testing of real-time capable deep learning algorithms for
- environment detection and navigation with SLAM algorithms (Simultaneous Localization and Mapping),
- motion estimation of dynamic objects and user tracking in dense spaces,
- person recognition and identification in confined spaces and
- recognition of willingness to interact based on body and head poses and facial expressions.
A further scientific objective here is to design the algorithms in such a way that joint optimization of the respective sub-goals can be achieved by means of end-to-end learning.
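The joint optimization of such sub-goals can be illustrated in miniature: in end-to-end multi-task learning, the per-task losses are typically combined into a single weighted scalar that drives the gradients of a shared backbone. The task names and weights below are purely illustrative assumptions, not the project's actual objectives.

```python
# Hypothetical sketch of joint end-to-end optimization of several
# perception sub-goals; task names and weights are assumed examples.

def joint_loss(task_losses, weights):
    """Weighted sum of per-task losses; in end-to-end training this
    single scalar drives the gradients of a shared backbone."""
    assert task_losses.keys() == weights.keys()
    return sum(weights[t] * task_losses[t] for t in task_losses)

losses = {"slam": 0.8, "tracking": 1.2, "interaction": 0.5}
weights = {"slam": 1.0, "tracking": 0.5, "interaction": 2.0}
total = joint_loss(losses, weights)  # 0.8 + 0.6 + 1.0 = 2.4
```

Tuning the weights then trades off the sub-goals against each other instead of optimizing each in isolation.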
Implicit mobile human-robot communication for spatial action coordination with context-specific semantic environment modeling
Duration: 01.09.2022 to 30.04.2026
The use of robots in industry as well as in work and everyday life is becoming more and more flexible. Current methods for machine learning and adaptive motion planning are leading to more robust behavior and higher autonomy of robots. Nevertheless, collaborative human-robot interactions still suffer interruptions and breakdowns in cases where the human is not able to comprehend the robot's movement behavior. A common cause is that the human has an incorrect or limited picture of what the robot is currently perceiving and what its internal state is. This could be avoided if the robot could understand and incorporate the mental states and perspective of its interaction partner into its own action generation in order to actively establish a common understanding of the interaction. A key competence for such collaboration between humans and robots is the ability to communicate and mutually coordinate via implicit signals of body language and movement. The project investigates implicit human-robot communication in collaborative actions using the example of the joint construction of a shelf. In experimental studies, situations will be created and recorded in which the interaction and perception between human and robot is disturbed. On the one hand, new perception methods are explored that robustly detect interaction-relevant features based on head and body poses and facial expressions, even under occlusion. These are interpreted in the context of the action and the environment, so that implicit communication signals (e.g., turning toward, turning away, compliance, hinting, etc.) and internal states (e.g., approval, disapproval, willingness to interact, etc.) can be inferred.
On the other hand, new methods are being explored that allow the robot to infer the perspective and state of the human interlocutor in its own action planning and to actively request user reactions. This leads to a spatial coordination of the partners during the construction of the shelf, taking into consideration mutual perception and the goal of the action. Through active use of the robot's body pose, relative orientation and movement, conflict situations can be resolved in advance without the need for explicit instructions to the robot.
Gaze estimation based on the combined loss of regression and classification
Duration: 01.01.2022 to 31.03.2026
Human gaze is a crucial feature used in various applications such as human-robot interaction, autonomous driving and virtual reality. Recently, approaches using convolutional neural networks (CNNs) have made remarkable progress in predicting gaze direction. However, estimating the exact gaze direction in uncooperative in-the-wild situations (i.e., with partial occlusions, highly varying lighting conditions, etc.) is still a challenging problem. It is particularly difficult to capture the essential gaze information from the eye area, as this makes up only a small part of a detected face. In this project, a new multi-loss CNN-based network is developed to detect the angles of gaze direction (pitch and yaw) with high accuracy directly from facial images. By separating the shared features of the last layer of the network, two independent fully-connected layers are used for the regression of the two gaze angles in order to capture the characteristics of each angle. Furthermore, a coarse-to-fine strategy is applied using a multi-loss CNN that incorporates both classification and regression losses. We classify the gaze by combining a softmax layer with the cross-entropy loss, which results in a rough classification of the gaze angle (class). To predict the gaze angles, we calculate the class distribution followed by the regression loss of the gaze angle.
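The combined loss idea can be sketched as follows, under assumed values for bin width, angle range and loss weighting (the project's exact network and parameters may differ): the gaze angle is discretized into bins, a softmax over the bin logits yields the coarse classification, and the probability-weighted mean of the bin centers gives the continuous angle estimate for the regression loss.

```python
import math

BIN_WIDTH = 3.0                       # assumed 3-degree bins
BIN_CENTERS = [-90 + BIN_WIDTH * i + BIN_WIDTH / 2 for i in range(60)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def combined_loss(logits, true_angle_deg, alpha=1.0):
    """Cross-entropy on the coarse bin plus a regression loss on the
    fine angle obtained as the probability-weighted mean of bin centers."""
    p = softmax(logits)
    true_bin = int((true_angle_deg + 90) // BIN_WIDTH)   # coarse class label
    ce = -math.log(p[true_bin] + 1e-12)                  # classification loss
    pred = sum(pi * c for pi, c in zip(p, BIN_CENTERS))  # expected angle (deg)
    reg = (pred - true_angle_deg) ** 2                   # regression loss
    return ce + alpha * reg, pred
```

The classification term stabilizes training by anchoring the prediction to the right coarse bin, while the regression term refines the estimate within the bin.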
3D-based human-robot collaboration with spatial situation analysis for ad hoc assistance in dynamic goods transport processes
Duration: 01.09.2022 to 31.12.2025
In this project, methods are being researched and developed that enable a mobile pallet transport robot (AGV) to perform a higher-level semantic situation analysis of the logistical environment for worker-robot and robot-robot interactions. The first objective comprises the creation of maps, including self-localization, that incorporate dynamic semantic work objects. Another objective is the development of latency-optimized methods for the recognition, identification and tracking of workers in the logistics environment based on body and head pose and other indicators from which the willingness to interact is derived, in order to collaborate efficiently and robustly with the acting worker. The actions include specific activities from warehouse logistics (e.g. unloading, loading, searching for pallets), which are determined by including the context (localization of pallets, determination of the loading status) and worker-centered gesture and voice commands. The solution approaches developed in this sub-project contribute to the overall project goal of enabling targeted work coordination of several robots and precise worker-robot collaboration in a robust and efficient manner (transmission of commands, optimization of routes).
Development and systematic validation of a system for contactless, camera-based measurement of the heart rate variability
Duration: 01.06.2022 to 31.12.2025
Heart rate variability (HRV) provides important information for the medical analysis of the cardiovascular system and the activity of the autonomic nervous system, as well as for the diagnosis and prevention of diseases. Traditional HRV monitoring systems are contact-based techniques that require sensors to be attached directly to the person's body, such as an electrocardiogram (ECG) or contact photoplethysmography (PPG). These techniques are only partially suitable for long-term monitoring or early detection of disease symptoms. In addition, they can have negative effects on the monitored person, such as skin irritation or an increased risk of spreading disease germs due to direct contact. The aim of this research project is the optical measurement of heart rate variability from video images using PPG. PPG is an optical, non-invasive technology that uses light to record volumetric variations of blood circulation in the skin. In recent years, this technique has been realized remotely and contact-free through the use of cameras and has already been successfully used for measuring heart rate (HR) from video data. Measuring HRV, however, requires a temporally precise determination of the heartbeat peaks in the PPG signal. The high measurement accuracy of HR in the state of the art can only be achieved by strong temporal filtering, which makes it impossible to localize the heartbeats precisely in time. A further challenge is that even the smallest movements and facial expressions of the test persons lead to artifacts in the PPG signal. This is where this research project comes in, by systematically detecting these artifacts in the PPG signal and subsequently compensating for them. Up to now, almost all methods for measuring the PPG signal have been based on color value averaging over (partial) areas of the skin in the face. Motion compensation is not possible with these methods because position information is lost.
Deep neural networks (convolutional neural networks, CNNs) are well suited to training models that are invariant to movement. Using 3D head pose estimation methods and action unit recognition (facial muscle movements), a system will be trained to extract motion-invariant PPG signals from video data. For this purpose, information on detected skin regions in each image will be generated using new CNN-based segmentation methods and used for motion compensation. The data obtained by this network will be further processed with a recurrent neural network (Long Short-Term Memory, LSTM) optimized for temporal signal processing in order to determine the pulse peaks in the PPG signal precisely in time.
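The role of temporally precise peak localization can be illustrated with a toy example: once the pulse peaks are localized in time, HRV metrics such as RMSSD follow directly from the inter-beat intervals. The naive peak picker and the 0.3 s minimum peak distance below are illustrative assumptions, not the LSTM-based method developed in the project.

```python
import math

def find_peaks(signal, fs, min_dist_s=0.3):
    """Naive local-maximum peak picker with a minimum peak distance."""
    min_dist = int(min_dist_s * fs)
    peaks, last = [], -min_dist
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1] \
                and i - last >= min_dist:
            peaks.append(i)
            last = i
    return peaks

def rmssd_ms(peaks, fs):
    """RMSSD of successive inter-beat-interval differences, in ms."""
    ibi = [(b - a) / fs * 1000 for a, b in zip(peaks, peaks[1:])]
    diffs = [y - x for x, y in zip(ibi, ibi[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A clean synthetic 1 Hz pulse sampled at 40 Hz (peaks every 40 samples):
fs = 40
sig = [math.sin(2 * math.pi * t / fs) for t in range(10 * fs)]
peaks = find_peaks(sig, fs)  # samples 10, 50, 90, ...
```

On real, artifact-laden video PPG such a simple picker fails, which is precisely why the project pursues artifact compensation and learned temporal models.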
Completed projects
Person identification in a real human-robot interaction environment
Duration: 02.11.2020 to 31.03.2025
The scientific objectives of the project include the research and testing of real-time capable deep learning algorithms for
- person recognition and identification in dense spaces and
- recognition of willingness to interact based on body and head poses as well as facial expressions.
A further scientific goal is to design the algorithms in such a way that joint optimization of the respective sub-goals can be achieved by means of end-to-end learning.
The impact of using AI-powered technology for lie detection in negotiations
Duration: 01.10.2021 to 29.02.2024
The increasing digitization of social and economic interactions is proceeding at considerable speed. Research on digitization processes should reconcile two areas of knowledge that are usually examined separately: first, the question of technical development and, second, the question of the effects of this development on human behavior. The proposed project attempts to combine both perspectives in an interdisciplinary approach, with the focus on behavioral analysis while the technical components remain strongly represented. The use case chosen for this type of analysis of digitization processes is the phenomenon of asymmetric information. Specifically, we investigate to what extent the paradigm of asymmetric information has become at least partially obsolete through the use of AI technologies. In our interdisciplinary project, instead of waiting for technological developments in the field of AI-based lie detection, we would like to contribute to technological progress ourselves, while experimentally investigating the possible social consequences of this technology. The project proposal combines two research areas: Economics (WW) and Neuro-Information Technology (NIT). In both fields, the identification of private information plays a major role, but is approached from different angles. While economic analysis focuses on the role and importance of private information in negotiation situations, NIT focuses on the feasibility and quality of automated recognition of personal characteristics.
Autonomous navigation and human-machine interaction of a mobile robot in outdoor applications
Duration: 15.09.2019 to 31.05.2022
The overall goal of this project is to explore methods that enable a mobile robotic system to autonomously navigate in outdoor environments, identify potential and specific interaction partners, recognize their willingness to interact, interact with them and track the interaction partners to maintain cooperation in dense spaces using motion analysis.
The scientific and technical challenge is to capture the environment of the mobile robot in such a way that precise self-localization and, based on this, efficient navigation in an outdoor environment to find cooperating persons can take place. Prior information in the robot's environment, such as artificial markers, should be avoided as far as possible. The robot should be able to detect and find its way around an initially unknown environment solely on the basis of its own optical system.
Another challenge is the tracking of interaction partners in dense spaces. This refers to environments with several potential interaction partners, dynamic scene objects and the associated occlusion situations. If two objects fall below a certain spatial distance, they can no longer be clearly separated from each other, which makes it very difficult to keep track of the persons being followed.
A particular challenge of unknown, dense spaces is that the potential interaction partners are not known a priori, but must first be identified. This includes both the pure recognition of people and the assessment of their willingness to interact.
In order to overcome these challenges, various technical and scientific sub-problems need to be solved, with the focus being on researching methods for environment detection, navigation and interaction using artificial intelligence (AI) from a scientific perspective and the design of the robot system from a technical perspective.
Multimodal detection of pressure and heat pain intensity
Duration: 30.11.2017 to 31.03.2022
The focus of this project is to improve pain diagnostics and the monitoring of pain conditions. The use of multimodal sensor technologies and highly effective data classification can enable reliable and valid automated pain recognition. To achieve this goal, a promising strategy for objective pain recognition is being developed by combining new innovative methods of data analysis, pattern recognition and machine learning on data from an experimental protocol. In order to extract and select features, the experimental data is pre-processed serially using complex filters and decomposition methods. The features obtained in this way are the prerequisite for robust automated detection of pain intensity in real time.
Intentional, anticipatory, interactive systems (IAIS)
Duration: 01.01.2018 to 31.12.2021
Intentional, anticipatory, interactive systems (IAIS) represent a new class of user-centered assistance systems and are a nucleus for the development of information technology with corresponding SMEs in Saxony-Anhalt. IAIS uses action and system intentions derived from signal data, as well as the affective state of the user. By anticipating the further actions of the user, solutions are interactively negotiated. The active roles of humans and systems change strategically, which requires neurological and behavioral models. The human-machine systems are being deployed in our systems lab, based on previous work in the SFB-TRR 62. The goal of the lab tests is the understanding of situated interaction. This supports the regional economy in its integration of assistance systems for Industry 4.0 in the context of demographic change.
Human Behavior Analysis (HuBA)
Duration: 01.10.2017 to 30.06.2021
The project establishes a junior research group to investigate new and improved methods of information processing for the automated understanding of human behavior. Human behavior includes all externally perceptible activities such as body postures, gestures and facial expressions that are shown consciously or unconsciously. Behavior is also to be used to draw conclusions about the person's underlying inner states.
Human-Machine-Interaction Labs - Robot Lab
Duration: 01.09.2019 to 30.09.2020
The aim of the "Robo-Labs" project is the sustainable continuation of the results obtained on human-machine interaction in the NIT Group. The Robo-Lab contributes to this goal as follows:
- The research and implementation of methods for human-machine interaction using artificial intelligence (AI) requires large computing capacities and large amounts of data. With the help of a deep learning computer, sufficient computing capacity is to be created in order to remain internationally competitive in the future.
- In order to meet the increasing demand for data, a laboratory environment is to be created that allows multimodal data acquisition in human-robot collaboration (HRC). To this end, the existing sensor technology in the laboratory is to be expanded and an environment for data acquisition for natural human-robot interaction is to be created.
- A mobile robot and a stationary robot will be able to simulate different technical manufacturing processes and assistive systems, thus enabling HRC situations that will be implemented in demonstrators in ongoing 3Dsensation projects and beyond.
- The Robo-Lab further expands the competence profile of the NIT working group in the direction of human-robot interaction and, thanks to the additional sensory equipment, creates a unique, internationally competitive laboratory environment for research and teaching.
- Current and future projects can be supported with the Robo-Lab, as it provides a unique environment for the development of demonstrators as well as for data acquisition and processing. The Robo-Lab enables research at the highest level and allows further research efforts.
Ergonomics Assistance Systems for Contactless Human-Machine-Operation
Duration: 01.01.2017 to 31.03.2020
The aim of the project is to research and demonstrate new technologies, design methods and operating concepts integrated into the work context for human-machine interaction (HMI) and human-machine cooperation (HMC). These are intended to realize input/control by humans, output of information by the machine and collision avoidance for commercial products and in the industrial production environment. This should also enable SMEs in the social and economic fields of health and production to develop and market interaction concepts and information-oriented visualization solutions that allow safe, ergonomic and application-oriented work by humans and machines in a common value chain. These concepts will be incorporated into the next generations of device developments and production systems of the industrial partners. The focus here is on a high level of integration of the robotic systems through rapid situation detection and processing, including multi-sensor data for multi-user scenarios.
Mimic and gestural expression analysis to measure anxiety
Duration: 01.11.2017 to 29.02.2020
Industrial robots are virtually omnipresent in today's production facilities, but for safety reasons they usually work separately from humans. One obstacle to close cooperation, in which both could play to their strengths (humans: perception, judgment, improvisation; robots: reproducibility, productivity, power), is the human fear of robots: due to the potential risk of injury in the event of a collision, or ignorance of the technical interrelationships, humans inwardly resist collaboration, act in an unfocused manner and tend to make jerky reflex movements. This impairs product quality and increases the likelihood of dangerous accidents. The aim of this project is therefore to reliably recognize people in the production environment and to develop methods for objective, individual and situational fear assessment based on sensed gestures and facial expressions. Appropriate interaction measures can then be used to react to recognized fears in a situation-specific manner, creating trust between humans and machines and thus forming the basis for economically attractive human-robot collaboration.
Hyperspectral vital parameter estimation for automatic non-contact stress detection
Duration: 01.01.2017 to 30.09.2019
The project is part of the joint project "HyperStress" of the Graduate College of the Alliance "3d-Sensation". Stress is considered the greatest strain factor in the workplace and has been the subject of great research interest for years. However, there are no procedures for recording the stress-indicative vital parameters without obstruction (hazard assessment) or interference (limitations due to the work activity). The aim of the project is therefore to develop a demonstrator for contactless stress detection: a robust, accurate system with an appealing, user-friendly visualization of the data.
Contact-free camera-based measurement of vital parameters with improved interference immunity
Duration: 01.07.2017 to 30.06.2019
The recording of important vital parameters in humans, such as heart rate, respiration, heart rate variability and oxygen saturation of the blood, is of great importance for the diagnosis and monitoring of health status. The project aims to obtain new data in order to significantly improve the accuracy of previously developed methods for estimating vital parameters. The skin recognition used is to be generalized so that it delivers more robust results in real time. The new additional information (e.g. 3D data, infrared images) will also be used to optimize the feature extraction, selection and reduction processes.
Optical measurement method with spatially distributed light projections for high-resolution and fast 3D surface reconstruction
Duration: 01.12.2017 to 30.11.2018
The aim of the project is to develop a new active 3D measurement method that does not require a digital projector based on central projection. High light intensity should ensure short integration times for the overall measurement. In particular, the aim is to achieve scalability of the illuminance in principle, so that even larger measuring surfaces, which are frequently encountered in industrial production, can be measured in a time-efficient manner. A multi-camera system should also achieve a considerable reduction in shadowing when measuring complex parts in order to avoid measurements from different positions.
Optimization of the reliability and specificity of automated multimodal detection of pressure and heat pain intensity
Duration: 01.09.2015 to 31.05.2018
Currently used methods for clinical pain measurement have limited reliability and validity, are time-consuming and can only be used to a limited extent in patients with limited verbal skills. If valid pain measurement is not possible, this can lead to stress-related cardiologic risk, overuse or underuse of analgesics and suboptimal treatment of acute and chronic pain.
The focus of this project is therefore to improve pain diagnostics and the monitoring of pain conditions. The use of multimodal sensor technologies and highly effective data classification can enable reliable and valid automated pain detection. To achieve this goal, a promising strategy for objective pain recognition is being developed by combining new innovative methods of data analysis, pattern recognition and machine learning on data from an experimental protocol. Biomedical, visual and audio data will be measured under experimental and controlled pain applications in healthy subjects. In order to extract and select features, the experimental data is pre-processed serially using complex filters and decomposition methods. The features obtained in this way are the prerequisite for robust automated recognition of pain intensity in real time.
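The chain of pre-processing, feature extraction and classification described above can be sketched in miniature. The moving-average filter, the feature set and the threshold rule below are hypothetical placeholders for illustration only, not the project's actual filters, features or classifiers.

```python
import math

def moving_average(x, k=5):
    """Simple low-pass pre-processing step (stand-in for complex filters)."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def extract_features(x):
    """Mean level and variability of the pre-processed signal."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return {"mean": mu, "std": math.sqrt(var)}

def classify_intensity(feats, thresholds=(0.2, 0.6)):
    """Toy three-level rule: no / low / high pain by signal variability."""
    lo, hi = thresholds
    if feats["std"] < lo:
        return "no pain"
    return "low pain" if feats["std"] < hi else "high pain"
```

In the project, the threshold rule would be replaced by a trained classifier operating on fused biomedical, visual and audio features.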
Active line scan camera systems for fast and high-resolution 3D measurement of large surfaces
Duration: 01.12.2015 to 01.03.2018
As part of the BMBF funding program Twenty20-Partnership for Innovation, a joint project is being carried out with partners from industry and science. The aim of the BMBF project is to develop the technological basis for sensors for high-resolution and highly dynamic 3D detection of objects and surfaces. The Otto von Guericke University sub-project focuses on large surfaces of workpieces from industrial production. The basic idea is to overcome the technological limitations of matrix camera systems, particularly when measuring moving surfaces on conveyor belts or continuous material, by developing line scan camera systems with suitable structured lighting.
Augmented reality system to support material testing and quality control on industrial plants - data fusion of spatially recorded measured values in the AR application
Duration: 01.02.2016 to 28.02.2018
The cooperation project serves the needs of manual inspection techniques for material inspection and quality assurance on industrial plants. A key objective is to support a human inspector during the inspection using an augmented reality system. The term augmented reality (AR) refers to the computer-aided enhancement of human visual perception of reality by displaying additional virtual information in the inspector's field of vision, e.g. via data goggles. In the context of the application, this additional information consists of measurement results from previous inspections as well as virtual models of the real inspection objects from a database to be developed in-house. In addition, current measurement results with spatial reference to the surface of the test object are to be displayed. The reference source is an optical measuring system, which is linked to the respective inspection device and makes the data available to the AR system for display in real time.
Mechanisms of non-verbal communication: Facial emotion recognition and analysis of head and body gestures
Duration: 01.01.2013 to 31.12.2017
User-adaptive behavior is a fundamental characteristic of companion technologies. This requires sensory capabilities that enable the system to draw conclusions about the user's state (disposition) and other situation-related communication-relevant parameters from non-verbal signals. By visually analyzing facial expressions as well as head and body posture/gestures, sub-project C3 makes a fundamental contribution to deriving the richest possible system-side representation of the user's disposition. The temporal analysis of head and body gestures also makes it possible to recognize or predict a user's actions and intentions. The modeling of cognitive architectures based on biological principles helps to develop universal approaches to information processing and learning-based adaptability.
Innovative concept for image-based head pose estimation and driver condition detection
Duration: 01.03.2014 to 01.09.2017
This project involves the development of robust approaches for image-based driver analysis with the aim of increasing safety and driving comfort. This includes the recognition and simulation of relevant parameters such as head pose, gaze direction, blinking and, at a later stage, facial expressions. In particular, active and multi-camera technologies will be used to develop highly robust methods that meet the requirements of use under real conditions. Computer-graphics-based simulation under predefined parameters should also enable the validation of existing technologies.
Contact-free camera-based measurement of vital parameters with improved interference immunity
Duration: 01.07.2015 to 01.07.2017
Heart rate, respiration and heart rate variability are important vital parameters in humans. Devices currently on the market for measuring these parameters only use contact-based measurement methods. These are associated with a number of disadvantages. The aim of this research project is to develop a 3D image-based, non-contact measurement method that offers the user maximum freedom of movement and maximum comfort, is robust and fast, and is easy to use.
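The core idea of camera-based vital-sign measurement, recovering the pulse rate from the periodic brightness variation of skin pixels, can be illustrated with a toy estimator. The zero-crossing counter and the clean synthetic trace below are simplifying assumptions; the project's actual challenge is robustness under motion and illumination changes:

```python
import math

def heart_rate_bpm(trace, fps):
    """Estimate the dominant rate of a periodic brightness trace by counting
    rising zero-crossings of the mean-removed signal."""
    mean = sum(trace) / len(trace)
    s = [x - mean for x in trace]
    rising = sum(1 for a, b in zip(s, s[1:]) if a < 0 <= b)
    duration_s = len(trace) / fps
    return 60.0 * rising / duration_s

# Synthetic 60 s skin-brightness trace at 30 fps with a 1.2 Hz pulse (72 bpm).
fps = 30
trace = [math.sin(2 * math.pi * 1.2 * i / fps) for i in range(60 * fps)]
print(round(heart_rate_bpm(trace, fps)))  # close to 72 bpm
```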
Environment detection
Duration: 01.01.2013 to 30.12.2016
The goals of subproject C1 are environment recognition and modeling as well as the intention-based interpretation of gestures of potential users of a companion system. For environment modeling, new methods for multi-object tracking, information fusion and temporal filtering are researched and further developed, based on the Random Finite Sets Theory and the Joint Integrated Probabilistic Data Association Filter, which allow a simultaneous estimation of object existence and object state. The recognition of user gestures is image-based and forms the basis for an intention-based interpretation of the gesture and action sequences using intention reference models. These provide the direct link between all intention hypotheses based on an application context and the fused feature vector from gesture sequences. The hypothesis with the maximum evaluation measure should correspond to the user intention.
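The simultaneous estimation of object existence and object state can be illustrated by the existence part alone. The following is a textbook-style Bernoulli-filter existence update in the spirit of (J)IPDA; the survival and detection probabilities and the likelihood ratio are illustrative assumptions:

```python
def predict_existence(r, p_survive=0.99):
    """Existence prior after one time step: the object may have disappeared."""
    return p_survive * r

def update_existence(r, detected, p_detect=0.9, likelihood_ratio=5.0):
    """Bayes update of the existence probability r.
    likelihood_ratio: measurement likelihood relative to the clutter intensity."""
    if detected:
        num = r * p_detect * likelihood_ratio
        return num / (1.0 - r * p_detect + num)
    return r * (1.0 - p_detect) / (1.0 - r * p_detect)

r = 0.5
for _ in range(5):               # five consecutive detections
    r = update_existence(predict_existence(r), detected=True)
print(round(r, 3))               # existence becomes near-certain

for _ in range(5):               # five consecutive misses
    r = update_existence(predict_existence(r), detected=False)
print(round(r, 3))               # existence decays toward zero
```

In a full JIPDA this update is coupled with the state estimate and with the data association over all measurements and tracks; the sketch shows only the existence recursion for a single track.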
Crowd Behavior Analysis in Video Sequences
Duration: 01.10.2013 to 01.10.2016
The analysis of human activities in crowded scenes is among the most challenging tasks in computer vision. Tracking and understanding individuals' actions in dense scenes is a problem that has not yet been fully solved, owing to occlusions between objects.
Crowd behavior analysis and modeling is a comparatively new area of interest in computer vision. Broadly speaking, there are two levels of crowd analysis: 1) the individual level and 2) the global level. At the individual level, the goal is to extract and understand the behavior of each moving object in the crowd. At the global level, the goal is to model the behavior of the group as a whole. In both cases, behavior understanding and anomaly detection can be performed by analyzing motion features and characterizing so-called "normal behavior". Detecting an "anomaly" or "abnormal behavior" then means locating activities that do not conform to "normal behavior" or fall into its respective labeled class.
In this project we focus on modeling crowd flow without tracking individuals. Once crowd behavior has been modeled, we will be able to detect high-level abnormalities such as traffic jams, a crowd running amok, etc.
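The "characterize normal, flag deviations" scheme can be sketched with a minimal detector. Using the per-frame mean optical-flow magnitude as the motion feature and a Gaussian 3-sigma rule as the anomaly test are illustrative assumptions, far simpler than the project's models:

```python
import statistics

def fit_normal_model(motion_means):
    """Model 'normal behavior' as the distribution of per-frame mean motion."""
    return statistics.mean(motion_means), statistics.stdev(motion_means)

def is_anomalous(motion_mean, model, k=3.0):
    """Flag a frame whose mean motion deviates more than k sigma from normal."""
    mu, sigma = model
    return abs(motion_mean - mu) > k * sigma

# Synthetic per-frame mean optical-flow magnitudes of a normally flowing crowd.
normal_frames = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.08, 0.92]
model = fit_normal_model(normal_frames)
print(is_anomalous(0.1, model))   # near-standstill, e.g. a jam -> True
print(is_anomalous(5.0, model))   # sudden rush or panic -> True
print(is_anomalous(1.02, model))  # ordinary flow -> False
```

Note that both too little motion (a jam) and too much motion (panic) fall outside the learned "normal" band, which is exactly why a one-sided threshold would not suffice.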
High-resolution surface shape inspection of large industrial surfaces
Duration: 01.11.2014 to 30.04.2016
High-quality surfaces are a challenge, especially for high-priced goods such as body shell parts for the automotive industry. Surface shape inspection systems are capable of detecting the smallest deformations, but they are limited to small measuring ranges. Methods for stitching together several partial areas are known from geometry measurement; however, the large areas combined in this way do not meet the accuracy and resolution requirements of surface shape inspection. The aim of the project is therefore to develop a measurement system that enables surface shape inspection of large industrial surfaces.
Non-intrusive intention-adaptive interactions in an HCI environment
Duration: 01.10.2012 to 30.03.2016
The focus of the PhD project is the development of a non-intrusive, image-based system for the intention-based interpretation of user actions from multiple modalities (e.g. audio, facial expression and action analysis); the underlying approach should be as generally applicable as possible, independent of the specific application.
Since the interpretation of longer user actions becomes increasingly complex due to user errors, unusual articulation or unusual framework conditions, the research focus of this PhD project lies, on the one hand, in the non-intrusive recording and interpretation of actions and, on the other, in an appropriate representation of the discourse context and the implementation of an evaluation strategy for the user's current emotional and intentional state in multi-person scenarios.
3D gesture interaction and fusion of 3D images (GestFus)
Duration: 01.11.2014 to 01.03.2016
The 3Dsensation alliance brings together partners from various scientific and economic fields with different areas of expertise. The aim of this basic project is to develop the foundations for two subject areas that have proven to be particularly relevant and have high synergy potential in the 3Dsensation strategy project:
a) 3D gesture interaction and
b) the fusion of 3D images from different sources (incl. augmented reality).
The Institute for Information and Communication Technology at Otto von Guericke University Magdeburg is contributing the scientific foundations and research results in the field of 3D gesture interaction. These results are not only to be aligned with the focus topics of 3Dsensation, but also prepared in such a way that they can be understood and applied by partners in later R&D projects who have so far dealt with these topics only to a limited extent, if at all.
Another aim of the project is to develop and define a set of basic gestures. This is a collection of gestures, e.g. for taking, giving, navigating in rooms, confirming, emergency stop etc., which can be used as universally as possible in various fields of application.
An important aspect is also seen in hand and body movements, which, in the sense of anticipation, allow the recognition of potentially dangerous situations in safe human-machine cooperation.
A demonstrator is to be developed for the most important gestures in order to validate and illustrate gesture interaction as a proof of concept; a similar procedure is planned for facial-expression interaction.
3D sensor principles
Duration: 01.11.2014 to 01.03.2016
As part of the BMBF's Twenty20 Partnership for Innovation funding program, a study on the cross-industry applicability of 3D sensor principles is being carried out for the "3DSensation" alliance. The overall objective of this basic project is to identify cross-technology and cross-sector R&D needs in the field of 3D sensor technology in order to improve full 3D sensing and 3D interaction by machines. This requires a rigorous scientific analysis and evaluation of a wide variety of existing 3D sensor technologies, involving as many partners as possible with a wide range of expertise. Building on this basis, the potential of possible 3D sensor innovations must be analyzed.
In the Otto von Guericke University sub-project, methods for precise and fast 3D surface measurement are to be investigated in particular. The focus is on photogrammetric measurement methods that work with active and passive lighting.
Mobile Object Tracking Using Metaheuristics
Duration: 01.10.2013 to 01.01.2016
This research project funds a collaboration between Mohammed V University, Morocco, and Otto von Guericke University, Germany, within the framework of a sandwich program; the topic of the financed research is "Mobile object tracking using metaheuristics". Having successfully combined the cuckoo search optimization algorithm, a recent and widely discussed metaheuristic, with the Kalman filter for tracking a single object in a video sequence, we are now aiming to integrate the modified cuckoo search algorithm into the tracking of multiple moving objects. This should lead to a robust tracker with accurate real-time performance. The main aims of this research cooperation are, on the one hand, to build a connection between the Moroccan and the German university and, on the other, to improve the quality of Moroccan research and connect the Moroccan research community with international industry.
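For readers unfamiliar with the metaheuristic, here is a minimal 1-D cuckoo-search minimizer. The crude heavy-tailed step (standing in for a proper Lévy flight), the parameter values and the toy objective are illustrative assumptions; the project embeds the search in a Kalman-filter tracking loop, which is not shown here:

```python
import random

def cuckoo_search(objective, bounds, n_nests=15, n_iter=300, pa=0.25, seed=1):
    """Minimal cuckoo-search minimizer in one dimension."""
    random.seed(seed)
    lo, hi = bounds
    clamp = lambda x: max(lo, min(hi, x))
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=objective)
    for _ in range(n_iter):
        # New egg: heavy-tailed random step around the current best nest
        # (a crude stand-in for a Levy flight).
        step = random.gauss(0.0, 1.0) / (abs(random.gauss(0.0, 1.0)) ** 0.5 + 1e-9)
        cand = clamp(best + 0.05 * (hi - lo) * step)
        # A randomly chosen host nest is replaced if the new egg is better.
        j = random.randrange(n_nests)
        if objective(cand) < objective(nests[j]):
            nests[j] = cand
        # Abandon a fraction pa of the worst nests (re-seed them randomly).
        nests.sort(key=objective)
        for k in range(int(pa * n_nests)):
            nests[-(k + 1)] = random.uniform(lo, hi)
        best = min(nests, key=objective)
    return best

best = cuckoo_search(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
print(round(best, 2))  # converges near 3.0
```

In a tracking context, the objective would score candidate object states against the current frame (e.g. an appearance-similarity measure), with the Kalman prediction seeding the search region.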
Tracking-based 3D reconstruction of laminar cracks and crack bridges by evaluating ultrasonic signals
Duration: 01.07.2014 to 01.01.2016
Hydrogen-induced cracking in tanks for media storage represents a significant risk factor for the operation of refineries and chemical plants. The inspection methods currently available for regular inspections are essentially based on individual ultrasonic measurements and a subjective estimate of the surface area of detected cracks. In practice, this means that the tanks are often replaced earlier than is absolutely necessary. The aim of the project is to develop a new technology that enables a three-dimensional reconstruction of hydrogen-induced crack formation. By emitting broadband longitudinal and transverse ultrasonic waves, the exact geometric position of a crack in the material can be determined by triangulation. Using a corresponding software solution, the new technology should also be able to display detected cracks in the material in three dimensions, which even allows statements to be made about crack growth through comparisons with previous measurements and thus leads to a more objective and accurate assessment of the risk factor in refinery operations.
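The triangulation step described above amounts to intersecting two circles around the probe positions, with radii given by the round-trip times. A sketch of that geometry, with an assumed longitudinal wave speed for steel:

```python
import math

SOUND_SPEED_STEEL = 5900.0  # longitudinal wave speed in steel, m/s (assumed)

def locate_reflector(x1, t1, x2, t2, c=SOUND_SPEED_STEEL):
    """Triangulate a reflector from two pulse-echo measurements taken at
    surface positions x1 and x2 (metres) with round-trip times t1, t2 (s).
    Intersects the two circles of radii r_i = c * t_i / 2 around the probes."""
    r1, r2 = c * t1 / 2.0, c * t2 / 2.0
    x = (r1**2 - r2**2 + x2**2 - x1**2) / (2.0 * (x2 - x1))
    depth = math.sqrt(r1**2 - (x - x1) ** 2)
    return x, depth

# Simulated round trips to a crack at lateral position 0.05 m, depth 0.02 m:
crack = (0.05, 0.02)
def round_trip(px):
    return 2.0 * math.hypot(crack[0] - px, crack[1]) / SOUND_SPEED_STEEL

x, depth = locate_reflector(0.0, round_trip(0.0), 0.10, round_trip(0.10))
print(round(x, 3), round(depth, 3))  # recovers (0.05, 0.02)
```

Repeating this for many probe positions yields the point cloud from which the 3D crack reconstruction and growth comparisons can be built.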
Automatic recognition of Arabic handwriting
Duration: 01.01.2011 to 31.12.2015
In this thesis, methods for the automatic, segmentation-based recognition of Arabic handwriting are investigated and further developed. Since no reliable segmentation algorithm for Arabic handwriting has yet been established, different segmentation variants are processed one after the other in order to subsequently select the most plausible variant. In addition, for each segmentation variant, the recognized word is compared with a lexicon, which also allows conclusions to be drawn about the correctness of the segmentation and allows some recognition errors to be corrected. Possible procedures for explicit segmentation, feature extraction and classification are compared and implemented. The suitability of common classifiers is also examined, and neural networks are implemented to determine the weights of the individual features; these weights can also be trained using genetic algorithms.
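The lexicon-comparison step, scoring each segmentation variant's recognized string against dictionary words and keeping the closest match, can be sketched with a standard edit-distance scorer. The Latin transliterations below are placeholders standing in for Arabic script:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_reading(variant_strings, lexicon):
    """Score every segmentation variant against the lexicon and return the
    (variant, word) pair with the smallest edit distance."""
    return min(((v, w) for v in variant_strings for w in lexicon),
               key=lambda vw: edit_distance(*vw))

# Toy example: readings produced by three competing segmentations.
variants = ["ktab", "kitab", "kltab"]
lexicon = ["kitab", "kalb", "qalam"]
print(best_reading(variants, lexicon))  # -> ('kitab', 'kitab')
```

A small residual distance to the chosen lexicon word indicates which characters were mis-recognized, which is how the comparison both validates the segmentation and corrects some recognition errors.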
3Dsensation Strategy phase
Duration: 01.01.2014 to 30.06.2015
As part of the strategy phase of the BMBF funding program "Twenty20 - Partnership for Innovation", work is being carried out on "Research - 3D Information Processing" and "Education". This includes developing the strategy and analyzing the technologies required for the precise three-dimensional representation of static and dynamic objects, so that such technologies can be developed in accordance with the requirements of the areas of need. The result is a roadmap for 3D information processing, with a focus on facial and body pose as well as adaptive modeling of the human body.
The other focus includes the design of a transdisciplinary Master's degree course and a graduate research college, for example, as well as the further development of training programs in the field of vocational education. The Master's course will focus on topics relating to human-machine interaction, e.g. "Ambient Intelligence" or "Interactive Assistance Systems".
Image-based emotion recognition and quantification based on data fusion
Duration: 01.05.2012 to 01.05.2015
Analogous to human-human communication, human-machine interaction is viewed as the interaction of two agents that cooperatively solve a problem, recognize the wishes and goals of their counterpart, adapt to them and are aware of the discourse context and its rules. The attempt to explicitly capture and model these aspects of interaction is the task of an adaptive user interface. The interface is dynamically adapted using knowledge about the current status, goal and emotional state of the individual user. To this end, the typical processing chain ranges from feature identification and extraction to emotion classification and quantification. The combination of image data with speech data for segmentation detection, in order to recognize facial expressions in a multi-person scenario, is a promising new approach that not only enables robust classification of different types of static and dynamic facial expressions, but also real-time adaptation of the user interface to current user actions.
Radar tracking and classification to improve road safety
Duration: 01.08.2011 to 01.01.2015
The objective of this project is to develop an innovative safety system to improve the protection of so-called vulnerable road users (pedestrians, cyclists). This is to be achieved primarily through the use of a newly designed 24 GHz radar sensor, which offers new standards in terms of situation analysis and at the same time covers the existing driver assistance functions. The system will be integrated on two test vehicles for research and testing purposes. These also have actuators for the automatic control of vehicle dynamics in order to carry out appropriate maneuvers to avoid accidents (e.g. automatic braking and swerving). Before the first tests can take place, however, a targeted accident analysis and the development of adequate algorithms for environment and pedestrian detection will be carried out. The expansion of the system with other sensors (camera, LIDAR) is also being considered in order to improve and verify the results using data fusion methods.
Companion technology in automotive application scenarios for worker assistance using mobile augmented reality
Duration: 01.05.2012 to 31.12.2014
In this interdisciplinary joint project between information technology (IESK), general psychology (Uulm) and users (VW and IFF), the aim is to develop and test methods for the most natural possible interaction with the aid of non-intrusive hand gestures and the associated interaction recognition. Gesture recognition takes the body and environmental context into account; gestures are classified by fusing static and dynamic gestures, and gesture sequences are recognized using image-based methods. For this purpose, a robust prototypical system is to be developed, modified and validated on the basis of the results obtained in sub-project C1 of the SFB/TR 62 in the context of the planned application domains. The selection of gestures, devices and other implementation decisions is based, among other things, on psychological findings and will be validated by means of experimental studies; testing will take place via user studies. The application scenario will initially be set up as a prototype demonstrator at the Fraunhofer IFF so that the necessary detailed implementation decisions can be made on the basis of a quasi-realistic working environment.
Further development and systematic validation of a system for automated pain detection based on mimic and psychobiological parameters
Duration: 01.07.2011 to 30.11.2014
The objective assessment of subjectively experienced, multidimensional pain is a problem that has not yet been adequately solved. Verbal methods (pain scales, questionnaires) and visual analog scales are commonly used in clinical pain measurement, but these are not very reliable or valid for mentally impaired people. Facial expressions of pain and/or psychobiological parameters can offer a solution. Such coding systems exist, but they involve a great deal of effort or have not been sufficiently evaluated in terms of test theory. Based on previous experience, a system for automatic pain recognition from visual and biomedical data is to be further developed, its test-theoretical quality determined and its performance optimized. For this purpose, test subjects will be exposed to painful stimuli under controlled conditions, and facial and psychobiological parameters will be used for measurement. Various methods of image processing and pattern recognition for facial analysis will be used and further developed to obtain the facial expression parameters. Based on the static and dynamic facial features from temporal image sequences and on psychobiological data, pain-relevant features are to be identified and an automatic system developed with which pain can be measured qualitatively and quantitatively.
Automated tank roof inspection
Duration: 01.10.2012 to 01.04.2014
The main objective of the planned project is to develop a new autonomous measuring system for inspecting tank roofs at refineries and chemical plants, with the aim of 100% inspection coverage for corrosion-induced material loss.
It is essential to develop a technology that enables the use of a special robot for a comprehensive tank roof inspection with corrosion removal measurement. The robot should be able to move autonomously on the tank roof and thus replace a human inspector to reduce the existing risk potential. The robot is equipped with various sensors for this purpose. An ultrasonic measuring system will measure the wall thickness of the roof at the current position. An optical system and landmarks placed on the roof will enable the robot to determine its own position, generating a virtual map of the wall thicknesses as it moves along. Additional sensors can be used for collision detection, as in modern cars.
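The "virtual map of wall thicknesses" the robot builds can be sketched as a simple grid aggregation of localized ultrasonic readings. The cell size, the nominal thickness and the loss threshold below are illustrative assumptions:

```python
def build_thickness_map(readings, cell_m=0.5):
    """Aggregate (x, y, thickness_mm) readings into a grid of per-cell
    average thicknesses, indexed by integer cell coordinates."""
    cells = {}
    for x, y, t in readings:
        key = (int(x // cell_m), int(y // cell_m))
        cells.setdefault(key, []).append(t)
    return {k: sum(v) / len(v) for k, v in cells.items()}

def corroded_cells(thickness_map, nominal_mm, max_loss=0.2):
    """Cells whose average thickness shows more than max_loss relative loss."""
    return sorted(k for k, t in thickness_map.items()
                  if (nominal_mm - t) / nominal_mm > max_loss)

# Toy pass over a 6 mm nominal roof: one spot has lost about half its wall.
readings = [(0.1, 0.1, 6.0), (0.3, 0.2, 5.9),   # cell (0, 0): healthy
            (1.2, 0.1, 3.1), (1.3, 0.3, 2.9),   # cell (2, 0): corroded
            (0.2, 1.4, 5.8)]                    # cell (0, 2): healthy
tmap = build_thickness_map(readings)
print(corroded_cells(tmap, nominal_mm=6.0))  # -> [(2, 0)]
```

On the robot, the (x, y) positions would come from the optical landmark-based localization, and the thickness values from the ultrasonic probe at each pose.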
Environment Perception
Duration: 01.01.2009 to 31.12.2012
The subproject's goals are environment perception, dynamic environment modeling and the basic classification of gestures of potential users of the companion system. For environment sensing, methods for multi-sensor fusion, information fusion and temporal filtering based on Finite Sets Theory are researched and further developed, allowing simultaneous estimation of object existence and object state. The non-intrusive recognition of user gestures is image-based, using Hidden Markov Models.
Intention-based interpretation of gesture sequences
Duration: 01.07.2009 to 31.12.2011
The automatic recognition of user gestures is of growing importance in human-computer interaction for realizing interaction tasks. Together with an adaptive plan and the user's current actions, it becomes possible to determine the user's intentions regarding further operating steps and interaction.
In this research project, the user's head region, face region, hands and arms are captured stereo-photogrammetrically in order to recognize movements, gestures and head poses using image-based methods. The advantage is that the user does not have to handle cumbersome input devices, but instead interacts intuitively with the machine through body movement. Two-handed gestures in particular are to be investigated, and the associated gesture-based interaction techniques extended. To generate these interaction techniques, simple dynamic and static gestures for the interaction tasks can be used in conjunction with an adaptive plan. Scenarios from the field of sign language or city tours, among others, can serve as test domains.
Pilot study on the development of a system for automated pain detection in the postoperative phase
Duration: 01.01.2008 to 31.12.2008
The task of the project is to investigate the practicability of camera-based automatic analysis of pain-related changes in the face. Pain is an event that regularly occurs in the postoperative course; it is often detected insufficiently or too late by staff-intensive examination methods, and it has a pronounced facial reflection. Image processing offers powerful algorithms for face detection, feature extraction and facial expression analysis. In a first step, these algorithms are to be investigated on test subjects and further developed with regard to the detection and quantification of pain. Image acquisition and pattern recognition will initially be performed on color spatio-temporal images of the human face under pain. The subjects are exposed to a defined pain stimulus, and the pain intensity is measured using clinical methods. Suitable facial features for assessing individual pain intensity are defined with the involvement of physicians, together with biomedical data recorded in parallel during system training. Based on this feature selection, classification is performed on the basis of the feature changes caused by muscle activity.
2025
Peer-reviewed journal article
Robot System Assistant (RoSA) - evaluation of touch and speech input modalities for on-site HRI and telerobotics
Strazdas, Dominykas; Busch, Matthias; Shaji, Rijin; Siegert, Ingo; Al-Hamadi, Ayoub
In: Frontiers in robotics and AI - Lausanne : [Verlag nicht ermittelbar], Bd. 12 (2025), Artikel 1561188, insges. 15 S.
Dissertation
Bildbasierte Situationsanalyse zur intuitiven Mensch-Roboter-Interaktion in dynamischen Umgebungen
Hempel, Thorsten; Al-Hamadi, Ayoub; Nürnberger, Andreas; von Enzberg, Sebastian
In: Magdeburg: Universitätsbibliothek, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2025, 1 Online-Ressource (xv, 159 Seiten, 43,71 MB) [Literaturverzeichnis: Seite 133-160]
Efficient and robust face recognition in the wild
Khalifa, Aly Ahmed Aly; Al-Hamadi, Ayoub; Wendemuth, Andreas
In: Magdeburg: Universitätsbibliothek, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2025, 1 Online-Ressource (X, 143 Seiten, 15,72 MB) [Literaturverzeichnis: Seite 123-143]
2024
Book chapter
An intelligent approach for continuous pain intensity prediction
Al-Radhi, Hassan; Fiedler, Marc-André; Al-Hamadi, Ayoub; Dinges, Laslo
In: IEEE International Symposium on Biomedical Imaging ISBI 2024 , 2024 - [Piscataway, NJ] : IEEE, insges. 5 S. [Symposium: IEEE International Symposium on Biomedical Imaging, ISBI, Athens, Greece, 27-30 May 2024]
NITEC - versatile hand-annotated eye contact dataset for ego-vision interaction
Hempel, Thorsten; Jung, Magnus; Abdelrahman, Ahmed A.; Al-Hamadi, Ayoub
In: 2024 IEEE Winter Conference on Applications of Computer Vision / IEEE/CVF Winter Conference on Applications of Computer Vision , 2024 - Piscataway, NJ : IEEE ; Souvenir, Richard, S. 4425-4434 [Konferenz: IEEE/CVF Winter Conference on Applications of Computer Vision, WACV, Waikoloa, HI, USA, 03-08 January 2024]
LSTM-based heart rate estimation from facial video images
Fiedler, Marc-André; Dinges, Laslo; Rapczyński, Michał; Al-Hamadi, Ayoub
In: IEEE International Symposium on Biomedical Imaging ISBI 2024 , 2024 - [Piscataway, NJ] : IEEE, insges. 5 S. [Symposium: IEEE International Symposium on Biomedical Imaging, ISBI, Athens, Greece, 27-30 May 2024]
Peer-reviewed journal article
Towards efficient and robust face recognition through attention-integrated multi-level CNN
Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub
In: Multimedia tools and applications - Dordrecht [u.a.] : Springer Science + Business Media B.V, Bd. 84 (2025), Heft 14, S. 12715-12737
Toward robust and unconstrained full range of rotation head pose estimation
Hempel, Thorsten; Abdelrahman, Ahmed A.; Al-Hamadi, Ayoub
In: IEEE transactions on image processing / Institute of Electrical and Electronics Engineers - New York, NY : IEEE, Bd. 33 (2024), S. 2377-2387, insges. 11 S.
Hybrid sparrow search-exponential distribution optimization with differential evolution for parameter prediction of solar photovoltaic models
Abd El-Mageed, Amr A.; Al-Hamadi, Ayoub; Bakheet, Samy; Abd El-Rahiem, Asmaa
In: Algorithms - Basel : MDPI, Bd. 17 (2024), Heft 1, Artikel 26, insges. 34 S.
Exploring facial cues - automated deception detection using artificial intelligence
Dinges, Laslo; Fiedler, Marc-André; Al-Hamadi, Ayoub; Hempel, Thorsten; Abdelrahman, Ahmed; Weimann, Joachim; Bershadskyy, Dmitri
In: Neural computing & applications - London : Springer, Bd. 36 (2024), Heft 24, S. 14857-14883
Fine-grained gaze estimation based on the combination of regression and classification losses
Abdelrahman, Ahmed A.; Hempel, Thorsten; Khalifa, Aly; Al-Hamadi, Ayoub
In: Applied intelligence - Dordrecht [u.a.] : Springer Science + Business Media B.V, Bd. 54 (2024), S. 10982-10994
A review of visual SLAM for robotics - evolution, properties, and future applications
Al-Tawil, Basheer; Hempel, Thorsten; Abdelrahman, Ahmed; Al-Hamadi, Ayoub
In: Frontiers in robotics and AI - Lausanne : [Verlag nicht ermittelbar], Bd. 11 (2024), insges. 18 S.
2023
Book chapter
Uncovering lies - deception detection in a rolling-dice experiment
Dinges, Laslo; Fiedler, Marc-André; Al-Hamadi, Ayoub; Abdelrahman, Ahmed A.; Weimann, Joachim; Bershadskyy, Dmitri
In: Image Analysis and Processing – ICIAP 2023 , 1st ed. 2023. - Cham : Springer Nature Switzerland ; Foresti, Gian Luca, S. 293-303 - ( Lecture notes in computer science; volume 14233)
Peer-reviewed journal article
Digital technology and mental health during the COVID-19 pandemic - a narrative review with a focus on depression, anxiety, stress, and trauma
Guest, Paul C.; Vasilevska, Veronika; Al-Hamadi, Ayoub; Eder, Julia; Falkai, Peter; Steiner, Johann
In: Frontiers in psychiatry - Lausanne : Frontiers Research Foundation, Bd. 14 (2023), Artikel 1227426, insges. 13 S.
Hybrid bag-of-visual-words and FeatureWiz selection for content-based visual information retrieval
Bakheet, Samy; Al-Hamadi, Ayoub; Soliman, Emadeldeen; Heshmat, Mohamed
In: Sensors - Basel : MDPI, Bd. 23 (2023), Heft 3, Artikel 1653, insges. 24 S.
Classification networks for continuous automatic pain intensity monitoring in video using facial expression on the X-ITE Pain Database
Othman, Ehsan; Werner, Philipp; Saxen, Frerk; Al-Hamadi, Ayoub; Gruss, Sascha; Walter, Steffen
In: Journal of visual communication and image representation - Orlando, Fla. : Academic Press, Bd. 91 (2023), Artikel 103743
Dissertation
An automatic and multi-modal system for continuous pain intensity monitoring based on analyzing data from five sensor modalities
Othman, Ehsan; Al-Hamadi, Ayoub; Wendemuth, Andreas
In: Magdeburg: Universitätsbibliothek, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2023, 1 Online-Ressource (xxi, 155 Seiten, 7,42 MB) [Literaturverzeichnis: Seite 137-155]
Kamerabasierte Messung von Vitalparametern mit verbesserter Störsicherheit
Rapczyński, Michał; Al-Hamadi, Ayoub
In: Magdeburg: Universitätsbibliothek, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2023, 1 Online-Ressource (xxiv, 174 Seiten, 14,06 MB) [Literaturverzeichnis: Seite 159-173]
Non-peer-reviewed journal article
Experimental economics for machine learning - a methodological contribution
Bershadskyy, Dmitri; Dinges, Laslo; Fiedler, Marc-André; Al-Hamadi, Ayoub; Ostermaier, Nina; Weimann, Joachim
In: Magdeburg: Otto-von-Guericke-Universität Magdeburg: Fakultät für Wirtschaftswissenschaft, 2023, 1 Online-Ressource (27 Seiten, 0,68 MB) - (Working paper series; Otto von Guericke Universität Magdeburg, Faculty of Economics and Management; 2023, no. 08)
Automated deception detection from videos - using end-to-end learning based high-level features and classification approaches
Dinges, Laslo; Al-Hamadi, Ayoub; Hempel, Thorsten; Abdelrahman, Ahmed; Weimann, Joachim; Bershadskyy, Dmitri
In: arXiv.org - [Erscheinungsort nicht ermittelbar]. - 2023, Artikel 2307.06625, insges. 29 S.
2022
Peer-reviewed journal article
Robot System Assistant (RoSA) - towards intuitive multi-modal and multi-device human-robot interaction
Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub
In: Sensors - Basel: MDPI, Bd. 22 (2022), Heft 3, Artikel 923, insges. 24 S.
A fingerprint-based verification framework using Harris and SURF feature detection algorithms
Bakheet, Samy; Al-Hamadi, Ayoub; Youssef, Rehab
In: Applied Sciences - Basel: MDPI, Bd. 12 (2022), 1, insges. 16 S.
Fast and precise binary instance segmentation of 2D objects for automotive applications
Ganganna Ravindra, Darshan; Dinges, Laslo; Al-Hamadi, Ayoub; Baranau, Vasili
In: Journal of WSCG - Plzen, Bd. 30 (2022), 1-2, S. 302-305
Dissertation
Automatisierte bild- und videobasierte Mimikanalyse für die Messung von Schmerzen und Facial Action Units
Werner, Philipp; Hamadi, Ayoub; Tönnies, Klaus
In: Magdeburg: Universitätsbibliothek, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2022, 1 Online-Ressource (xvi, 179 Seiten, 10,89 MB) [Literaturverzeichnis: Seite 159-178]
2021
Book chapter
Robot System Assistant (RoSA) - concept for an intuitive multi-modal and multi-device interaction system
Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Al-Hamadi, Ayoub
In: Proceedings of the 2021 IEEE International Conference on Human-Machine Systems (ICHMS) , 2021 - [Piscataway, NJ] : IEEE ; Nürnberger, Andreas, insges. 4 S.
Using facial action recognition to evaluate user perception in aggravated HRC scenarios
Dinges, Laslo; Al-Hamadi, Ayoub; Hempel, Thorsten; Al Aghbari, Zaher
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2021, S. 195-199
Semantic-aware environment perception for mobile human-robot interaction
Hempel, Thorsten; Fiedler, Marc-André; Khalifa, Aly; Al-Hamadi, Ayoub; Dinges, Laslo
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2021, S. 200-203
Facial video-based respiratory rate recognition interpolating pulsatile PPG rise and fall times
Fiedler, Marc-André; Rapczyński, Michał; Al-Hamadi, Ayoub
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2021, S. 545-549
Peer-reviewed journal article
Automatic vs. human recognition of pain intensity from facial expression on the X-ITE pain database
Othman, Ehsan; Werner, Philipp; Saxen, Frerk; Al-Hamadi, Ayoub; Gruss, Sascha; Walter, Steffen
In: Sensors - Basel : MDPI, Bd. 21 (2021), Heft 9, Artikel 3273, insges. 19 S.
Robust hand gesture recognition using multiple shape-oriented visual cues
Bakheet, Samy; Al-Hamadi, Ayoub
In: EURASIP journal on image and video processing / European Association for Speech, Signal and Image Processing - New York, NY : Hindawi Publishing Corp., Bd. 2021 (2021), Artikel 26, insges. 18 S.
A framework for instantaneous driver drowsiness detection based on improved HOG features and Naive Bayesian classification
Bakheet, Samy; Al-Hamadi, Ayoub
In: Brain Sciences - Basel : MDPI AG, Bd. 11 (2021), Heft 2, Artikel 240
Robo-HUD - interaction concept for contactless operation of industrial cobotic systems
Strazdas, Dominykas; Hintz, Jan; Al-Hamadi, Ayoub
In: Applied Sciences - Basel : MDPI, Bd. 11 (2021), Heft 12, Artikel 5366, insges. 12 S.
Simultaneous prediction of valence/arousal and emotion categories and its application in an HRC scenario
Handrich, Sebastian; Dinges, Laslo; Al-Hamadi, Ayoub; Werner, Philipp; Saxen, Frerk; Al Aghbari, Zaher
In: Journal of ambient intelligence and humanized computing - Berlin : Springer, Bd. 12 (2021), Heft 1, S. 57-73
SFPD - simultaneous face and person detection in real-time for human-robot interaction
Fiedler, Marc-André; Werner, Philipp; Khalifa, Aly; Al-Hamadi, Ayoub
In: Sensors - Basel : MDPI, Bd. 21 (2021), Heft 17, Artikel 5918, insges. 17 S.
2020
Book chapter
Deep facial expression recognition with occlusion regularization
Pandya, Nikul; Werner, Philipp; Al-Hamadi, Ayoub
In: Advances in visual computing: 15th International Symposium on Visual Computing, ISVC 2020, San Diego, CA, USA, October 5-7, 2020 : proceedings / George Bebis [und 8 andere] (eds.) - Cham: Springer . - 2020, insges. 11 S.
SLAM-based multistate tracking system for mobile human-robot interaction
Hempel, Thorsten; Al-Hamadi, Ayoub
In: Image analysis and recognition ; Part 1/ ICIAR - Cham: Springer; Campilho, Aurélio Part 1 . - 2020, S. 368-376
Facial action unit recognition in the wild with multi-task CNN self-training for the EmotioNet Challenge
Werner, Philipp; Saxen, Frerk; Al-Hamadi, Ayoub
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2020, S. 1649-1652
Peer-reviewed journal article
Simultaneous prediction of valence/arousal and emotions on AffectNet, Aff-Wild and AFEW-VA
Handrich, Sebastian; Dinges, Laslo; Al-Hamadi, Ayoub; Werner, Philipp; Al Aghbari, Zaher
In: Procedia computer science - Amsterdam [u.a.]: Elsevier, Bd. 170 (2020), S. 634-641
Multimodale Erkennung von Schmerzintensität und -modalität mit maschinellen Lernverfahren
Walter, Steffen; Al-Hamadi, Ayoub; Gruss, Sascha; Frisch, Stephan; Traue, Harald C.; Werner, Philipp
In: Der Schmerz: Organ der Deutschen Gesellschaft zum Studium des Schmerzes, der Österreichischen Schmerzgesellschaft und der Deutschen Interdisziplinären Vereinigung für Schmerztherapie - Berlin: Springer, Bd. 34 (2020), 5, S. 400-409
Fusion-based approach for respiratory rate recognition from facial video images
Fiedler, Marc-André; Rapczyński, Michal; Al-Hamadi, Ayoub
In: IEEE access/ Institute of Electrical and Electronics Engineers - New York, NY: IEEE, Bd. 8 (2020), S. 130036-130047
Von der Fremdbeurteilung des Schmerzes zur automatisierten multimodalen Messung der Schmerzintensität - narrativer Review zum Stand der Forschung und zur klinischen Perspektive
Frisch, Stephan; Werner, Philipp; Al-Hamadi, Ayoub; Traue, Harald C.; Gruss, Sascha; Walter, Steffen
In: Der Schmerz: Organ der Deutschen Gesellschaft zum Studium des Schmerzes, der Österreichischen Schmerzgesellschaft und der Deutschen Interdisziplinären Vereinigung für Schmerztherapie - Berlin: Springer, Bd. 34 (2020), 5, S. 376-387
Finding K most significant motifs in big time series data
Al Aghbari, Zaher; Al-Hamadi, Ayoub
In: Procedia computer science - Amsterdam [u.a.]: Elsevier, Bd. 170 (2020), S. 595-601
Pixel-wise motion segmentation for SLAM in dynamic environments
Hempel, Thorsten; Al-Hamadi, Ayoub
In: IEEE access/ Institute of Electrical and Electronics Engineers - New York, NY: IEEE, Bd. 8 (2020), S. 164521-164528
Chord-length shape features for license plate character recognition
Bakheet, Samy; Al-Hamadi, Ayoub
In: Journal of Russian laser research - New York, NY [u.a.]: Consultants Bureau, Bd. 41 (2020), 1, S. 156-170
Robots and wizards - an investigation into natural Human-Robot Interaction
Strazdas, Dominykas; Hintz, Jan; Felßberg, Anna-Maria; Al-Hamadi, Ayoub
In: IEEE access / Institute of Electrical and Electronics Engineers - New York, NY : IEEE, Bd. 8 (2020), S. 207635-207642
2019
Abstract
Multimodale Schmerzerkennung mittels Algorithmen der künstlichen Intelligenz
Frisch, Stephan; Werner, Philipp; Al-Hamadi, Ayoub; Gruss, Sascha; Walter, Steffen
In: Der Schmerz - Berlin: Springer, Volume 33 (2019), Suppl. 1, Poster P04.05, Seite S49
Book chapter
Cross-database evaluation of pain recognition from facial video
Al-Hamadi, Ayoub; Othman, Ehsan; Werner, Philipp; Saxen, Frerk; Walter, Steffen
In: ISPA 2019 - Piscataway, NJ: IEEE; International Symposium on Image and Signal Processing and Analysis (11.:2019), S. 181-186[Konferenz: ISPA 2019, Dubrovnik, Croatia, September 23-25, 2019]
Face attribute detection with MobileNetV2 and NasNet-Mobile
Saxen, Frerk; Werner, Philipp; Othman, Ehsan; Handrich, Sebastian; Dinges, Laslo; Al-Hamadi, Ayoub
In: ISPA 2019: 11th International Symposium on Image and Signal Processing and Analysis / Lončarić, Sven - Piscataway, NJ: IEEE, 2019, S. 176-180 [Konferenz: ISPA 2019, Dubrovnik, Croatia, September 23-25, 2019]
Simultaneous prediction of valence/arousal and emotion categories in real-time
Handrich, Sebastian; Dinges, Laslo; Saxen, Frerk; Al-Hamadi, Ayoub; Wachmuth, Sven
In: Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2019): Kuala Lumpur, Malaysia, September 17-19, 2019 / IEEE Signal Processing Society, Malaysia Chapter - [Piscataway, NJ]: IEEE, 2019, insges. 1 S. [Konferenz: IEEE ICSIPA 2019, Malaysia, September 17-19, 2019]
Generalizing to unseen head poses in facial expression recognition and action unit intensity estimation
Werner, Philipp; Saxen, Frerk; Al-Hamadi, Ayoub; Yu, Hui
In: 14th IEEE International Conference on Automatic Face and Gesture Recognition - Piscataway, NJ: IEEE, insges. 8 S., 2019[Konferenz: FG 2019, Lille, France, 14-18 May, 2019]
Verhinderung der Überwindung von Gesichtserkennung durch kamerabasierte Vitalparameterschätzung
Rapczynski, Michal; Lang, Christopher; Al-Hamadi, Ayoub
In: Berlin: Gesellschaft z. Förderung angewandter Informatik, insges. 10 S., 2019[Tagung: FWS19, Berlin, 02. Oktober 2019]
Improvement of data-driven 3-D surface quality inspection by deformation simulation
von Enzberg, Sebastian; Al-Hamadi, Ayoub
In: [Piscataway, NJ]: IEEE; IEEE ICSIPA (6.:2019), insges. 1 S.[Konferenz: IEEE ICSIPA 2019, Malaysia, September 17-19, 2019]
Twofold-multimodal pain recognition with the X-ITE pain database
Werner, Philipp; Al-Hamadi, Ayoub; Gruss, Sascha; Walter, Steffen
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2019, S. 290-296
Peer-reviewed journal article
Predicting group contribution behaviour in a public goods game from face-to-face communication
Othman, Ehsan; Saxen, Frerk; Bershadskyy, Dmitri; Werner, Philipp; Al-Hamadi, Ayoub; Weimann, Joachim
In: Sensors - Basel : MDPI - Volume 19 (2019), 12, Artikelnummer 2786 [Special Issue: Sensors for affective computing and sentiment analysis]
Automatic recognition methods supporting pain assessment - a survey
Werner, Philipp; Lopez-Martinez, Daniel; Walter, Steffen; Al-Hamadi, Ayoub; Gruss, Sascha; Picard, Rosalind W.
In: IEEE transactions on affective computing - New York, NY: IEEE, S. 1-1, 2019[Early Access]
Effects of video encoding on camera based heart rate estimation
Rapczynski, Michal; Werner, Philipp; Al-Hamadi, Ayoub
In: IEEE transactions on biomedical engineering: a publication of the IEEE Engineering in Medicine and Biology Society / Institute of Electrical and Electronics Engineers - New York, NY: IEEE, Bd. 66.2019, 12, S. 3360-3370
Multi-modal signals for analyzing pain responses to thermal and electrical stimuli
Gruss, Sascha; Geiger, Mattis; Werner, Philipp; Wilhelm, Oliver; Traue, Harald C.; Al-Hamadi, Ayoub; Walter, Steffen
In: JoVE - [S.l.], 2019, 146, Art.-Nr. e59057, insgesamt 12 Seiten
2018
Book chapter
Landmark based head pose estimation benchmark and method
Werner, Philipp; Saxen, Frerk; Al-Hamadi, Ayoub
In: 2017 IEEE International Conference on Image Processing: proceedings : 17-20 September 2017, China National Convention Center, Beijing, China - Piscataway, NJ: IEEE, S. 3909-3913, 2018[Konferenz: IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, 17 - 20 September 2017]
Localizing body joints from single depth images using geodetic distances and random tree walk
Handrich, Sebastian; Al-Hamadi, Ayoub
In: 2017 IEEE International Conference on Image Processing: proceedings : 17-20 September 2017, China National Convention Center, Beijing, China - Piscataway, NJ: IEEE, S. 146-150, 2018[Konferenz: IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, 17 - 20 September 2017]
Facial action unit intensity estimation and feature relevance visualization with random regression forests
Werner, Philipp; Handrich, Sebastian; Al-Hamadi, Ayoub
In: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII): 23-26 Oct. 2017 - [Piscataway, NJ]: IEEE, S. 401-406, 2018[Copyright: 2017; Konferenz: 7. International Conference on Affective Computing and Intelligent Interaction, ACII, San Antonio, Tex., 23 - 26 October 2017]
Real vs. fake emotion challenge - learning to rank authenticity from facial activity descriptors
Saxen, Frerk; Werner, Philipp; Al-Hamadi, Ayoub
In: 2017 IEEE International Conference on Computer Vision workshops: 22-29 October 2017, Venice, Italy : proceedings - Piscataway, NJ: IEEE, S. 3073-3078, 2018[Konferenz: 2017 IEEE International Conference on Computer Vision workshops, Venice, Italy, 22-29 October 2017]
Intention-based anticipatory interactive systems
Wendemuth, Andreas; Böck, Ronald; Nürnberger, Andreas; Al-Hamadi, Ayoub; Brechmann, André; Ohl, Frank W.
In: SMC 2018: 2018 IEEE International Conference on Systems, Man, and Cybernetics, 7-10 October 2018, Miyazaki, Japan : proceedings / Hata, Yutaka - Piscataway, NJ: IEEE, 2018, S. 2579-2584 [Konferenz: 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC, Miyazaki, Japan, 7-10 October 2018]
Towards a novel reidentification method using metaheuristics
Ljouad, Tarik; Amine, Aouatif; Al-Hamadi, Ayoub; Rziza, Mohammed
In: Recent Developments in Metaheuristics - Cham: Springer, S. 429-445, 2018 - (Operations Research; Computer Science Interfaces Series; 62)
Automatic recognition of common Arabic handwritten words based on OCR and N-GRAMS
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; Nürnberger, Andreas
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2018, S. 3625-3629 [Konferenz: 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China]
On analogical comparison of ant colony's selectivity decision for migration to an optimal nest site versus reconstruction problem solution by a mouse inside figure 8 maze
Mustafa, Hassan M. H.; Tourkia, Fadhel B.; Al-Hamadi, Ayoub
In: 1st International Conference on Computer Applications & Information Security: ICCAIS' 2018, Riyadh, Kingdom of Saudi Arabia, 04-06 April 2018 - [Piscataway, NJ]: IEEE[Konferenz: 1st International Conference on Computer Applications & Information Security (ICCAIS), Riyadh, Saudi Arabia, 4-6 April 2018]
Analysis of facial expressiveness during experimentally induced heat pain
Werner, Philipp; Al-Hamadi, Ayoub; Walter, Steffen
In: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW): 23-26 Oct. 2017 - [Piscataway, NJ]: IEEE, S. 176-180, 2018[Copyright: 2017; Konferenz: Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW, San Antonio, Tex., 23 - 26 October 2017]
Peer-reviewed journal article
Facial point localization via neural networks in a cascade regression framework
Saeed, Anwar; Al-Hamadi, Ayoub; Neumann, Heiko
In: Multimedia tools and applications: an international journal - Dordrecht [u.a.]: Springer Science + Business Media B.V, Bd. 77.2018, 2, S. 2261-2283
Vision-based human activity recognition using LDCRFs
Elmezain, Mahmoud; Al-Hamadi, Ayoub
In: International Arab journal of e-technology: IAJet - Amman: Arab Open University, Bd. 15.2018, 3, S. 389-395
Automatic Arabic document classification based on the HRWiTD algorithm
Othman, Ehsan; Al-Hamadi, Ayoub
In: Journal of software engineering and applications: JSEA - Irvine, Calif: Scientific Research Publ, Bd. 11.2018, 4, S. 167-179
Generative vs. discriminative recognition models for off-line Arabic handwriting
Elzobi, Moftah; Al-Hamadi, Ayoub
In: Sensors - Basel: MDPI, Vol. 18.2018, 9, Art. 2786, insgesamt 23 S.
Head movements and postures as pain behavior
Werner, Philipp; Al-Hamadi, Ayoub; Limbrecht-Ecklundt, Kerstin; Walter, Steffen; Traue, Harald C.
In: PLOS ONE - San Francisco, California, US : PLOS - Vol. 13.2018, 2, Art. e0192767, insgesamt 17 S.
Article in conference proceedings
Aktives Zeilenkamerasystem zur schnellen und präzisen Rekonstruktion dreidimensionaler Oberflächen in der Produktion
Lilienblum, Erik; Al-Hamadi, Ayoub
In: Sensoren und Messsysteme: Beiträge der 19. ITG/GMA-Fachtagung 26.-27. Juni 2018 in Nürnberg - Berlin: VDE Verlag GmbH, S. 479-482
2017
Book chapter
Facial action unit intensity estimation and feature relevance visualization with random regression forests
Werner, Philipp; Handrich, Sebastian; Al-Hamadi, Ayoub
In: ResearchGATE : scientific network : the leading professional network for scientists - Cambridge, Mass : ResearchGATE Corp, S. 401-406, 2017 [Konferenz: International Conference on Affective Computing and Intelligent Interaction (ACII) 2017]
Automated analysis of head pose, facial expression and affect
Niese, Robert; Al-Hamadi, Ayoub; Neumann, Heiko
In: Companion technology - Cham : Springer ; Biundo-Stephan, Susanne *1955-* . - 2017, S. 365-386
Non-intrusive gesture recognition in real companion environments
Handrich, Sebastian; Rashid, Omer; Al-Hamadi, Ayoub
In: Companion technology - Cham : Springer ; Biundo-Stephan, Susanne *1955-* . - 2017, S. 321-343
Investigation of an augmented reality-based machine operator assistance-system
Saxen, Frerk; Köpsel, Anne; Adler, Simon; Mecke, Rüdiger; Al-Hamadi, Ayoub; Tümler, Johannes; Huckauf, Anke
In: Companion technology - Cham : Springer ; Biundo-Stephan, Susanne *1955-* . - 2017, S. 471-483
Low cost calibration of stereo line scan camera systems
Lilienblum, Erik; Handrich, Sebastian; Al-Hamadi, Ayoub
In: Proceedings of the fifteenth IAPR International Conference on Machine Vision Applications : May 8-12, 2017, Toyoda Auditorium, Nagoya University, Nagoya, Japan - [Piscataway, NJ] : IEEE, S. 296-299 [Konferenz: 15. IAPR International Conference on Machine Vision Applications, MVA 2017, Nagoya, Japan, 8 - 12 May 2017]
Human bodypart classification using geodesic descriptors and random forests
Handrich, Sebastian; Al-Hamadi, Ayoub; Lilienblum, Erik; Liu, Zuofeng
In: IEEE Xplore digital library / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2017, S. 292-295
Multi-modal information processing in companion-systems - a ticket purchase system
Siegert, Ingo; Schüssel, Felix; Schmidt, Miriam; Reuter, Stephan; Meudt, Sascha; Layher, Georg; Krell, Gerald; Hörnle, Thilo; Handrich, Sebastian; Al-Hamadi, Ayoub; Dietmayer, Klaus; Neumann, Heiko; Palm, Günther; Schwenker, Friedhelm; Wendemuth, Andreas
In: Companion technology - Cham : Springer ; Biundo-Stephan, Susanne *1955-* . - 2017, S. 493-500
Continuous low latency heart rate estimation from painful faces in real time
Rapczynski, Michal; Werner, Philipp; Al-Hamadi, Ayoub
In: 2016 23rd International Conference on Pattern Recognition (ICPR): Cancún Center, Cancún, México, December 4-8, 2016 - [Piscataway, NJ]: IEEE, S. 1165-1170, 2017[Copyright: 2016; Konferenz: 23rd International Conference on Pattern Recognition, ICPR, Cancún, México, December 4-8, 2016]
Analysis of facial expressiveness during experimentally induced heat pain
Werner, Philipp; Al-Hamadi, Ayoub; Walter, Steffen
In: ResearchGATE : scientific network : the leading professional network for scientists - Cambridge, Mass : ResearchGATE Corp, S. 176-180, 2017 [Konferenz: International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW 2017)]
Human bodypart classification using geodesic descriptors and random forests
Handrich, Sebastian; Al-Hamadi, Ayoub; Lilienblum, Erik; Liu, Zuofeng
In: Proceedings of the fifteenth IAPR International Conference on Machine Vision Applications: May 8-12, 2017, Toyoda Auditorium, Nagoya University, Nagoya, Japan - [Piscataway, NJ]: IEEE, S. 318-321[Konferenz: 15. IAPR International Conference on Machine Vision Applications, MVA 2017, Nagoya, Japan, 8 - 12 May 2017]
Peer-reviewed journal article
Recognition of human actions based on temporal motion templates
Bakheet, Samy; Al-Hamadi, Ayoub; Mofaddel, M. A.
In: British Journal of Applied Science & Technology: BJAST - London [u.a.]: Sciencedomain International, Vol. 20.2017,5, Article no. BJAST. 28318, insgesamt 11 Seiten
Assessment of iterative closest point registration accuracy for different phantom surfaces captured by an optical 3-D sensor in radiotherapy
Krell, Gerald; Nezhad, Nazila Saeid; Walke, Mathias; Al-Hamadi, Ayoub; Gademann, Günther
In: Computational and mathematical methods in medicine - New York, NY [u.a.]: Hindawi, 2017, Art. 2938504, insgesamt 14 S.
Hand gesture recognition using optimized local Gabor features
Bakheet, Samy; Al-Hamadi, Ayoub
In: Journal of computational and theoretical nanoscience: for all theoretical and computational aspects in science, engineering, and biology - Stevenson Ranch, Calif.: American Scientific Publ., Bd. 14.2017, 3, S. 1380-1389[DOI: 10.1166/jctn.2017.6460]
Article in conference proceedings
Aktive 3D-Zeilensensorsysteme mit vergrößertem Höhenmessbereich
Al-Hamadi, Ayoub; Lilienblum, Erik
In: 3D-NordOst 2017: Tagungsband : 20. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten : Berlin, 07./08. Dezember 2017/ Workshop 3D-NordOst - Berlin, Germany: Gesellschaft zur Förderung Angewandter Informatik e.V. (GFaI), 2017 . - 2017, S. 33-42[Konferenz: 3D-NordOst 2017, 07./08. Dezember 2017, Berlin]
2016
Book chapter
Kontaktfreie kamerabasierte Messung der Herzrate in Echtzeit
Rapczynski, Michal; Werner, Philipp; Al-Hamadi, Ayoub
In: 3D SENSATION - transdisziplinäre Perspektiven: innteract conference - Chemnitz: Verlag aw&l Wissenschaft und Praxis, S. 137-147, 2016[Kongress: innteract conference 2016, Chemnitz, 23. - 24. Juni, 2016]
Advancement in the head pose estimation via depth-based face spotting
Saeed, Anwar; Al-Hamadi, Ayoub; Handrich, Sebastian
In: The 2016 IEEE Symposium Series on Computational Intelligence: December 6th - 9th, 2016, Athens, Greece - Piscataway, NJ: IEEE, insges. 6 S.[Kongress: 2016 IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6-9 December, 2016]
"Bio Vid Emo DB" - a multimodal database for emotion analyses validated by subjective ratings
Zhang, Lin; Walter, Steffen; Ma, Xueyao; Werner, Philipp; Al-Hamadi, Ayoub; Traue, Harald C.; Gruss, Sascha
In: The 2016 IEEE Symposium Series on Computational Intelligence: December 6th - 9th, 2016, Athens, Greece - Piscataway, NJ: IEEE, insges. 6 S.[Kongress: 2016 IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6-9 December, 2016]
Multispektrale Vermessung der Haut zur Verbesserung kontaktloser Herzratenschätzung
Rapczynski, Michal; Zhang, Chen; Rosenberger, Maik; Al-Hamadi, Ayoub
In: 22. Workshop Farbbildverarbeitung: 29.-30. September 2016, Ilmenau / Karl-Heinz Franke, Rico Nestler, Zentrum für Bild- und Signalverarbeitung e.V. (Hrsg.) - Ilmenau: Zentrum für Bild- und Signalverarbeitung, 2016, S. 115-122 [Kongress: 22. Workshop Farbbildverarbeitung, Ilmenau, 29.-30. September 2016]
Der Einfluss von Hautfarbensegmentierung auf die kontaktfreie Schätzung von Vitalparametern
Rapczynski, Michal; Saxen, Frerk; Werner, Philipp; Al-Hamadi, Ayoub
In: 22. Workshop Farbbildverarbeitung: 29.-30. September 2016, Ilmenau - Ilmenau: Zentrum für Bild- und Signalverarbeitung, S. 75-83 [Kongress: 22. Workshop Farbbildverarbeitung, Ilmenau, 29.-30. September 2016]
A framework for joint facial expression recognition and point localization
Saeed, Anwar; Al-Hamadi, Ayoub
In: 2016 23rd International Conference on Pattern Recognition (ICPR): Cancún, México, December 4-8, 2016 - IEEE, S. 4125-4130[Beitrag auf USB-Stick]
Peer-reviewed journal article
A hybrid cascade approach for human skin segmentation
Bakheet, Samy; Al-Hamadi, Ayoub
In: British Journal of Mathematics & Computer Science - London [u.a.]: Sciencedomain International, Bd. 17.2016, 6, insges. 14 S.
A multiresolution approach to model-based 3-D surface quality inspection
von Enzberg, Sebastian; Al-Hamadi, Ayoub
In: IEEE transactions on industrial informatics - New York, NY: IEEE, Bd. 12.2016, 4, S. 1498-1507
Mimische Aktivität differenzierter Schmerzintensitäten - Korrelation der Merkmale von Facial Action Coding System und Elektromyographie
Limbrecht-Ecklundt, K.; Werner, Philipp; Traue, H. C.; Al-Hamadi, Ayoub; Walter, S.
In: Der Schmerz: Organ der Deutschen Gesellschaft zum Studium des Schmerzes, der Österreichischen Schmerzgesellschaft und der Deutschen Interdisziplinären Vereinigung für Schmerztherapie - Berlin: Springer, Bd. 30.2016, 3, S. 248-256
Automatic pain assessment with facial activity descriptors
Werner, Philipp; Al-Hamadi, Ayoub; Limbrecht-Ecklundt, Kerstin; Walter, Steffen; Gruss, Sascha; Traue, Harald
In: IEEE transactions on affective computing / Institute of Electrical and Electronics Engineers - New York, NY : IEEE . - 2016, insges. 14 S.
A discriminative framework for action recognition using f-HOL features
Bakheet, Samy; Al-Hamadi, Ayoub
In: Information - Basel: MDPI Publ, Bd. 7.2016, 4, S. 68
2015
Book chapter
Optical sensor tracking and 3D-reconstruction of hydrogen-induced cracking
Freye, Christian; Bendicks, Christian; Lilienblum, Erik; Al-Hamadi, Ayoub
In: Advanced Concepts for Intelligent Vision Systems - Cham : Springer . - 2015, S. 521-529 - (Lecture notes in computer science; 9386)
Data fusion for automated pain recognition
Walter, Steffen; Gruss, Sascha; Kächele, Markus; Schwenker, Friedhelm; Werner, Philipp; Al-Hamadi, Ayoub; Andrade, Adriano; Moreira, Gustavo
In: PervasiveHealth : 9th International Conference on Pervasive Computing Technologies for Healthcare, Istanbul, Turkey, 20th - 23rd May 2015. - ICST, Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, S. 261-264
Aktive 3D-Zeilensensorsysteme mit vergrößertem Höhenmessbereich
Lilienblum, Erik; Al-Hamadi, Ayoub
In: 3D-NordOst 2015: Tagungsband : 18. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten : Teil des 3D-Veranstaltungs-Clusters Berlin : Berlin, 03./04. Dezember 2015 - Berlin: Gesellschaft zur Förderung angewandter Informatik e.V. (GFaI), insges. 10 S.[Konferenz: 3D-NordOst 2015]
Bio-visual fusion for person-independent recognition of pain intensity
Kächele, Markus; Werner, Philipp; Al-Hamadi, Ayoub; Palm, Günther; Walter, Steffen; Schwenker, Friedhelm
In: Multiple Classifier Systems - Cham [u.a.] : Springer ; Schwenker, Friedhelm . - 2015, S. 220-230 - (Lecture notes in computer science; 9132)
Utilizing the bezier descriptors for hand gesture recognition
Rashid, Omer; Al-Hamadi, Ayoub
In: 2015 IEEE International Conference on Image Processing, ICIP 2015, September 27-30, 2015, Québec City, Canada - Piscataway, NJ: IEEE, insges. 5 S.
Quantitative analysis of surface reconstruction accuracy achievable with the TSDF representation
Werner, Diana; Werner, Philipp; Al-Hamadi, Ayoub
In: Computer Vision Systems - Cham [u.a.] : Springer ; Nalpantidis, Lazaros . - 2015, S. 167-176 - (Lecture notes in computer science; 9163)
Full-body human pose estimation by combining geodesic distances and 3D-point cloud registration
Handrich, Sebastian; Al-Hamadi, Ayoub
In: Advanced Concepts for Intelligent Vision Systems - Cham : Springer . - 2015, S. 287-298 - (Lecture notes in computer science; 9386)
Aktives Zeilensensorsystem zur kontinuierlichen 3D-Oberflächenrekonstruktion von Endlosmaterialien
Lilienblum, Erik; Al-Hamadi, Ayoub
In: 3D-NordOst 2015 : Tagungsband : 18. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten : Teil des 3D-Veranstaltungs-Clusters Berlin : Berlin, 03./04. Dezember 2015. - Berlin : Gesellschaft zur Förderung angewandter Informatik e.V. (GFaI), insges. 12 S. Kongress: 3D-NordOst; 18 (Berlin) : 2015.12.03-04
Boosted human head pose estimation using kinect camera
Saeed, Anwar; Al-Hamadi, Ayoub
In: 2015 IEEE International Conference on Image Processing, ICIP 2015, September 27-30, 2015, Québec City, Canada - Piscataway, NJ: IEEE, insges. 5 S.
Peer-reviewed journal article
ASM based synthesis of handwritten arabic text pages
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed
In: The ScientificWorld journal - Boynton Beach, Fla., 2015, Article ID 323575, insgesamt 18 S.
Head pose estimation on top of Haar-like face detection - a study using the Kinect sensor
Saeed, Anwar; Al-Hamadi, Ayoub; Ghoneim, Ahmed
In: Sensors - Basel : MDPI, Bd. 15 (2015), Heft 9, S. 20945-20966
A structured light approach for 3-D surface reconstruction with a stereo line-scan system
Lilienblum, Erik; Al-Hamadi, Ayoub
In: IEEE transactions on instrumentation and measurement: IM : a publication of the Instrumentation and Measurement Society - New York, NY: IEEE, Bd. 64.2015, 5, S. 1266 - 1274
Dissertation
Static and dynamic interpretations of hand specific body language cues for HCI
Ahmad, Omer Rashid; Al-Hamadi, Ayoub; Wendemuth, Andreas
In: Magdeburg, Dissertation Otto-von-Guericke-Universität Magdeburg, Fakultät für Elektrotechnik und Informationstechnik 2015, xxviii, 142 Seiten [Literaturverzeichnis: Seite [125]-142]
Article in conference proceedings
Regression-based head pose estimation in 2D images
Saeed, Anwar; Al-Hamadi, Ayoub; Niese, Robert
In: Proceedings of the 1st International Symposium on Companion-Technology (ISCT 2015): September 23rd - 25th, Ulm University, Germany, 2015; Biundo-Stephan, Susanne . - 2015, S. 161-166 Kongress: International Symposium on Companion-Technology, ISCT 1 (Ulm : 2015.09.23-25)
2014
Book chapter
Color-based skin segmentation - an evaluation of the state of the art
Saxen, Frerk; Al-Hamadi, Ayoub
In: 2014 IEEE International Conference on Image Processing, ICIP 2014, October 27-30, 2014, Paris, France - Piscataway, NJ: IEEE, S. 4467-4471 Kongress: ICIP 2014 (Paris, France : 2014.10.27-30) [Beitrag auf CD-ROM]
Simultaneous multi-camera calibration based on phase-shift measurements on planar surfaces
Rapczynski, Michal; Lilienblum, Erik; von Enzberg, Sebastian; Al-Hamadi, Ayoub
In: 2014 IEEE International Instrumentation and Measurement Technology Conference (I2MTC 2014) ; 1 - Piscataway, NJ: IEEE, 2014 . - 2014, S. 175-180 Kongress: I2MTC (Montevideo : 2014.05.12-15)
Superpixels for skin segmentation
Saxen, Frerk; Al-Hamadi, Ayoub
In: 20. Workshop Farbbildverarbeitung: 25. - 26. September 2014, Wuppertal ; [Tagungsband] - Ilmenau, S. 153-159 Kongress: Workshop Farbbildverarbeitung 20 (Wuppertal : 2014.09.25-26)
Multimodal automatic pain recognition via video signals and biopotentials
Walter, Steffen; Gruss, Sascha; Hazer, D.; Traue, Harald C.; Werner, Philipp; Al-Hamadi, Ayoub; Moreira da Silva, G.; Andrade, A. O.
In: Tecnologia, técnicas e tendências em engenharia biomédica - Bauru: Canal, S. 201-210, 2014
A structured light approach for 3-D surface reconstruction with a stereo line-scan system
Lilienblum, Erik; Al-Hamadi, Ayoub
In: 2014 IEEE International Instrumentation and Measurement Technology Conference (I2MTC 2014) ; 2 - Piscataway, NJ: IEEE, 2014 . - 2014, S. 1171-1176 Kongress: I2MTC (Montevideo : 2014.05.12-15)
Gabor wavelet recognition approach for off-line handwritten arabic using explicit segmentation
Elzobi, Moftah; Al-Hamadi, Ayoub; Al Aghbari, Zaher; Dinges, Laslo; Saeed, Anwar
In: Image Processing and Communications Challenges 5 / Choras, Ryszard S. - Heidelberg : Springer International Publishing . - 2014, S. 245-254
Pedestrian tracking with occlusion using a 24 GHz automotive radar
Heuer, Michael; Al-Hamadi, Ayoub; Rain, Alexander; Meinecke, Marc-Michael; Rohling, Hermann
In: 2014 15th International Radar Symposium (IRS 2014): Gdańsk, Poland, June 16-18, 2014 / eds. Anna Kurowska; ... - Warsaw: Warsaw Univ. of Technology, 2014, pp. 73-76. Conference: IRS 15 (Gdańsk, Poland: 2014.06.16-18)
Multiple camera approach for SLAM based ultrasonic tank roof inspection
Freye, Christian; Bendicks, Christian; Lilienblum, Erik; Al-Hamadi, Ayoub
In: Image Analysis and Recognition / Campilho, Aurélio - Cham [et al.]: Springer, 2014, pp. 453-460 - (Lecture Notes in Computer Science; 8814). Conference: ICIAR 11 (Vilamoura, Portugal: 2014.10.22-24)
Detection and tracking approach using an automotive radar to increase active pedestrian safety
Heuer, Michael; Al-Hamadi, Ayoub; Rain, Alexander; Meinecke, Marc-Michael
In: IEEE Intelligent Vehicles Symposium proceedings, 2014: 8-11 June 2014, Dearborn, Michigan, USA [postponed from] Ypsilanti, MI, USA - Piscataway, NJ: IEEE, pp. 890-893. Conference: IVS 25 (Dearborn, Mich.: 2014.06.08-11)
Automatic pain recognition from video and biomedical signals
Werner, Philipp; Walter, Steffen; Al-Hamadi, Ayoub; Gruss, Sascha; Niese, Robert; Traue, Harald C.
In: 22nd International Conference on Pattern Recognition - Piscataway, NJ: IEEE, 2014, pp. 4582-4587
A defect recognition system for automated inspection of non-rigid surfaces
Enzberg, Sebastian; Al-Hamadi, Ayoub
In: 22nd International Conference on Pattern Recognition: 24-28 August 2014, Stockholm, Sweden; proceedings - Piscataway, NJ: IEEE, 2014, pp. 1812-1816
Truncated signed distance function - experiments on voxel size
Werner, Diana; Al-Hamadi, Ayoub; Werner, Philipp
In: Image Analysis and Recognition / Campilho, Aurélio - Cham [et al.]: Springer, 2014, pp. 357-364 - (Lecture Notes in Computer Science; 8815). Conference: ICIAR 11 (Vilamoura, Portugal: 2014.10.22-24)
Automatic heart rate estimation from painful faces
Werner, Philipp; Al-Hamadi, Ayoub; Walter, Steffen; Gruss, Sascha; Traue, Harald C.
In: 2014 IEEE International Conference on Image Processing, ICIP 2014, October 27-30, 2014, Paris, France - Piscataway, NJ: IEEE, pp. 1947-1951. Conference: ICIP 2014 (Paris, France: 2014.10.27-30) [contribution on CD-ROM]
Peer-reviewed journal article
Application of neural networks' modeling on optimal analysis and evaluation of e-learning systems' performance (time response approach)
Abdul Hamid, Zedan M.; Al-Ghamdi, Saeed A.; Mustafa, Hassan M. H.; Al-Hamadi, Ayoub; Al-Bassiouni, Abdel Aziz M.
In: Elixir - [S.l.]: [s.n.], Vol. 70 (2014), pp. 24063-24067
Comparative learning applied to intensity rating of facial expressions of pain
Werner, Philipp; Al-Hamadi, Ayoub; Niese, Robert
In: International journal of pattern recognition and artificial intelligence: IJPRAI - Singapore [et al.]: World Scientific Publ. Co, Vol. 28 (2014), 5, Art. 1451008, 26 pages
Frame-based facial expression recognition using geometrical features
Saeed, Anwar; Al-Hamadi, Ayoub; Niese, Robert; Elzobi, Moftah
In: Advances in human-computer interaction - New York, NY: Hindawi, 2014, Art. 408953, 13 pages
Image-based methods for interaction with head-worn worker-assistance systems
Saxen, Frerk; Rashid, Omar; Al-Hamadi, Ayoub; Adler, Simon; Kernchen, Alexa; Mecke, Rüdiger
In: Journal of Intelligent Learning Systems and Applications: JILSA - Irvine, Calif.: Scientific Research Publ., Vol. 6 (2014), 3, pp. 141-152
Automatic pain quantification using autonomic parameters
Walter, Steffen; Gruss, Sascha; Limbrecht-Ecklundt, Kerstin; Traue, Harald C.; Werner, Philipp; Al-Hamadi, Ayoub; Diniz, Nicolai; Moreira da Silva, Gustavo; Andrade, Adriano O.
In: Psychology & neuroscience - Rio de Janeiro: Departamento de Psicologia, Vol. 7 (2014), issue 3, pp. 363-380
Article in conference proceedings
Active pedestrian safety and road surface estimation - an overview of the ARTRAC initiative
Häkli, Janne; Nummila, Kai; Meinecke, Marc-M.; Heuer, Michael; Al-Hamadi, Ayoub; Rohling, Hermann; Sorowka, Peter
In: 10th ITS European Congress: Helsinki, Finland, 16-19 June 2014 - [Brussels]: ERTICO - ITS Europe, 2014, Paper no. TP 0111, 9 pages. Conference: ITS European Congress 10 (Helsinki: 2014.06.16-19)
Wireless communication techniques for half-automated evaluation of automotive active pedestrian safety systems
Heuer, Michael; Al-Hamadi, Ayoub; Meinecke, Marc-Michael; Schneider, Eugen; Rohling, Hermann; Sorowka, Peter
In: CERGAL 2014: International Symposium on Certification of GNSS Systems & Services, Dresden, Germany, 08-09 July 2014; proceedings - German Inst. of Navigation, 2014, 9 pages. Conference: CERGAL (Dresden: 2014.07.08-09)
Projection of the modified cuckoo search metaheuristic into the multiple pedestrian tracking problem
Ljouad, Tarik; Al-Hamadi, Ayoub; Amine, Aouatif; Rziza, Mohammed
In: International Conference on Metaheuristics and Nature Inspired Computing, META'14: October 27th-31st 2014, Marrakech, Morocco - Marrakech, 2 pages. Conference: META'14 (Marrakech, Morocco: 2014.10.27-31) [contribution on CD-ROM]
Non-peer-reviewed journal article
Synthetic-based validation of segmentation of handwritten Arabic words
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
In: Manuscript cultures: mc - Hamburg: Univ., SFB 950, 2014, 7, pp. 10-18
2013
Book chapter
Using speaker group dependent modelling to improve fusion of fragmentary classifier decisions
Siegert, Ingo; Glodek, Michael; Panning, Axel; Krell, Gerald; Schwenker, Friedhelm; Al-Hamadi, Ayoub; Wendemuth, Andreas
In: Proceedings of the 2013 IEEE International Conference on Cybernetics (CYBCONF 2013): Lausanne, Switzerland, 13-15 June 2013. - IEEE, pp. 132-137. Conference: CYBCONF (Lausanne, Switzerland: 2013.06.13-15)
On optimality of teaching quality for a mathematical topic using Neural Networks (with a case study)
Al-Ghamdi, S. A.; Mustafa, H. M. H.; Al-Hamadi, Ayoub; Kortam, M. H.; Al-Bassiouni, A. M.
In: IEEE Global Engineering Education Conference (EDUCON), 2013. - Piscataway, NJ: IEEE, pp. 422-430. Conference: EDUCON (Berlin: 2013.03.13-15)
A coded 3d calibration method for line-scan cameras
Lilienblum, Erik; Al-Hamadi, Ayoub; Michaelis, Bernd
In: Pattern Recognition / Weickert, Joachim - Berlin, Heidelberg: Springer, 2013, pp. 81-90 - (Lecture Notes in Computer Science; 8142). Conference: GCPR 35 (Saarbrücken: 2013.09.03-06)
An observation model for high resolution radar data in the context of an automotive pedestrian safety system
Heuer, Michael; Al-Hamadi, Ayoub; Meinecke, M.-M.
In: 14th International Radar Symposium (IRS), 2013. - Piscataway, NJ: IEEE, pp. 714-719. Conference: IRS 14 (Dresden: 2013.06.19-21)
Accurate, fast and robust realtime face pose estimation using Kinect camera
Niese, Robert; Werner, Philipp; Al-Hamadi, Ayoub
In: IEEE SMC 2013. - Piscataway, NJ: IEEE, pp. 487-490. Conference: SMC 2013 (Manchester, UK: 2013.10.13-16)
The effectiveness of using geometrical features for facial expression recognition
Saeed, Anwar; Al-Hamadi, Ayoub; Niese, Robert
In: Proceedings of the 2013 IEEE International Conference on Cybernetics (CYBCONF 2013). - IEEE, pp. 122-127. Conference: CYBCONF (Lausanne, Switzerland: 2013.06.13-15)
Automatic realtime user performance-driven avatar animation
Behrens, Stephanie; Al-Hamadi, Ayoub; Redweik, Eicke; Niese, Robert
In: IEEE SMC 2013. - Piscataway, NJ: IEEE, pp. 2694-2699. Conference: SMC 2013 (Manchester, UK: 2013.10.13-16)
An observation model for high resolution radar data in the context of an automotive pedestrian safety system
Heuer, Michael; Al-Hamadi, Ayoub; Meinecke, Marc-Michael
In: Proceedings of the International Radar Symposium, IRS 2013; Vol. 1. - Göttingen: Cuvillier, 6 pages. Conference: IRS 14 (Dresden: 2013.06.19-21)
Automatic user-specific avatar parametrisation and emotion mapping
Behrens, Stephanie; Al-Hamadi, Ayoub; Niese, Robert; Redweik, Eicke
In: Advanced concepts for intelligent vision systems - Cham [et al.]: Springer, 2013, pp. 192-202 - (Lecture Notes in Computer Science; 8192). Conference: ACIVS 15 (Poznań: 2013.10.28-31)
Gestic-based human machine interface for robot control
Tornow, Michael; Al-Hamadi, Ayoub; Borrmann, Vinzenz
In: IEEE SMC 2013. - Piscataway, NJ: IEEE, pp. 2706-2711. Conference: SMC 2013 (Manchester, UK: 2013.10.13-16)
A multi-agent mobile robot system with environment perception and HMI capabilities
Tornow, Michael; Al-Hamadi, Ayoub; Borrmann, Vinzenz
In: 2013 IEEE International Conference on Signal & Image Processing Applications (ICSIPA 2013). - Piscataway, NJ: IEEE, 6 pages
A robust method for human pose estimation based on geodesic distance feature
Handrich, Sebastian; Al-Hamadi, Ayoub
In: IEEE SMC 2013. - Piscataway, NJ: IEEE, pp. 906-911. Conference: SMC 2013 (Manchester, UK: 2013.10.13-16)
The BioVid heat pain database
Walter, Steffen; Gruss, Sascha; Ehleiter, Hagen; Tan, Junwen; Traue, Harald C.; Crawcour, Stephen; Werner, Philipp; Al-Hamadi, Ayoub; Andrade, Adriano O.; Moreira da Silva, Gustavo
In: Proceedings of the 2013 IEEE International Conference on Cybernetics (CYBCONF 2013). - IEEE, pp. 128-131. Conference: CYBCONF (Lausanne, Switzerland: 2013.06.13-15)
Tracking of a handheld ultrasonic sensor for corrosion control on pipe segment surfaces
Bendicks, Christian; Lilienblum, Erik; Freye, Christian; Al-Hamadi, Ayoub
In: Advanced concepts for intelligent vision systems - Cham [et al.]: Springer, 2013, pp. 342-353 - (Lecture Notes in Computer Science; 8192). Conference: ACIVS 15 (Poznań: 2013.10.28-31)
A new approach for hand augmentation based on patch modelling
Ahmad, Omer Rashid; Al-Hamadi, Ayoub
In: Advanced concepts for intelligent vision systems. - Cham [et al.]: Springer, 2013, pp. 162-171 - (Lecture Notes in Computer Science; 8192). Conference: ACIVS 15 (Poznań: 2013.10.28-31)
Upper-body pose estimation using geodesic distances and skin-color
Handrich, Sebastian; Al-Hamadi, Ayoub
In: Advanced concepts for intelligent vision systems. - Cham [et al.]: Springer, 2013, pp. 150-161 - (Lecture Notes in Computer Science; 8192). Conference: ACIVS 15 (Poznań: 2013.10.28-31)
Peer-reviewed journal article
Reversible data hiding by integer wavelet transform with lossless EZW bit-stream
Ahmad, Mostafa A.; Meligy, Aly M.; Hashim, Amal H.; Al-Hamadi, Ayoub
In: International journal of computer applications. - [S.l.]: Foundation of Computer Science, Vol. 66 (2013), 2, pp. 8-15
On tutoring quality improvement of a mathematical topic using neural networks
Abdulhamid, Zedan M.; Mustafa, Hassan M. H.; Al-Hamadi, Ayoub
In: Elixir. - [S.l.], Vol. 57 (2013), pp. 14003-14008
Simulation of improved academic achievement for a mathematical topic using neural networks modeling
Al-Ghamdi, Saeed A.; Mustafa, Hassan M. H.; Al-Bassiouni, Abdel Aziz M.; Al-Hamadi, Ayoub
In: World of computer science and information technology journal. - [S.l.], Vol. 3 (2013), 4, pp. 77-84
Affine-invariant feature extraction for activity recognition
Sadek, Samy; Al-Hamadi, Ayoub; Krell, Gerald; Michaelis, Bernd
In: ISRN machine vision - New York, NY: International Scholarly Research Network, 2013, Article ID 215195, 7 pages
Toward real-world activity recognition - an SVM based system using fuzzy directional features
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd
In: WSEAS transactions on information science and applications / World Scientific and Engineering Academy and Society - Athens: WSEAS, Vol. 10 (2013), 4, pp. 116-127
Performance and algorithmic analogy of behavioral learning phenomenon in neural network versus ant colony optimization systems
Mustafa, Hassan M. H.; Al-Hamadi, Ayoub; Al-Shenawy, Nada M.; Al-Ghamdi, Saeed A.; Al-Bassiouni, AbdelAziz M.
In: International journal of advanced research. - [S.l.], Vol. 1 (2013), 6, pp. 313-319
Dissertation
Implicit sequence learning in recurrent neural networks
Glüge, Stefan; Wendemuth, Andreas; Al-Hamadi, Ayoub
In: Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2013, VIII, 132 pages
Vision-based representation and recognition of human activities in image sequences
Bakheet, Samy Sadek Mohamed; Al-Hamadi, Ayoub
In: Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2013, XVIII, 197 pages, with graphical illustrations
Article in conference proceedings
A locale group based line segmentation approach for non-uniform skewed and curved Arabic handwritings
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah
In: 12th International Conference on Document Analysis and Recognition (ICDAR 2013): Washington, DC, August 25-28, 2013 - Piscataway, NJ: IEEE, 2013, pp. 803-806. Conference: ICDAR 12 (Washington, DC: 2013.08.25-28)
Towards pain monitoring - facial expression, head pose, a new database, an automatic system and remaining challenges
Werner, Philipp; Al-Hamadi, Ayoub; Niese, Robert; Walter, Steffen; Gruss, Sascha; Traue, Harald C.
In: Proceedings of the British Machine Vision Conference 2013. - BMVA, 13 pages
A Hidden Markov model-based approach with an adaptive threshold model for off-line Arabic handwriting recognition
Elzobi, Moftah; Al-Hamadi, Ayoub; Dinges, Laslo; Elmezain, Mahmoud; Saeed, Anwar
In: 12th International Conference on Document Analysis and Recognition (ICDAR 2013). - Piscataway, NJ: IEEE, pp. 945-949. Conference: ICDAR 12 (Washington, DC: 2013.08.25-28)
An approach for Arabic handwriting synthesis based on active shape models
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah
In: 12th International Conference on Document Analysis and Recognition (ICDAR 2013): Washington, DC, August 25-28, 2013 - Piscataway, NJ: IEEE, 2013, pp. 1292-1296. Conference: ICDAR 12 (Washington, DC: 2013.08.25-28)
2012
Book chapter
Arabic handwriting recognition using Gabor wavelet transform and SVM
Elzobi, Moftah; Al-Hamadi, Ayoub; Saeed, Anwar; Dinges, Laslo
In: 2012 11th International Conference on Signal Processing, ICSP 2012. - Piscataway, NJ: IEEE, 4 pages. Conference: ICSP 11 (Beijing: 2012.10.21-25)
Effective geometric features for human emotion recognition
Saeed, Anwar; Al-Hamadi, Ayoub; Niese, Robert; Elzobi, Moftah
In: 2012 11th International Conference on Signal Processing, ICSP 2012. - Piscataway, NJ: IEEE, 4 pages. Conference: ICSP 11 (Beijing: 2012.10.21-25)
Human action recognition via affine moment invariants
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: ICPR 2012. - IEEE Computer Society, pp. 218-221. Conference: ICPR 21 (Tsukuba, Japan: 2012.11.11-15)
Fast computation of dense and reliable depth maps from stereo images
Tornow, Michael; Grasshoff, Michael; Nguyen-Thien, Nghia; Al-Hamadi, Ayoub; Michaelis, Bernd
In: Machine vision. - InTech, 2012, pp. 47-72
Pain recognition and intensity rating based on comparative learning
Werner, Philipp; Al-Hamadi, Ayoub; Niese, Robert
In: ICIP 2012. - Piscataway, NJ: IEEE, pp. 2313-2316. Conference: ICIP 2012 (Orlando, Fla.: 2012.09.30-10.03)
An active shape model based approach for Arabic handwritten character recognition
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah
In: 2012 11th International Conference on Signal Processing, ICSP 2012: October 21-25, 2012, Beijing, China / eds. Yuan Baozong ... - Piscataway, NJ: IEEE, 2012, 4 pages. Conference: ICSP 11 (Beijing: 2012.10.21-25)
Multimodal affect recognition in spontaneous HCI environment
Panning, Axel; Siegert, Ingo; Al-Hamadi, Ayoub; Wendemuth, Andreas; Rösner, Dietmar; Frommer, Jörg; Krell, Gerald; Michaelis, Bernd
In: 2012 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2012): Hong Kong, China, 12-15 August 2012; proceedings. - Piscataway, NJ: IEEE, 6 pages. Conference: ICSPCC (Hong Kong: 2012.08.12-15)
Flow modeling and skin-based Gaussian pruning to recognize gestural actions using HMM
Rashid, Omer; Al-Hamadi, Ayoub
In: ICPR 2012. - IEEE Computer Society, pp. 3488-3491. Conference: ICPR 21 (Tsukuba, Japan: 2012.11.11-15)
Facial feature point detection using simplified Gabor wavelets and confidence-based grouping
Panning, Axel; Al-Hamadi, Ayoub; Michaelis, Bernd
In: 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2012). - Piscataway, NJ: IEEE, pp. 2687-2692. Conference: SMC 2012 (Seoul, Korea: 2012.10.14-17)
An SVM approach for activity recognition based on chord-length-function shape features
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: ICIP 2012. - Piscataway, NJ: IEEE, pp. 765-768. Conference: ICIP 2012 (Orlando, Fla.: 2012.09.30-10.03)
Sharpness improvement of warped document images for top view book scanners
Battrawy, Ramy; Schnitzlein, Markus; Nowack, Dietmar; Krell, Gerald; Al-Hamadi, Ayoub
In: The 8th International Conference on Signal-Image Technology & Internet-Based Systems. - Piscataway, NJ: IEEE, 2012, pp. 818-824. Conference: SITIS 8 (Sorrento, Italy: 2012.11.25-29)
Image-based gesture recognition for user interaction with mobile companion-based assistance systems
Saxen, Frerk; Rashid, Omer; Al-Hamadi, Ayoub; Adler, Simon; Kernchen, Alexa; Mecke, Rüdiger
In: Proceedings of the 2012 4th International Conference of Soft Computing and Pattern Recognition (SoCPaR). - Piscataway, NJ: IEEE, pp. 200-203. Conference: SoCPaR (Brunei: 2012.12.10-13) [contribution on CD-ROM]
Recognizing gestural actions
Rashid, Omer; Al-Hamadi, Ayoub
In: 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2012). - Piscataway, NJ: IEEE, pp. 2682-2686. Conference: SMC 2012 (Seoul, Korea: 2012.10.14-17)
Multi hypotheses based object tracking in HCI environments
Handrich, Sebastian; Al-Hamadi, Ayoub
In: ICIP 2012. - Piscataway, NJ: IEEE, pp. 1981-1984. Conference: ICIP 2012 (Orlando, Fla.: 2012.09.30-10.03)
LDCRFs-based hand gesture recognition
Elmezain, Mahmoud; Al-Hamadi, Ayoub
In: 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2012). - Piscataway, NJ: IEEE, pp. 2670-2675. Conference: SMC 2012 (Seoul, Korea: 2012.10.14-17)
Neutral-independent geometric features for facial expression recognition
Saeed, Anwar; Al-Hamadi, Ayoub; Niese, Robert
In: Proceedings of the 2012 4th International Conference of Soft Computing and Pattern Recognition (SoCPaR). - Piscataway, NJ: IEEE, pp. 79-83. Conference: SoCPaR (Brunei: 2012.12.10-13) [contribution on CD-ROM]
Peer-reviewed journal article
Facial expression recognition based on geometric and optical flow features in colour image sequences
Niese, Robert; Al-Hamadi, Ayoub; Farag, Ali; Neumann, Heiko; Michaelis, Bernd
In: IET computer vision. - London: IET, Vol. 6 (2012), 2, pp. 79-89
Multi-modal fusion framework with particle filter for speaker tracking
Saeed, Anwar; Al-Hamadi, Ayoub; Heuer, Michael
In: International journal of future generation communication and networking. - Taejŏn: SERSC, Vol. 5 (2012), 4, pp. 65-76
Multi-object tracking in dynamic scenes by integrating statistical and cognitive approaches
Pathan, Saira Saleem; Rashid, Omer; Al-Hamadi, Ayoub; Michaelis, Bernd
In: International journal of computer science issues. - Mahebourg: SoftwareFirst, Vol. 9 (2012), 4, pp. 180-189
An integrated HCI framework for interpreting meaningful expressions
Rashid, Omer; Al-Hamadi, Ayoub
In: International journal of computer science issues. - Mahebourg: SoftwareFirst, Vol. 9 (2012), 5, pp. 411-421
Chord-length shape features for human activity recognition
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: ISRN machine vision. - New York, NY: International Scholarly Research Network, 2012, 9 pages
IESK-ArDB - a database for handwritten Arabic and an optimized topological segmentation approach
Elzobi, Moftah; Al-Hamadi, Ayoub; Al Aghbari, Zaher; Dinges, Laslo
In: International journal on document analysis and recognition. - Berlin: Springer, 2012, 14 pages
Dissertation
Behavior understanding in non-crowded and crowded scenes
Pathan, Saira Saleem; Michaelis, Bernd; Al-Hamadi, Ayoub
In: Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2012, XXIV, 176 pages
Original article in peer-reviewed international journal
Stereo-camera-based urban environment perception using occupancy grid and object tracking
Nguyen, Thien-Nghia; Michaelis, Bernd; Al-Hamadi, Ayoub; Tornow, Michael; Meinecke, Marc-Michael
In: IEEE transactions on intelligent transportation systems. - New York, NY: Inst. of Electrical and Electronics Engineers, Vol. 13 (2012), 1, pp. 154-165
Interpretation of meaningful expressions by integrating gesture and posture modalities
Rashid, Omer; Al-Hamadi, Ayoub; Dietmayer, Klaus
In: International journal of computer information systems and industrial management applications. - Auburn, Wash.: MIR Labs, Vol. 4 (2012), pp. 589-597
A fast statistical approach for human activity recognition
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: International journal of computer information systems and industrial management applications. - Auburn, Wash.: MIR Labs, Vol. 4 (2012), pp. 334-340
Original article in peer-reviewed periodical-type series
A new multi-camera based facial expression analysis concept
Niese, Robert; Al-Hamadi, Ayoub; Michaelis, Bernd
In: Image Analysis and Recognition; Pt. II. - Berlin [et al.]: Springer, 2012, pp. 64-71 - (Lecture Notes in Computer Science; 7325). Conference: ICIAR 9 (Aveiro, Portugal: 2012.06.25-27)
Speaker tracking using multi-modal fusion framework
Saeed, Anwar; Al-Hamadi, Ayoub; Heuer, Michael
In: Image and signal processing. - Heidelberg [et al.]: Springer, 2012, pp. 539-546 - (Lecture Notes in Computer Science; 7340). Conference: ICISP 5 (Agadir: 2012.06.28-30)
Improving of gesture recognition using multi-hypotheses object association
Handrich, Sebastian; Al-Hamadi, Ayoub; Rashid, Omer
In: Image and signal processing. - Heidelberg [et al.]: Springer, 2012, pp. 298-306 - (Lecture Notes in Computer Science; 7340). Conference: ICISP 5 (Agadir: 2012.06.28-30)
2011
Book chapter
Machine vision based recognition of emotions using the circumplex model of affect
Niese, Robert; Al-Hamadi, Ayoub; Heuer, Michael; Michaelis, Bernd; Matuszewski, Bogdan
In: 2011 International Conference on Multimedia Technology; Vol. 7 - Piscataway, NJ: IEEE, 2011, pp. 6424-6427. Conference: ICMT (Hangzhou, China: 2011.07.26-28)
Robust methods for hand gesture spotting and recognition using Hidden Markov models and conditional random fields
Elmezain, Mahmoud; Al-Hamadi, Ayoub; Sadek, Samy; Michaelis, Bernd
In: IEEE International Symposium on Signal Processing and Information Technology - Piscataway, NJ: IEEE, 2011, pp. 131-136. Conference: ISSPIT 10 (Luxor, Egypt: 2010.12.15-18) [contribution on CD-ROM]
Multi-modal fusion with particle filter for speaker localization and tracking
Heuer, Michael; Al-Hamadi, Ayoub; Michaelis, Bernd; Wendemuth, Andreas
In: 2011 International Conference on Multimedia Technology; Vol. 7 - Piscataway, NJ: IEEE, 2011, pp. 6450-6453. Conference: ICMT (Hangzhou, China: 2011.07.26-28)
Human activity recognition via temporal moment invariants
Sadek, Samy; Al-Hamadi, Ayoub; Elmezain, Mahmoud; Michaelis, Bernd; Sayed, Usama
In: IEEE International Symposium on Signal Processing and Information Technology - Piscataway, NJ: IEEE, 2011, pp. 79-84. Conference: ISSPIT 10 (Luxor, Egypt: 2010.12.15-18) [contribution on CD-ROM]
Original article in peer-reviewed international journal
Quantifying learning creativity through simulation and modeling of swarm intelligence and neural networks
Mustafa, Hassan M.; Al-Somani, Turki F.; Al-Hamadi, Ayoub
In: International journal of online engineering: IJOE / Publisher: International Association of Online Engineering (IAOE) - Kassel: Kassel University Press GmbH, Vol. 7 (2011), 2, pp. 29-35
Offline automatic segmentation based recognition of handwritten Arabic words
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; Al Aghbari, Zaher; Mustafa, Hassan
In: International journal of signal processing, image processing and pattern recognition: IJSIP - Daejeon: Science and Engineering Research Support Center (SERSC), Vol. 4 (2011), 4, pp. 131-144
An action recognition scheme using fuzzy log-polar histogram and temporal self-similarity
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: EURASIP journal on advances in signal processing - Heidelberg: Springer, 2011, 9 pages
Face detection and localization in color images - an efficient neural approach
Sadek, Samy; Al-Hamadi, Ayoub; Michaelis, Bernd; Sayed, Usama
In: Journal of software engineering and applications. - Irvine, Calif.: Scientific Research Publ., Vol. 4 (2011), 12, pp. 682-687
- University of Sharjah (UAE), Prof. Dr. Zaher Al Aghbari
- Universität Ulm, Prof. Dr. Harald C. Traue
- Volkswagen AG, Group Research, Virtual Technologies Research / Dr.-Ing. J. Tümler and Prof. S. Werner
- Fraunhofer IFF Magdeburg, Virtual Engineering business unit / Dr.-Ing. R. Mecke
- Universität Ulm, Prof. Dr.-Ing. Klaus Dietmayer
- Universität Ulm, Prof. Dr. phil. habil. Anke Huckauf
- Universität Ulm, Prof. Dr.-Ing. Heiko Neumann
- University of Louisville (USA), Prof. Dr. Farag
- Volkswagen AG, Group Research, Virtual Technologies Research
- Carl-Zeiss AG
- Fraunhofer HHI Berlin
- Fraunhofer Institute for Machine Tools and Forming Technology IWU
- Martin-Mechanic GmbH
- Pilz GmbH & Co. KG, Ostfildern
- Prof. Dr. Joachim Weimann
- Universität Bielefeld
- Universitätsklinik Ulm, Prof. Eberhard Barth
- Universitätsklinik Ulm, Prof. Steffen Walter
- Universität Ulm, Prof. Steffen Walter
- University of Central Lancashire, UK
- ZBS e.V. / GBS GmbH Ilmenau
- Image processing and understanding
- Pattern recognition
- 3D measurement and 3D surface inspection
Human-machine interaction
- Gesture and facial expression recognition
- Action and event recognition
- Handwriting recognition
Information fusion
- Environment and situation modeling
- Emotion and intention recognition
- Ayoub K. Al-Hamadi received his Master's degree in Electrical Engineering & Information Technology in 1997 and his Ph.D. in Technical Computer Science from Otto-von-Guericke University Magdeburg, Germany, in 2001.
- Since 2003 he has been a junior research group leader at the Institute for Electronics, Signal Processing and Communications at Otto-von-Guericke University Magdeburg.
- In 2008 he became Professor of Neuro-Information Technology at Otto-von-Guericke University Magdeburg.
- In May 2010 Prof. Al-Hamadi received the Habilitation in Artificial Intelligence and the Venia Legendi in the scientific field of "Pattern Recognition and Image Processing" from Otto-von-Guericke University Magdeburg, Germany.
- Prof. Al-Hamadi is the author of more than 300 articles in peer-reviewed international journals, conferences and books.