Human Factors in eHealth

Design of new technologies and cognitive factors for health service, assessment and monitoring.


Principal Investigator

Andreas Duenser, Senior Research Scientist (email)

Team Members

Martin Lochner, Postdoctoral Fellow
Daniel Smith, Research Scientist (affiliate member)
David Silvera Tawil, Research Scientist, Health & Biosecurity (affiliate member)
Jill Freyne, Principal Research Scientist, Health & Biosecurity (affiliate member)


2015 – ongoing

Project Description

In these projects we investigate human factors for health and collaborate with partners to design new health service technologies and assessment tools. The main goal of this work is to create eHealth technologies, through a human-centered approach, that enhance health outcomes, improve service efficiency and reduce the cost of healthcare delivery.

Automatic blood-oxygen targeting in neonatal intensive care – workload and trust in automation

For details on this project, please see Automation, Trust and Workload.

Mobile Speech Assessment

One in five Australian children has some form of speech or language disorder. If left untreated, this can lead to poorer educational outcomes, reduced employment prospects and an increased likelihood of social, emotional and mental health issues. Early detection can significantly reduce acute health care and ongoing social costs; however, the current system is overburdened and 15% of children wait more than one year for professional assessment. We aim to develop a mobile speech assessment tool that child carers can use to test for potential speech disorders in young children. We investigate and incorporate gamification to encourage children to speak a set of specific words, which can then be processed to provide an assessment of developmental issues.

The tool provides decision support and recommendations about whether professional follow-up is needed. This will allow better identification of children who need early speech pathology intervention, reduce the specialist time needed in the assessment process, shorten long waiting lists, and help bridge social inequities in service access. The system incorporates signal processing and machine learning techniques to automatically identify whether words are correctly pronounced, using phoneme segmentation, classification and diagnostics. Classification outcomes are compared to age- and gender-specific norms to determine the likelihood of specific issues and whether a referral should be recommended.
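The norm-referenced referral step described above can be sketched as follows. This is an illustrative example only: the norm table, score scale, cut-off value and function names are assumptions for the sketch, not the project's actual model or data.

```python
# Hypothetical sketch of a norm-referenced referral decision.
# All norms and thresholds below are made-up illustrative values.

from dataclasses import dataclass


@dataclass
class Norm:
    mean: float  # mean pronunciation-accuracy score for this age/gender group
    sd: float    # standard deviation of scores within the group


# Illustrative norms, keyed by (age in years, gender).
NORMS = {
    (4, "f"): Norm(mean=0.82, sd=0.07),
    (4, "m"): Norm(mean=0.80, sd=0.08),
    (5, "f"): Norm(mean=0.88, sd=0.05),
    (5, "m"): Norm(mean=0.86, sd=0.06),
}


def recommend_referral(score: float, age: int, gender: str,
                       z_cutoff: float = -1.5) -> bool:
    """Recommend professional follow-up when the automatically scored
    pronunciation accuracy falls well below the age/gender group norm."""
    norm = NORMS[(age, gender)]
    z = (score - norm.mean) / norm.sd  # standardise against group norm
    return z < z_cutoff
```

In this sketch a child whose score sits more than 1.5 standard deviations below the group mean is flagged for referral; in practice such a cut-off would be chosen to balance missed cases against unnecessary referrals.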

Past projects:

Assistive Rehabilitation

Stroke is a leading cause of severe disabilities that require intense rehabilitation. The aim of our project is to develop a new low-cost robotic system to improve arm function in stroke patients who have limited upper limb movement. This system will fill an important gap in treatment offerings for people with little to no upper limb movement after stroke, for whom regular treatments are often unsuitable. The system provides real-time visual and sensory feedback and, through eye gaze tracking, lets participants steer the direction of the assisted movements by simply looking at objects. Movements are triggered through a Brain-Computer Interface that detects the intention to carry out upper limb movements. Hundreds of repetitions are required for rehabilitation to take effect, strengthening neural pathways and improving motor function, so the system is designed with gaming technology to increase the motivation to perform the required movements. In this project we are partnering with the National Stroke Foundation, the Menzies Institute, the Royal Hobart Hospital, and the Department of Health and Human Services.
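The interplay of gaze steering and BCI triggering described above can be sketched as a simple control loop. This is a conceptual illustration under stated assumptions, not the project's control software: the function names, the intention-probability interface and the step size are all hypothetical.

```python
# Illustrative control step for gaze-steered, BCI-triggered assistance.
# Assumptions: gaze_target is the 2-D position of the object the participant
# is looking at (or None), and intention_probability comes from a hypothetical
# BCI classifier that detects movement intention.

def assist_step(gaze_target, intention_probability, hand_pos,
                intent_threshold=0.8, step_size=0.05):
    """Move the assisted hand one small step toward the gazed-at object,
    but only while the BCI detects a clear movement intention."""
    if gaze_target is None or intention_probability < intent_threshold:
        return hand_pos  # no assistance without a detected intention
    dx = gaze_target[0] - hand_pos[0]
    dy = gaze_target[1] - hand_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < step_size:
        return gaze_target  # close enough: snap to the target
    # advance a fixed step along the direction of the gazed-at target
    return (hand_pos[0] + step_size * dx / dist,
            hand_pos[1] + step_size * dy / dist)
```

Calling such a step repeatedly would produce the repetitions central to the therapy, with each movement both intended by the participant (via the BCI) and directed by them (via gaze).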


Figure 1: Left: usual setup of robotics supported rehabilitation system with visual stimuli being presented on an external computer screen; right: our system with shared frame of reference – stimuli being presented on an integrated screen underneath the moving hand.



Figure 2: Development of algorithms for interpreting EEG-based brain signals.



Figure 3: System prototype.

Cognitive Elements of Visual Health

Introduction: Visual health is a key facilitator of productivity and wellbeing in people of all ages. While a subset of the population maintains healthy vision throughout their adult life, many people experience visual dysfunction to varying degrees. Physiological indicators of visual dysfunction are well characterized; however, the cognitive elements of visual dysfunction are much less understood.

There is ample evidence that cognitive and neurological processes have a profound impact on visual performance. A simple example is that when one fixates steadily on a point, unchanging stimuli in the periphery fade from view due to adaptation of the neurons involved ("Troxler's fading"). It is likely that visual habits formed over time affect how the eyes perform, which in turn carries immediate costs in terms of classroom performance, visual health, and ophthalmological intervention. By learning how visual performance is linked to cognitive parameters, we will better understand why eyesight begins to fail under certain conditions, and potentially how to reduce this problem. This research can provide an alternative perspective on visual health and could lead to the early detection of warning signs. Early detection of a disorder could allow for remediation, which offers advantages for human health, education, and the economy.

This work aims to develop a battery of tests, using modern psychophysiological tools, to enable a thorough understanding of the cognitive elements of vision and how these relate to visual health. These tests can then be used to make predictions and comparisons across populations. Considering that vision accounts for a major portion of the human cortex, it is vital that we take into account the cognitive and neurological, as well as the ocular, aspects of vision. The operating hypothesis is that certain aspects of visual-cognitive performance may be associated with specific visual conditions. If we can quantify these elements, we can begin to develop novel methodologies for identifying and categorizing visual dysfunction.