Our aim is to use deep learning architectures to build rich computational descriptions of ultrasound video content that are potentially useful for clinical applications such as diagnosis, information recall, training and audit.
Our current research includes:
Our aim is to enhance understanding of human visual search and navigation in clinical sonography by large-scale data analysis of simultaneously acquired ultrasound, eye-tracking and probe motion data.
We are collecting a large dataset of 1000 full obstetric ultrasound scans with sonographer eye-movement and probe motion data simultaneously acquired for this purpose. This amounts to about 750 hours of clinical workflow/skills assessment video.
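Because the ultrasound video, eye-tracking and probe motion streams are sampled at different rates, analysing them jointly requires temporal alignment. As a minimal illustrative sketch (not the project's actual pipeline; all names and sample rates are hypothetical), each video frame can be paired with the gaze sample nearest in time:

```python
import numpy as np

def align_gaze_to_frames(frame_ts, gaze_ts, gaze_xy):
    """For each video frame timestamp, return the gaze sample
    with the nearest timestamp (hypothetical alignment step).
    frame_ts, gaze_ts: sorted 1-D arrays of timestamps (seconds).
    gaze_xy: (N, 2) array of gaze coordinates, one row per gaze_ts."""
    idx = np.searchsorted(gaze_ts, frame_ts)
    idx = np.clip(idx, 1, len(gaze_ts) - 1)
    left = gaze_ts[idx - 1]
    right = gaze_ts[idx]
    # Step back one index where the left neighbour is strictly closer.
    idx -= frame_ts - left < right - frame_ts
    return gaze_xy[idx]

# Toy data: a 25 fps video stream and a ~100 Hz eye tracker.
frame_ts = np.array([0.00, 0.04, 0.08])
gaze_ts = np.array([0.00, 0.01, 0.02, 0.03, 0.05, 0.07, 0.09])
gaze_xy = np.stack([gaze_ts * 10, gaze_ts * 20], axis=1)

aligned = align_gaze_to_frames(frame_ts, gaze_ts, gaze_xy)  # shape (3, 2)
```

The same nearest-timestamp idea extends to the probe motion stream, giving one fused record per video frame for downstream analysis.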
We are studying the human visual search and navigation strategies employed by highly-trained sonographers in task-oriented scenarios.
We are studying whether minimally-trained users and experts follow different search strategies, and whether there are different strategies amongst experts.
The knowledge gleaned will inform skills assessment, clinical workflow optimisation and assistive device design.