Artificial intelligence improves biomedical imaging
ETH researchers use artificial intelligence to improve the quality of images recorded by a relatively new biomedical imaging method. This paves the way towards more accurate diagnoses and more cost-effective devices.
Scientists at ETH Zurich and the University of Zurich have used machine learning methods to improve optoacoustic imaging. This relatively young medical imaging technique can be used for applications such as visualizing blood vessels, studying brain activity, characterizing skin lesions and diagnosing breast cancer. However, the quality of the rendered images depends heavily on the number and distribution of sensors used by the device: the more sensors, the better the image quality. The new approach developed by the ETH researchers allows the number of sensors to be reduced substantially without sacrificing image quality. This makes it possible to lower device costs, increase imaging speed or improve diagnosis.
Optoacoustic imaging is similar in some respects to ultrasound imaging. In ultrasound, a probe sends ultrasonic waves into the body, which are reflected by the tissue; sensors in the probe detect the returning sound waves, from which an image of the inside of the body is generated. In optoacoustic imaging, very short laser pulses are sent into the tissue instead, where they are absorbed and converted into ultrasonic waves. As in ultrasound imaging, these waves are then detected and converted into an image.
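One common way to turn the detected waves into an image is delay-and-sum backprojection: each pixel sums the signal samples whose time of flight matches the pixel-to-sensor distance. The sketch below is an illustrative, generic reconstruction written for this article, not the team's actual algorithm; the function name, grid, sampling rate and speed of sound are assumptions. It also hints at why sensor count matters: with fewer channels contributing to the sum, each pixel is less well determined and artefacts appear.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, fs, grid_x, grid_y, c=1500.0):
    """Backproject recorded signals onto an image grid.

    signals:    array of shape (n_sensors, n_samples)
    sensor_pos: array of shape (n_sensors, 2), sensor coordinates in metres
    fs:         sampling rate in Hz; c: assumed speed of sound in m/s
    """
    image = np.zeros((len(grid_y), len(grid_x)))
    xx, yy = np.meshgrid(grid_x, grid_y)
    for s, (sx, sy) in enumerate(sensor_pos):
        # Distance from this sensor to every pixel -> time of flight -> sample index
        dist = np.sqrt((xx - sx) ** 2 + (yy - sy) ** 2)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, signals.shape[1] - 1)
        image += signals[s, idx]
    return image / len(sensor_pos)
```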
Correcting for image distortions
The team led by Daniel Razansky, Professor of Biomedical Imaging at ETH Zurich and the University of Zurich, looked for a way to enhance the image quality of low-cost optoacoustic devices that have only a small number of ultrasonic sensors.
To do this, they started with a high-end optoacoustic scanner of their own design with 512 sensors, which delivered superior-quality images. They had these pictures analyzed by an artificial neural network, which learned the features of the high-quality images.
Next, the researchers discarded the majority of the sensors so that only 128 or 32 remained, which noticeably degraded the image quality. Because of the missing data, distortions known as streak-type artefacts appeared in the images. It turned out, however, that the previously trained neural network was able to largely correct these distortions, bringing the image quality close to that of the measurements obtained with all 512 sensors.
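A minimal sketch of how such an artifact-correction network could be trained is shown below, assuming paired data: a sparse reconstruction from 32 or 128 channels as input and the corresponding 512-sensor reconstruction as the target. The network layout, residual formulation, loss and optimiser are illustrative choices, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class ArtifactCorrector(nn.Module):
    """Small convolutional network that predicts a correction for streaky images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # Add the predicted residual to the sparse-sensor input image
        return x + self.net(x)

def train(model, loader, epochs=10):
    """loader yields pairs (sparse, full): reduced-sensor and 512-sensor images."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for sparse, full in loader:
            opt.zero_grad()
            loss = loss_fn(model(sparse), full)
            loss.backward()
            opt.step()
    return model
```

Once trained, the same model can be applied to new sparse reconstructions to suppress the streak artefacts without ever seeing the full sensor array at imaging time.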
In optoacoustics, image quality increases not only with the number of sensors used but also when the information is captured from as many directions as possible: the larger the sector in which the sensors are arranged around the object, the better the quality. The machine learning algorithm was also successful in improving the quality of images recorded from just a narrowly circumscribed sector. “This is particularly important for clinical applications, as the laser pulses cannot penetrate the entire human body, hence the imaged region is normally only accessible from one direction,” says Razansky.
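Such a limited-view situation can be mimicked from full-ring data by keeping only the channels that lie within a chosen angular sector, for instance when generating training pairs. The helper below is a hypothetical illustration; the angle convention and sector width are assumed parameters.

```python
import numpy as np

def limited_view(signals, sensor_angles, center_deg=0.0, width_deg=90.0):
    """Keep only the channels whose angular position falls inside a sector.

    signals:       array of shape (n_sensors, n_samples)
    sensor_angles: array of shape (n_sensors,), sensor angles in degrees
    """
    # Wrapped angular difference to the sector centre, in [-180, 180)
    rel = (sensor_angles - center_deg + 180.0) % 360.0 - 180.0
    keep = np.abs(rel) <= width_deg / 2.0
    return signals[keep], sensor_angles[keep]
```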