Machine learning algorithms learn from examples in order to make predictions about novel data. In medical imaging, annotations can be difficult to acquire, and labeled datasets are therefore still often small. My research focuses on various ways to overcome this challenge, including crowdsourcing, transfer learning, and multiple instance learning.
A method that has been very successful in computer vision is to gather more annotations by outsourcing the task to the crowd, i.e. crowdsourcing. Although crowd annotators do not have medical training, several studies show that good results can be achieved by combining their answers. So far I have investigated crowdsourcing for measuring airways in chest CT images and for quantifying visual attributes of skin lesion images. I am interested in studying how to best combine crowd annotations with already available expert labels, as well as how the task design affects both the quality of the annotations and the annotator experience.
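As a minimal sketch of combining crowd answers, one common choice for repeated numeric annotations (such as airway measurements) is to take the median, which is robust to an occasional careless annotator. The function name and the minimum-vote threshold below are illustrative, not from any specific study:

```python
import statistics

def aggregate_crowd_measurements(measurements, min_votes=3):
    """Combine repeated crowd measurements of one structure (e.g. an
    airway diameter) into a single estimate. The median is robust to
    the occasional careless or adversarial annotator."""
    if len(measurements) < min_votes:
        raise ValueError("too few annotations to aggregate reliably")
    return statistics.median(measurements)

# Five crowd workers measure the same airway (in mm); one is far off.
print(aggregate_crowd_measurements([2.1, 2.3, 2.2, 9.0, 2.0]))  # → 2.2
```

More elaborate aggregation schemes weight annotators by their estimated reliability, but even this simple pooling already illustrates why combined crowd answers can approach expert quality.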
Transfer learning aims to improve learning on a task by leveraging information from a different but related task. One example is multi-task learning. In medical imaging, this scenario can be used for learning to classify multiple abnormalities, or for learning from multiple experts before combining their labels into a consensus.
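One simple way to combine multiple experts' labels into a consensus is a per-item majority vote. A minimal sketch, with hypothetical labels from three annotators:

```python
from collections import Counter

def consensus_labels(expert_labels):
    """Combine per-expert label lists into a majority-vote consensus,
    one of the simplest ways to merge several experts' annotations."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*expert_labels)]

# Three (hypothetical) experts label the same four images.
expert_a = [1, 0, 1, 1]
expert_b = [1, 0, 0, 1]
expert_c = [0, 0, 1, 1]
print(consensus_labels([expert_a, expert_b, expert_c]))  # → [1, 0, 1, 1]
```

A multi-task alternative would instead keep one output head per expert and merge predictions only at test time, which preserves information about inter-expert disagreement.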
Another example is training a classifier on an external dataset before continuing to train it on the target task. Recent results have shown that the external dataset can be seemingly unrelated to the target task and still improve performance. A question here is what properties the datasets should have for this type of transfer to be successful.
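The pretrain-then-fine-tune recipe can be sketched with any model whose training can be warm-started from existing weights. Below is a minimal illustration using plain logistic regression on synthetic data (all data and parameters are invented for the example): the model is first trained on a large "external" dataset, and those weights are then used as the starting point for training on a small target dataset.

```python
import numpy as np

def train_logreg(X, y, w=None, epochs=200, lr=0.5):
    """Logistic regression by gradient descent. Passing in `w`
    warm-starts training from existing weights, which is the essence
    of fine-tuning: continue training a pre-trained model on the target task."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

rng = np.random.default_rng(0)
# "External" source task: plenty of labeled data.
Xs = rng.normal(size=(500, 5)); ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
# Target task: a related decision boundary, but only a handful of labels.
Xt = rng.normal(size=(20, 5)); yt = (Xt[:, 0] + 0.8 * Xt[:, 1] > 0).astype(float)

w_src = train_logreg(Xs, ys)             # pre-train on the external dataset
w_fine = train_logreg(Xt, yt, w=w_src)   # fine-tune on the target task
```

With deep networks the same idea is usually applied to the early feature-extraction layers, which is where seemingly unrelated external datasets may still contribute useful structure.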
Multiple instance learning
In medical imaging, labels are often available only for entire scans (bags), and not for regions of interest (instances). This creates challenges for supervised learning, and it is the setting that multiple instance learning (MIL) addresses. Currently I'm interested in similarities between different MIL problems, especially across different areas of medical imaging or between medical and non-medical imaging. For example, do the same assumptions hold for each problem? How can we choose a method when we encounter a new dataset? I am also interested in looking at the different ways in which MIL algorithms are evaluated, and the trade-offs between them.
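The role of MIL assumptions can be made concrete with instance-score pooling. Under the standard assumption a bag is positive if at least one instance is positive (max-pooling); under a "collective" assumption every instance contributes (mean-pooling). A minimal sketch, with invented scores:

```python
import numpy as np

def bag_predict(instance_scores, assumption="standard"):
    """Turn instance-level scores into one bag-level score.
    'standard': a bag is positive if any instance is (max-pooling).
    'collective': all instances contribute equally (mean-pooling)."""
    pool = np.max if assumption == "standard" else np.mean
    return float(pool(np.asarray(instance_scores)))

scores = [0.1, 0.2, 0.9, 0.05]  # e.g. per-region abnormality scores in one scan
print(bag_predict(scores))                  # max-pooling  → 0.9
print(bag_predict(scores, "collective"))    # mean-pooling → 0.3125
```

The two assumptions can disagree sharply on the same scan: a single highly abnormal region dominates under max-pooling but is diluted under mean-pooling, which is one reason method choice depends on which assumption actually holds for a new dataset.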
For more information, please see my publications.