Not-so-supervised learning of algorithms

About a month ago I gave a talk at UC Dublin, titled “Not-so-supervised learning of algorithms and academics”. I talked both about my research and about things I’ve learned through Twitter and my blog. The slides are available here, but to give some context to everything, I thought I would write a few things about it. In this post I discuss the first part of the talk – not-so-supervised learning of algorithms. All links below are to publishers’ versions, but on my publications page you can find PDFs.

Not-so-supervised learning of algorithms

Machine learning algorithms need examples to learn how to transform inputs (such as images) into outputs (such as categories). For example, an algorithm for detecting abnormalities in a scan would typically need scans where such abnormalities have been annotated. Such annotated datasets are difficult to obtain. With too few annotated examples, an algorithm can learn input-output patterns that only hold for the training examples, but not for future test data (overfitting).

There are various strategies to address this problem. I have worked on three of them:

  • multiple instance learning
  • transfer learning
  • crowdsourcing

Multiple instance learning

The idea in multiple instance learning is to learn from groups of examples. If you see two photos and I tell you “Person A is in both photos”, you should be able to figure out who that person is, even if you don’t know who the other people are. Similarly, we can have scans which contain abnormalities somewhere (but we are not sure where), and figure out what these scans have in common that we cannot find in healthy scans. During my PhD I worked on such algorithms and on applying them to detecting COPD (chronic obstructive pulmonary disease).
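To make this concrete, here is a minimal sketch of one of the simplest multiple instance learning baselines: propagate each bag’s label to its instances, train an ordinary classifier, and call a bag abnormal if its most suspicious instance is. The data is synthetic and this is a common baseline, not the specific method from the papers.

```python
# Minimal multiple instance learning sketch (synthetic data; a common
# baseline, not the exact method from the papers). Each bag (e.g. a scan)
# is a set of instances (e.g. image patches); only the bag label is known.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 20 bags of 10 instances with 5 features each.
# Positive bags contain a few shifted ("abnormal") instances.
bags, bag_labels = [], []
for i in range(20):
    X = rng.normal(size=(10, 5))
    y = i % 2
    if y == 1:
        X[:3] += 2.0  # a few abnormal instances somewhere in the bag
    bags.append(X)
    bag_labels.append(y)

# Naive baseline: propagate each bag's label to all of its instances...
X_inst = np.vstack(bags)
y_inst = np.concatenate([[y] * len(b) for b, y in zip(bags, bag_labels)])
clf = LogisticRegression(max_iter=1000).fit(X_inst, y_inst)

# ...then label a bag by its most suspicious instance (the "max" rule).
bag_scores = [clf.predict_proba(b)[:, 1].max() for b in bags]
print(np.round(bag_scores, 2))
```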

Transfer learning

Another strategy is called transfer learning, where the idea is to learn from a related task. If you are learning to play hockey, perhaps other things you already know, such as playing football, will help you learn. Similarly, we can first train an algorithm on a larger source dataset, like scans from a different hospital, and then further train it on our target problem. Even a seemingly unrelated task, like recognizing cats, can be a good source task for medical data.
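As an illustration, here is a rough sketch of this pretrain-then-fine-tune recipe in PyTorch. It assumes torchvision is available; the target data loader is a hypothetical placeholder, and this is one common recipe rather than the exact setup from the papers.

```python
# Minimal transfer learning sketch: start from a network pretrained on
# natural images (the "cats" source task) and adapt it to a target task.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze the pretrained features so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the target task, e.g. normal vs. abnormal scan.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Fine-tune on the (hypothetical) target data: batches of scans and labels.
# for images, labels in target_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```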

There are several relationships between multiple instance learning and transfer learning. To me, it feels like they both constrain what our algorithm can learn, preventing it from overfitting. Multiple instance learning is itself also a type of transfer learning, because we are transferring from a task with global (bag-level) information to a task with local (instance-level) information. You can read more about these connections here.

Crowdsourcing

A different strategy is to gather more labels using crowdsourcing, where people without specific expertise label images. This has been successful in computer vision for recognizing cats, but there are also promising results in medical imaging. I have had good results with outlining airways in lung scans and with describing visual characteristics of skin lesions. Currently we are looking into whether such visual characteristics can improve the performance of machine learning algorithms.
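One practical question with crowdsourcing is how to combine labels from several annotators into a single label per image. A simple majority vote, sketched below, is a common baseline (not necessarily the aggregation used in the studies above).

```python
# Minimal sketch of aggregating crowdsourced labels by majority vote.
import numpy as np

# Hypothetical data: rows are images, columns are crowd annotators,
# entries are binary labels (e.g. "lesion has this visual characteristic").
votes = np.array([
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
])

# Majority vote per image; ties are broken towards the negative class here.
consensus = (votes.mean(axis=1) > 0.5).astype(int)
print(consensus)  # [1 0 1]
```

More elaborate aggregation schemes also weight annotators by their estimated reliability, but majority voting is a reasonable first step when annotators are roughly comparable.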

***

This was an outline of the first part of the talk – stay tuned for the not-so-supervised learning of academics next week!
