How I Fail S01E21: Ian Goodfellow (PhD’14, Computer Science)

Ian Goodfellow is a staff research scientist on the Google Brain team, where he leads a team of researchers studying adversarial techniques in AI. He was included in MIT Technology Review’s “35 under 35” as the inventor of generative adversarial networks. He is the lead author of the MIT Press textbook Deep Learning. You can find out more about Ian on his website and on Twitter.

1. Hi Ian – thanks for joining How I Fail! I have to admit, I almost failed by being too scared to invite you, so I’m very excited you agreed. Can you tell us a little bit about yourself?

Thanks for inviting me!

I’m an AI researcher at Google and I lead a team of other researchers. We’re working to understand failures of AI better so that we can establish clear engineering principles for responsible AI development. I spend most of my own time studying how to make AI secure—for example, how to make sure that malicious attackers can’t fool AI-based systems into doing what the attacker wants instead of what the designers intended.

2. On Twitter you posted a list of rejections – can you elaborate on these a bit, for an “unofficial bio” of sorts?

My colleague Moritz Hardt tweeted to remind everyone that it was the “time of the year to keep in mind that the typical start of a successful academic career is getting rejected from a bunch of good grad schools.”

I replied with a list of grad schools and fellowship stipends that I was rejected from.

That one story probably is not much of a bio. Here is more of a bio mentioning some failures along the way:

  • As an undergrad at Stanford, I struggled in biology and chemistry classes while preparing for a career in neuroscience. I got OK grades, but I didn’t think I was doing well enough to become a professor.
  • After I changed my focus to computer science, I applied for several internships as an undergrad. Notably, Google rejected me from an internship.
  • I once applied for a summer internship with a Stanford professor. My transcript was included in my application. He replied “Why do you have an A in my class?” It turned out I wasn’t actually meant to have an A in his class. I thought there had been a generous curve, but there had only been a computer glitch. The result of my internship application was that Stanford downgraded my transcript.
  • Other large tech companies gave me internship offers, but not to work in machine learning or computer vision. On my CV, you don’t see these failures, just the eventual successes. (I’m very grateful that Willow Garage and Stanford’s CURIS program gave me the chance to work on vision for robotics during summer internships.)
  • In both my master’s and my PhD, I spent most of my time without an outside fellowship. This meant I had to work as a teaching assistant or work on specific paying grants rather than focusing primarily on my research interests. I continually applied for fellowships like Quebec’s PBEEE. I spent 2009–2013 trying and failing to get open-ended funding until 2013, when Google gave me the first PhD Fellowship in Deep Learning.
  • Vision conferences like ECCV rejected most of the papers I wrote before my PhD. I did a lot of work on perception for robots that never saw the light of day.

3. What factors do you think helped you overcome these setbacks?

In high school, I spent three years on my school’s debate team, coached by two really great teachers: Kerry Koda and Thomas King.

I’ve been surprised how many different aspects of a career in science my debate experience was able to help me with. In terms of overcoming setbacks, debate is useful because debaters all learn how to deal emotionally with failure. Every debate round has a winner and a loser. No one is so good that they always win. If you stick with debate for very long, you quickly get used to the idea of losing a round and then immediately going to another classroom and doing another round, failing to place at a tournament and then immediately going to another tournament the next weekend. You learn not to ruminate and beat yourself up. Also, your expectations get adjusted a lot. You get used to having a constant stream of both failures and successes.

4. There have been some responses to CV of Failures being a humblebrag or a sign of privilege – what would your answer to that be?

When I tweeted about this before, people didn’t react that way. A lot of people thanked me for sharing my rejections. I can definitely understand why people would see this as a humblebrag, but I think most people also understand that I’m doing this to help other people escape impostor syndrome.

5. On the opposite end, do you think that with failure being common, people might decide not to share their successes?

No, we’re all basically forced to share our successes, whether for performance reviews at work, in grant applications, etc. When submitting papers to conferences and journals, everyone is strongly incentivized to show the upsides of their paper and try to sweep the downsides under the rug. I personally fight this incentive as much as possible, but I don’t imagine it going away in general anytime soon.

6. So far we are talking about successes and failures as things with discrete decisions, like positions and publications. Are there also other things that we can fail at?

I actually think that most discrete points of failure (acceptance / rejection to a specific grad school, or acceptance / rejection of a paper submitted to a conference, etc.) do not matter all that much.

I tweeted about being rejected from a lot of graduate schools, but that was fine, because I was accepted to a lot of others.

For example, in 2009, the largest obstacle for me was not that I had been rejected from some top schools, like MIT and Carnegie Mellon. The largest obstacle for me was that I wasn’t sure I would be able to do the research I wanted to at other top schools where I was accepted, like Stanford and Berkeley. It wasn’t clear who my advisor would be at either school (because there is a rotation program for new PhD students; the advisor is not assigned at the time of your admission offer), and relatively few potential advisors at these schools were supportive of deep learning research. I overcame this obstacle by going to Université de Montréal, with Yoshua Bengio locked in as my advisor ahead of time.

Probably the failure I consider the biggest is that I spent most of my PhD trying to solve supervised learning for computer vision using unsupervised feature learning methods, and was caught totally off guard when Alex, Ilya, and Geoff won the ImageNet contest with purely supervised methods. I think that, in general, wasting time writing papers that turn out to be dead ends is the main way that I fail in my own eyes. Especially now that it’s normal to post papers on arXiv, I consider my work a success if it influences other researchers, even if it gets rejected from a conference, and I consider my work a failure if it has little influence, even if it gets accepted to a conference.

7. Is there anything that you feel like you are currently failing at, or you are hesitant about in the future?

From my own point of view: I’ve been working on understanding why neural networks are easily confused by small perturbations to their input (both through my own direct research effort and by working to promote interest in the topic so that other researchers will solve it) for nearly 4 years, and still no one knows how to make a model with high accuracy in this setting.
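As a toy illustration of the fragility described here (an assumed sketch, not Ian’s research code): for a linear model, a fast-gradient-sign-style attack nudges every input feature by a small amount in the worst-case direction, and the prediction flips even though no individual feature changes much. All names below are illustrative.

```python
import numpy as np

# Hedged toy sketch: a linear "classifier" with score = w @ x and
# sign(score) as the predicted label. Weights stand in for a trained model.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # stand-in for trained model weights
x = rng.normal(size=100)   # an input the model currently classifies
score = w @ x
label = np.sign(score)

# For a linear model the gradient of the score w.r.t. x is just w, so an
# attacker moves every feature by epsilon against the current label.
# Here epsilon is chosen just large enough to flip the decision.
epsilon = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - epsilon * label * np.sign(w)

print(f"per-feature perturbation: {epsilon:.3f}")
print("original label:", label, " adversarial label:", np.sign(w @ x_adv))
```

Deep networks are not linear, but empirically they show the same behavior: many small, coordinated per-feature changes add up to a large change in the output, which is why defending against such perturbations is hard.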

From the point of view of traditional metrics of career success: The reviews of my papers submitted to ICML this year were particularly brutal and I expect most of them to be rejected.

Another thing I think is definitely worth mentioning: the way that I work, I rapidly try out several ideas to see if they show promise, and discard most of them. On a good day when I get a lot of time at my desk, I might code up 3–5 ideas and decide that none of them work. The time investment per idea is small, but I can try out a large number of quite different ideas. From this point of view, failure of specific ideas is just a constant part of my workflow.

8. When we talk about successful researchers, what do you think about the distribution of weights that are placed on things like publications? Are there some factors that tend to be overlooked?

I think that our metrics for success are causing society to miss out on whole categories of successful people.

For example, we spend a lot of time evaluating work and evaluating people but we don’t spend a lot of time evaluating the evaluation processes themselves. There’s no one whose job is to make sure that conference review processes are fair and accurate. We know from the NIPS experiment that there is a lot of noise in the reviewing process (Eric Price has shown that area chairs disagreed more often than they agreed about how to handle a paper in this process), and yet there is no one spearheading efforts to develop better reviewing processes backed by evidence that they actually work. The research community should value efforts that improve the effectiveness of the community as a whole, but so far we just don’t seem to have any way of putting value on such efforts.

9. Do you think machine learning as a field has a different relationship with failure than other fields? Does this affect different groups of people in different ways?

Machine learning has very high expectations in terms of very rapidly producing a lot of successful work and exerting influence over the firehose of everyone else’s rapidly produced work. For example, Ilya Sutskever has over 50,000 Google Scholar citations, while in mathematics, none of the four most recent Fields Medal winners has over 5,000 citations. It is very strange that in our field success is so explosive. This is probably in part because we use arXiv so much rather than being more primarily focused on peer-reviewed publications. To be honest, I don’t know a lot about how it affects different groups of people.

10. What are your thoughts on negative results in machine learning?

I think it’s hard to extract value from negative results in machine learning because it can be so hard to tell what caused the negative result. A negative result might point to something fundamentally wrong with an idea, but it might also just be the result of a very small software bug, a poor choice of hyperparameter values to try out, too small a model, etc.

11. If you could reach all senior academics in ML, what would you tell them?

If I could reach all senior academics / conference organizers / journal editors in ML, I’d tell them: The community needs to have a better way of settling disputes over sharing credit for ideas.

Currently, these are mostly settled by the person who feels they have not been given appropriate credit directly contacting the author of the publication that fails to give them credit. This works if the issue was a simple oversight (author of the new publication wasn’t aware of the old publication) but usually if the two parties disagree it can turn ugly. When there’s no central authority and persuasion fails, individuals fall back to a carrot or a stick, and most people do not have much of a carrot to offer.

This kind of experience is especially stressful if a senior, famous professor demands credit from a junior researcher, such as a PhD student.

As my work has become more well known in the machine learning community, I’ve spent more and more of my time dealing with this kind of conflict.

It would be much better if a conference or journal offered a centralized place to have these conflicts adjudicated efficiently by neutral third parties.

12. What is the best piece of advice you could give to your past self?

I wish I’d used some of those GPUs I bought for deep learning to mine some bitcoin.
