This Canadian Genius Created Modern AI

Channel: Bloomberg Quicktake: Originals

[0]
(playful music)
[5]
This is Geoff Hinton.
[7]
Because of a back condition, he hasn't been able
[9]
to sit down for more than 12 years.
[14]
I hate standing, I would much rather sit down,
[15]
but if I sit down I have a disc that comes out.
[18]
So. Okay.
[19]
Well, at least now standing desks are fashionable and--
[21]
Yeah, but I was ahead.
[22]
(laughs)
[23]
I was standing when they weren't fashionable.
[27]
Since he can't sit in a car or on a bus,
[30]
Hinton walks everywhere.
[33]
(playful music)
[39]
The walk says a lot about Hinton and his resolve.
[43]
For nearly 40 years, Hinton has been trying
[45]
to get computers to learn like people do.
[48]
A quest almost everyone thought was crazy,
[50]
or at least hopeless.
[52]
Right up until the moment it revolutionized the field.
[56]
Google thinks this is the future of the company.
[58]
Amazon thinks this is the future of the company.
[60]
Apple thinks it's the future of the company.
[62]
My own department thinks this stuff's probably nonsense
[64]
and we shouldn't be doing any more of it.
[66]
(laughs)
[67]
So, I talked everybody into it except my own department.
[71]
(playful music)
[79]
You obviously grew up in the UK,
[81]
and you had this very prestigious family
[83]
full of famous mathematicians and economists
[87]
and I was curious what that was like for you.
[89]
Yeah, there was a lot of pressure.
[91]
I think by the time I was about seven,
[94]
I realized I was gonna have to get a Ph.D.
[96]
(laughing)
[98]
Did you rebel against that?
[100]
Or you went along with it?
[101]
I dropped out every so often.
[102]
I became a carpenter for a while.
[106]
Geoff Hinton pretty early on became obsessed
[108]
with this idea of figuring out how the mind works.
[114]
He started off getting into physiology,
[117]
the anatomy of how the brain works,
[119]
then he got into psychology, and then finally,
[122]
he settled on more of a computer science approach
[125]
to modeling the brain, and got into artificial intelligence.
[130]
My feeling is, if you want to understand
[133]
a really complicated device like a brain,
[136]
you should build one.
[138]
I mean, you can look at cars,
[139]
and you could think you could understand cars.
[140]
When you try to build a car, you suddenly discover
[142]
that there's this stuff that has to go under the hood,
[145]
otherwise it doesn't work.
[146]
Yeah. (laughs)
[148]
As Geoff was starting to think about these ideas,
[151]
he got inspired by some AI researchers across the pond.
[155]
Specifically, this guy: Frank Rosenblatt.
[159]
Rosenblatt, in the late 1950s,
[162]
developed what he called a perceptron,
[165]
and it was a neural network, a computing system
[169]
that would mimic the brain.
[172]
The basic idea is a collection
[175]
of small units, called neurons.
[177]
These are little computing units,
[179]
but they're actually modeled on the way
[181]
that the human brain does its computation.
[184]
They take their incoming data like we do from our senses,
[187]
and they actually learn, so the neural net
[189]
can learn to make decisions over time.
[194]
Rosenblatt's hope was that you could feed
[196]
a neural network a bunch of data,
[199]
like pictures of men and women,
[201]
and it would eventually learn how to tell them apart.
[203]
Just like humans do.
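To make Rosenblatt's idea concrete, here is a minimal sketch of a single-layer perceptron in Python: one layer of weights, a hard threshold, and an error-driven update rule in the spirit of his design. The toy AND-gate data, learning rate, and epoch count are illustrative choices, not anything from the film.

def train_perceptron(samples, epochs=20, lr=0.1):
    # Learn one weight per input plus a bias: a weighted sum, a hard
    # threshold, and a small nudge whenever the prediction is wrong.
    n_inputs = len(samples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            prediction = predict(weights, bias, inputs)
            error = target - prediction  # -1, 0, or +1
            # Nudge the weights toward whatever would have reduced the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    # A single "neuron": fire (1) if the weighted sum clears the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

if __name__ == "__main__":
    # Toy, linearly separable task (logical AND); illustrative data only.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train_perceptron(data)
    print([predict(w, b, x) for x, _ in data])  # prints [0, 0, 0, 1]

Running this prints [0, 0, 0, 1]: the single layer learns the AND of each input pair. The catch, as the next part of the story explains, is how little a single layer like this can actually do.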
[208]
There was just one problem: it didn't work very well.
[212]
Rosenblatt's neural network was a single layer
[216]
of neurons, and it was limited in what it could do.
[220]
Extremely limited.
[222]
And a colleague of his wrote a book in the late 60s
[226]
that showed these limitations.
[230]
And, it kind of put the whole area of research
[234]
into a deep freeze for a good 10 years.
[236]
No one wanted to work in this area.
[239]
They were sure it would never work.
[241]
Well, almost no one.
[244]
It was just obvious to me that everything
[246]
was about ready to go.
[247]
The brain's a big neural network,
[249]
and so, it has to be that stuff like this can work,
[252]
because it works in our brains.
[254]
There's just never any doubt about that.
[256]
And what do you think that it was inside of you
[259]
that kept you wanting to pursue this
[261]
when everyone else was giving up?
[262]
Just, that you thought it was the right direction to go?
[265]
No, that everyone else was wrong.
[266]
Okay.
[267]
(laughs)
[269]
(upbeat music)
[271]
Hinton decides he's got an idea
[273]
of how these neural nets might work,
[275]
and he's going to pursue it no matter what.
[279]
For a little while, he's bouncing around
[281]
research institutions in the US.
[283]
He kind of gets fed up that most of them
[286]
were funded by the Defense Department,
[288]
and he starts looking for somewhere else he can go.
[291]
I didn't want to take Defense Department money.
[294]
I sort of didn't like the idea that this stuff
[296]
was going to be used for purposes
[298]
that I didn't think were good.
[300]
He suddenly hears that Canada might be interested
[303]
in funding artificial intelligence.
[305]
And that was very attractive,
[308]
that I could go off to this civilized town,
[310]
and just get on with it.
[312]
So I came to the University of Toronto.
[315]
And then in the mid-80s, we discovered
[316]
how to make more complicated neural nets
[318]
so they could solve those problems
[320]
that the simple ones couldn't solve.
[322]
He and his collaborators developed
[324]
a multi-layered neural network, a deep neural network.
[329]
And this started to work in a lot of ways.
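A classic example of what a single layer cannot do is XOR: output 1 exactly when the two inputs differ. Add a hidden layer and it becomes easy. The sketch below uses hand-picked weights, chosen purely for illustration, to show a two-layer sigmoid network computing XOR; the mid-80s advance the film refers to, error backpropagation, is a procedure for learning weights like these from data instead of wiring them in by hand.

import math

def sigmoid(z):
    # Squash to (0, 1); with large weights it behaves almost like a hard threshold.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # One fully connected layer of sigmoid neurons.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden unit 1 acts like OR, hidden unit 2 like NAND (hand-picked weights).
    hidden = layer([x1, x2], weights=[[20, 20], [-20, -20]], biases=[-10, 30])
    # The output unit ANDs the hidden units: OR(x) AND NAND(x) == XOR(x).
    output = layer(hidden, weights=[[20, 20]], biases=[-30])
    return round(output[0])

if __name__ == "__main__":
    print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]

The single-layer sketch shown earlier could never learn this mapping, no matter how long it trained; with one hidden layer, the function falls out in a few lines.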
[333]
Using a neural network, a guy named
[335]
Dean Pomerleau built a self-driving car in the late 80s.
[339]
And it drove on public roads.
[342]
Yann LeCun, in the 90s, built a system
[345]
that could recognize handwritten digits,
[347]
and this ended up being used commercially.
[351]
But again, they hit a ceiling.
[353]
(upbeat music)
[356]
It didn't work quite well enough,
[357]
because we didn't have enough data,
[358]
we didn't have enough compute power.
[361]
And people in AI and computer science,
[364]
decided that neural networks
[365]
were wishful thinking, basically.
[368]
So, it was a big disappointment.
[372]
Through the 90s, into the 2000s,
[374]
Geoff was one of only a handful of people on the planet
[378]
who were still pursuing this technology.
[382]
He would show up at academic conferences
[384]
and be banished to the back rooms,
[387]
he was treated, really, like a pariah.
[390]
Was there like a time when you thought
[392]
this just wasn't going to work?
[394]
And you had some self-doubt?
[396]
I mean there were many times when I thought,
[399]
"I'm not going to make this work."
[400]
(laughs)
[402]
But Geoff was consumed by this and couldn't stop.
[406]
He just kept pursuing the idea
[408]
that computers could learn.
[410]
Until about 2006, when the world catches up
[414]
to Hinton's ideas.
[416]
(upbeat music)
[420]
Computers are now a lot faster.
[422]
And now, it's behaving like I thought
[424]
it would behave in the mid-80s.
[426]
It's solving everything.
[427]
The arrival of super-fast chips,
[430]
and the massive amounts of data produced on the internet
[433]
gave Hinton's algorithms a magical boost.
[436]
Suddenly, computers could identify what was in an image.
[441]
Then, they could recognize speech
[443]
and translate from one language to another.
[447]
By 2012, words like neural nets and machine learning
[451]
were popping up on the front page
[453]
of the New York Times.
[455]
You have to go all these years,
[456]
and then all of a sudden, in the span of a few months,
[460]
it just takes off.
[461]
Did it finally feel like aha,
[463]
the world has finally come to my vision?
[466]
It was sort of a relief that people
[468]
finally came to their senses.
[469]
(laughs)
[470]
(gentle music)
[473]
For Hinton, this was clearly a redemptive moment
[477]
after decades of toil.
[480]
And for Canada, it meant something even bigger.
[485]
Hinton and his students put the country on the map
[488]
as an AI superpower,
[491]
something no one, and no computer,
[494]
could ever have predicted.