AI detectives are cracking open the black box of deep learning - YouTube
Channel: Science Magazine
Say, anytime you're talking to your phone, that's pretty much a neural net doing that. It does really well on image recognition, autonomous cars are coming soon, and the top-flight method for genetic sequencing is a neural network.
You can say they're loosely inspired by the brain. It is a network of neurons, or at least people call them neurons. They're all connected to each other, going down through the layers. In a way they mimic real neurons in that they're little triggers: each neuron has a kind of threshold, called a weight, where it makes a decision.
So it takes in data; it could be a huge amount of images. And then, to train the network, you put one image in at the front. The neurons fire when they see the thing that they want to see, so you get this kind of cascade moving through the neural network. And at first that does really poorly; it just does horribly.
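A minimal sketch of that forward cascade, assuming a plain feedforward net with sigmoid "triggers"; the layer sizes and random weights below are invented for illustration:

```python
# Data enters at the front, each layer of neurons weighs its inputs
# against a threshold, and the activations ripple down the layers.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # Each neuron sums its weighted inputs, shifts by a threshold (bias),
    # and "fires" with a strength given by the sigmoid squashing function.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Three layers of random, untrained weights: this is the network that
# "does really poorly" before backpropagation tunes it.
shapes = [(16, 64), (8, 16), (2, 8)]
weights = [(rng.normal(size=s), rng.normal(size=s[0])) for s in shapes]

x = rng.normal(size=64)          # one input "image", flattened
for W, b in weights:             # the cascade: layer by layer
    x = layer(x, W, b)
print(x)                         # e.g. scores for "dog" vs. "not dog"
```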
But there's this little magic trick called back propagation, and this is very unbiological; your neurons don't do this at all. But at the end you're like: "OK, well, you were wrong. But here's a proper picture of a dog." And now you send all that information back through the network again, and the network gets just a little bit better at saying: "Oh, this is a dog." It's elegant math, and really the breakthroughs in the field came when they stopped trying to be so biological in the 1980s.
At the time they didn't work that great, but it turned out all they needed was a huge amount of processing power and a huge number of examples, and then they would really sing.
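As a rough sketch of that training loop, here is backpropagation on a toy problem: run an example forward, compare the guess with the right answer ("here's a proper picture of a dog"), and send the error backward so every weight shifts a little. The data, sizes, and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 fake "images", 10 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # fake "dog / not dog" labels

W1, b1 = rng.normal(size=(16, 10)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=16) * 0.1, 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    # Forward pass: the cascade from the previous sketch.
    h = sigmoid(X @ W1.T + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probability of "dog"

    # Backward pass: the error at the output flows back through the
    # network, assigning each weight its share of the blame.
    dout = (p - y) / len(y)         # gradient of cross-entropy loss
    dW2 = h.T @ dout
    db2 = dout.sum()
    dh = np.outer(dout, W2) * h * (1 - h)
    dW1 = dh.T @ X
    db1 = dh.sum(axis=0)

    # Each weight moves "just a little bit" against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy after 200 passes: {acc:.2f}")
```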
So in many of these cases accuracy is enough, but when you're talking really life-and-death decisions, like driving autonomous cars, making medical diagnoses, or pushing the frontiers of science to understand why a discovery was made, you really need to know what the AI is thinking, not just what its results are. Since you have all these layers, it engages in such complex decision-making that it is essentially a black box. We don't really understand how they think.
All is not lost with this black box problem; people are actually trying to solve it now in a variety of different ways. One researcher has created a toolkit to get at the activation of individual neurons in a neural network. The toolkit works by taking an individual neuron and telling the network to go back and find the input that really makes that neuron just fire like crazy. Keep doing this thousands of times and you can see the kind of perfect input for that neuron.
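A minimal sketch of that idea, often called activation maximization: freeze the trained network, start from a random image, and repeatedly nudge the image itself so one chosen neuron fires as strongly as possible. The network, layer, and unit index below are stand-ins, not the researcher's actual toolkit:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "trained" network; in practice you would load a real model.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
for p in net.parameters():
    p.requires_grad_(False)        # we optimize the image, not the weights

unit = 5                           # which neuron (channel) to probe
img = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

# "Keep doing this thousands of times": gradient ascent on the input.
for step in range(1000):
    opt.zero_grad()
    acts = net(img)                      # forward cascade
    loss = -acts[0, unit].mean()         # maximize this unit's activation
    loss.backward()                      # gradients flow back to the image
    opt.step()

# img now approximates the "perfect input" for that neuron.
print(f"final mean activation: {net(img)[0, unit].mean().item():.3f}")
```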
Then, looking inside these layers, he could see that some neurons learn really complex, abstract ideas, like a face detector: something that can detect any human face, no matter what it looks like. That is not a property you would expect an individual neuron to have. Most don't, but some do.
Many neural net researchers think of it this way: you can think of the decision-making of a neural net as a kind of terrain of valleys and peaks, where a ball represents one piece of data. So you can understand the one decision that was made in one valley, but in all those other valleys you have no idea what's going on.
So one way to get at what an AI is thinking is to find a proxy for what it's thinking. One professor has taken the video game Frogger and trained an AI to play the game extremely well, because that's fairly easy to do now. But, you know, what is it deciding to do? That's really hard to know, especially in a sequence of decisions in a dynamic environment like that.
So rather than trying to get the AI itself to explain itself, he asked people to play the video game, had them say what they were doing as they were playing it, and recorded the state of the frog at the same time. He then found a way to use neural networks to translate between those two languages: the code of the game and what the players were saying. And then he imported that back into the game-playing network, so the network was armed with these human insights. So, as the frog is waiting for a car to go by, it'll say: "Oh, I'm waiting for a hole to open up before I go." Or it could get stuck on the side of the screen and say: "Uh geez, this is really hard," and curse. So it's kind of lashing human decision-making onto this network with more deep networks.
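A rough sketch of what such a state-to-rationale "translator" could look like, assuming an encoder that summarizes the game state and a decoder that emits words. The state features, vocabulary, and architecture here are all invented stand-ins, not the professor's actual system, which was trained on recordings of people narrating their play:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab = ["<start>", "<end>", "i'm", "waiting", "for", "a", "hole",
         "to", "open", "up", "before", "go", "stuck"]
STATE_DIM, HID = 8, 32                 # e.g. frog x/y, nearby car positions

encoder = nn.Linear(STATE_DIM, HID)    # game state -> summary vector
embed = nn.Embedding(len(vocab), HID)
decoder = nn.GRUCell(HID, HID)         # unrolls the rationale word by word
to_word = nn.Linear(HID, len(vocab))

def rationale(state, max_len=8):
    # Greedy decoding: condition on the encoded state, then pick the most
    # likely next word until <end>. Untrained, this produces babble;
    # training would align it with what human players said.
    h = torch.tanh(encoder(state))
    tok = torch.tensor([0])            # <start>
    words = []
    for _ in range(max_len):
        h = decoder(embed(tok), h)
        tok = to_word(h).argmax(dim=-1)
        if vocab[tok.item()] == "<end>":
            break
        words.append(vocab[tok.item()])
    return " ".join(words)

state = torch.randn(1, STATE_DIM)      # one recorded frame of the game
print(rationale(state))
```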
It's all about trust. If you have a result and you don't understand why the model made that decision, how can you really advance in your research? You really need to know that there's not some spurious detail that's throwing things all off. But these models are just getting larger and more complex, and I don't think anyone thinks they'll get to a global understanding of what a neural network is thinking anytime soon. But if we can get even a sliver of this understanding, science can really push forward, and these neural networks can really come into play.