HEWLETT PACKARD ENTERPRISE - Moral Code: The Ethics of AI - YouTube

[0]
When people wonder what AI is, they think about robots.
[4]
Robots.
[5]
Let's make a distinction.
[6]
A robot itself is hardware.
[8]
AI is software.
[10]
What is Artificial Intelligence?
[12]
Systems that mimic, perhaps even go beyond,
[15]
our own ability to think and to process information.
[19]
It determines who we want to date,
[22]
whether we get a certain kind of loan.
[25]
It operates at all levels of our economy:
[27]
in the political realm, in the judicial system,
[30]
in employment practices, in the educational system.
[33]
All of these decisions help to determine
[35]
how well or how poorly our lives go.
[37]
And increasingly, AI is involved in those decisions.
[40]
We are right to be concerned
[42]
when we are just now learning to think about
[44]
the social and ethical implications of using AI.
[48]
[Kirk] So many of our own stories are about the relationship
[50]
between the creation and the creator.
[59]
So we're at this inflection point right now.
[61]
Suddenly all of this data, these algorithms,
[64]
and the computational power is available.
[66]
That's really been the breakthrough.
[68]
Things are becoming real that we've talked about,
[71]
we've dreamt about for decades.
[73]
AI that can do a very specific, narrow task:
[76]
one that can recognize voices,
[77]
navigate a car, translate language.
[80]
And if this is going to be the ubiquitous tool,
[82]
we're doing ourselves a disservice
[84]
by not also educating the general population
[87]
on what is really going on right now.
[88]
We have so many immediate concerns about the kind of AI
[91]
that will be developed that we really need to devote
[93]
most of our educational efforts to those kinds of problems.
[101]
Artificial intelligence uses a massive amount of data
[105]
to recognize certain kinds of patterns
[107]
that are correlated with a successful result.
[110]
Hidden in these vast pools of data that we're now using
[113]
are remarkable opportunities to run off the rails.
[116]
So we have to be careful about the data we give it.
[118]
Data.
[119]
Data can be bad because it's not been properly labeled,
[122]
because it's inaccurate, because it has baked into it
[125]
certain human biases that are naturally a part of society.
[130]
One clear example of data bias is the use of algorithms
[133]
to make decisions about the fate of prisoners.
[136]
If you have enough data points, you should be able to
[138]
judge a person's likelihood of re-offending.
[142]
But the problem is
[143]
[John] if there is bias hidden in the data,
[146]
your outcome in the judicial system may be very different
[149]
than that of someone who is upper middle class and white.
[151]
Why is it projecting higher risk scores
[154]
onto black defendants?
[156]
It often turns out that the algorithm itself isn't feeding
[160]
on any data that's explicitly labeled in terms of race.
[163]
A zip code doesn't tell you anything about a person's race.
[167]
What the machine learns is, ah, people in this zip code
[171]
are more likely to be criminal, when in fact,
[173]
it may just be that they're more likely to be black or poor
[177]
or any other category that is a subject of biased practices.
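
How a "neutral" feature can smuggle bias into a risk score is easy to demonstrate. The sketch below is not from the film: it uses scikit-learn on fabricated data, and the specific numbers (a 90% zip-code-to-group correlation, re-arrest rates of 40% versus 20%) are assumptions chosen only to make the proxy effect visible.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; the model never sees this column.
group = rng.integers(0, 2, size=n)

# Zip code is strongly correlated with group (assumed 90% overlap,
# standing in for residential segregation).
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical labels reflect biased practices: group 1 is re-arrested
# more often (assumed 40% vs. 20%) for the same underlying behavior.
reoffend = (rng.random(n) < np.where(group == 1, 0.4, 0.2)).astype(int)

# Train on the "race-blind" feature only.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), reoffend)

# The model still assigns systematically higher risk to group 1,
# because zip code acts as a stand-in for group membership.
risk = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]
print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
```

Even though race never appears as a feature, the model reproduces the disparity in the historical labels, because the proxy carries that information for it.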
[182]
These technologies will actually reinforce
[185]
some of the worst aspects of our society,
[187]
which is really a great tragedy because the technology
[190]
has the potential to actually do exactly the opposite.
[193]
Machines aren't racist.
[195]
Machines aren't sexist.
[196]
Machines aren't ableist or classist.
[198]
We are.
[199]
Our biases not only get embedded
[202]
in this one program, but get scaled out rapidly.
[205]
Silicon Valley has been a magnet
[207]
for all the world's best,
[209]
but it also tends to be dominated by white males.
[212]
AI has a white guy problem.
[214]
It's going to be crucial for us
[215]
to have different kinds of diversity
[217]
be part of the development of AI.
[220]
Who builds these AI systems
[223]
and what data we use in them is vital.
[229]
What I see is a generation that takes all kinds
[233]
of life instructions from this device, whether it is
[235]
where to go to eat Korean barbecue or who to marry.
[238]
It all comes magically out of the palm of your hand,
[241]
from an algorithm that you know nothing about.
[243]
Who's behind that screen?
[244]
Who's on the other end of the wire?
[245]
What are the motivations of that algorithm?
[247]
People no longer even think to ask; it's just so convenient.
[251]
[Kirk] That ability to predict and then modify our behavior,
[256]
perhaps even without us fully understanding why or how.
[260]
If you want the machine to behave in an ethical manner,
[264]
you have to know why it does what it does.
[266]
Part of the transparency--
[267]
Transparency.
[268]
Is to be able to explain the logical steps
[271]
that it went through to arrive at a certain recommendation.
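
What "explain the logical steps" can look like in practice: with an inherently interpretable model, the branching rules behind each recommendation can be printed directly. A minimal sketch, assuming scikit-learn and its bundled iris dataset (our example, not something shown in the film):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the exact rules the model applies, e.g.
# "petal width (cm) <= 0.80" leading to class 0: the logical
# steps behind every recommendation it makes.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Deep networks offer no such readout, which is exactly the tension the next lines describe.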
[275]
When we are training one of these machine learning algorithms,
[278]
we're supplying it with a million pictures of cats.
[281]
And we just say, "That's a picture of a cat."
[284]
We don't say, "That's a picture of a cat,
[285]
"because of the ears and the tail,
[287]
"the whisker and the nose," and all that.
[288]
The machine itself starts to recognize that such and such
[291]
is a cat and such and such is not a cat.
[294]
We don't actually know how the circuit has reached
[297]
that particular set of conclusions.
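
The labels-only training the speakers describe, and the opacity that results, can be sketched in a few lines. Everything here is illustrative: the "images" are random noise standing in for cat pictures, and the network is far smaller than anything used in practice.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 200 fake 8x8 grayscale "images", flattened to 64 features apiece.
X = rng.random((200, 64))
# Supervision is a bare label: cat (1) or not (0). Nothing about
# ears, tails, or whiskers is ever stated.
y = rng.integers(0, 2, size=200)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X, y)

# Whatever the network learned is encoded in weight matrices,
# not in human-readable reasons.
print([w.shape for w in net.coefs_])  # [(64, 32), (32, 1)]
```

Inspecting those matrices yields nothing like "because of the ears and the tail," which is the sense in which we don't know how the circuit reached its conclusions.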
[300]
Hot dog, not hot dog, whatever.
[301]
That's funny.
[302]
Fire missile, not fire missile.
[303]
That's not funny.
[305]
[John] The particular technologies being rolled out
[307]
are so complex
[309]
and operate on such large data sets
[312]
that the function of the algorithm
[314]
largely eludes the designers.
[316]
And maybe that's part of the regulation
[317]
that will be built around AI.
[320]
If you cannot explain what this program has done,
[322]
it just should not be released to humankind.
[324]
And there is this movement starting where more and more
[327]
people say, "Well, we need to be able to audit this.
[330]
"We need to be able to understand what's going on."
[337]
We're about to bump into things
[338]
that we really have not thought about as a human species.
[342]
The notion that you can interact with a bot on the internet
[346]
and it can affect your propensity to vote one way or another
[350]
is a tremendous threat to democracy.
[352]
Autonomous weapons are a terrifying concept.
[355]
Drones with smart algorithms being turned into weapons.
[359]
There is a legitimate fear that the power
[362]
of artificial intelligence will be disproportionately
[365]
put in the hands of the already powerful.
[368]
Should we give it to the rich, 'cause they can afford it?
[370]
There is a popular view that over a couple of decades
[374]
we'll have no jobs at all.
[375]
It's not a binary like,
[377]
"Do I have a job in the future or not?"
[379]
It's, "How is my job gonna change?"
[381]
This isn't a bubble.
[383]
AI is going to become a bigger and bigger
[384]
and bigger part of everything.
[386]
And this might be the way that we get to the stars,
[388]
get to the bottom of the ocean.
[389]
Cure cancer, fix climate change.
[391]
The world is getting better
[393]
and these technologies are enabling that.
[395]
At its best, it's going to solve all of our problems.
[397]
And at its worst, it's going to be
[398]
the thing that ends humanity.
[404]
We must develop responsible and safe practices
[408]
before the power becomes too great for us to control.
[413]
Whose responsibility is it?
[414]
Is it corporations?
[415]
Government?
[416]
Is it society?
[417]
And it's really all of us.
[419]
There will probably be scientists and technologists
[421]
who would say, "I'm not gonna let any consideration stop me
[425]
"from my investigation of the world we live in."
[428]
We all have to together say,
[430]
"Is that a road we want to go down?"
[432]
Human responsibility for human power.
[436]
[Curt] You shouldn't be okay with us making all the rules.
[439]
The new Manhattan Project is not AI development;
[441]
the new Manhattan Project is AI safety.
[443]
The more that they take on characteristics
[446]
of our own cognitive processes,
[447]
the more that raises our responsibility as their creators,
[451]
as their trainers, to constantly be vigilant
[455]
and ask ourselves, "How should I be doing this work?"
[458]
Technology by itself is not good or bad.
[461]
If AI is done right,
[463]
it's all going to make us better humans.