The 4 things it takes to be an expert - YouTube

Channel: Veritasium

[0]
- Do you bring this trick out at parties?
[3]
- Oh no. It's a terrible party trick.
[5]
Here we go.
[6]
3.141592653589793
[10]
- This is Grant Gussman.
[11]
He watched an old video of mine
[13]
about how we think:
[13]
that there are two systems of thought.
[16]
System two is the conscious, slow, effortful system.
[19]
And system one is subconscious.
[21]
Fast and automatic.
[23]
To explore how these systems work in his own head,
[26]
Grant decided to memorize a hundred digits of pi.
[29]
- Three eight four four six...
[30]
- Then he just kept going.
[33]
He has now memorized 23,000 digits of pi
[36]
in preparation to challenge the North American record
[38]
- .95493038196.
[40]
That's 200.
[41]
(Derek laughs)
[45]
- That's amazing.
[47]
I have wanted to make a video about experts for a long time.
[56]
This is Magnus Carlsen,
[58]
the five-time world chess champion.
[60]
He's being shown chessboards
[62]
and asked to identify the game in which they occurred.
[65]
- This looks an awful lot like Tal vs. Botvinnik.
[70]
(playful music)
[73]
- Whoops.
[76]
- Okay. This is the 24th game from Sevilla obviously.
[78]
(chuckling)
[79]
- Now I'm going to play through an opening.
[82]
And stop me when you recognize the game.
[85]
And if you can tell me who was playing black in this one.
[88]
Okay.
[89]
(playful music)
[92]
I'm sure you've seen this opening before.
[94]
- Okay. It's gonna be Anand.
[95]
(laughs)
[97]
- Against?
[99]
- Zapata.
[100]
- How can he do this?
[102]
It seems like superhuman ability.
[104]
Well decades ago,
[105]
scientists wanted to know
[106]
what makes experts like chess masters special.
[109]
Do they have incredibly high IQs,
[111]
much better spatial reasoning than average,
[114]
bigger short-term memory spans?
[116]
Well, it turns out that as a group,
[118]
chess masters are not exceptional on any of these measures.
[122]
But one experiment showed
[124]
how their performance was vastly superior to amateurs.
[128]
In 1973, William Chase and Herbert Simon
[130]
recruited three chess players,
[132]
a master, an A player,
[134]
who's an advanced amateur, and a beginner.
[138]
A chessboard was set up with around 25 pieces
[140]
positioned as they might be during a game.
[142]
And each player was allowed
[144]
to look at the board for five seconds.
[146]
Then they were asked
[147]
to replicate the setup from memory
[149]
on a second board in front of them.
[151]
The players could take as many
[152]
five second peeks as they needed
[154]
to get their board to match.
[156]
From just the first look,
[158]
the master could recall the positions of 16 pieces.
[161]
The A player could recall eight,
[163]
and the beginner only four.
[165]
The master only needed half the number of peeks
[168]
as the A player to get their board perfect.
[170]
But then the researchers arranged the board
[172]
with pieces in random positions
[174]
that would never arise in a real game.
[177]
And now, the chess master performed
[179]
no better than the beginner.
[181]
After the first look,
[182]
all players, regardless of rank
[184]
could remember the location of only three pieces.
[186]
The data are clear.
[188]
Chess experts don't have better memory in general,
[190]
but they have better memory specifically
[192]
for chess positions that could occur in a real game.
[195]
The implication is that what makes the chess master special
[198]
is that they have seen lots and lots of chess games.
[202]
And over that time,
[202]
their brains have learned patterns.
[204]
So rather than seeing
[205]
individual pieces at individual positions,
[208]
they see a smaller number of recognizable configurations.
[212]
This is called 'chunking'.
[213]
What we have stored in long-term memory
[216]
allows us to recognize complex stimuli as just one thing.
[220]
For example, you recognize this as pi
[222]
rather than a string of six unrelated numbers
[225]
or meaningless squiggles for that matter.
[227]
- There's a wonderful sequence I like a lot
[229]
which is three zero one seven three.
[232]
Which to me means Stephen Curry, number 30, won 73 games,
[237]
which is the record back in 2016.
[239]
So three oh one seven three.
[241]
- At its core, expertise is about recognition.
[245]
Magnus Carlsen recognizes chess positions
[247]
the same way we recognize faces.
[250]
And recognition leads directly to intuition.
[253]
If you see an angry face,
[255]
you have a pretty good idea
[257]
of what's gonna come next.
[259]
Chess masters recognize board positions
[261]
and instinctively know the best move.
[264]
- Most of the time, I know what to do.
[268]
I don't have to figure it out.
[272]
- To develop the long term memory of an expert
[274]
takes a long time.
[276]
10,000 hours is the rule of thumb
[278]
popularized by Malcolm Gladwell,
[280]
but 10,000 hours of practice by itself is not sufficient.
[284]
There are four additional criteria that must be met.
[288]
And in areas where these criteria aren't met,
[291]
it's impossible to become an expert.
[294]
So the first one is many repeated attempts with feedback.
[298]
Tennis players hit hundreds of forehands in practice.
[301]
Chess players play thousands of games
[303]
before they're grandmasters
[304]
and physicists solve thousands of physics problems.
[308]
Each one gets feedback.
[310]
The tennis player sees
[311]
whether each shot clears the net and is in or out.
[314]
The chess player either wins or loses the game.
[316]
And the physicist gets the problem right or wrong.
[319]
But some professionals don't get repeated experience
[322]
with the same sorts of problems.
[324]
Political scientist Philip Tetlock picked 284 people
[328]
who make their living commenting or offering advice
[330]
on political and economic trends.
[332]
This included journalists,
[334]
foreign policy specialists,
[335]
economists, and intelligence analysts.
[337]
Over two decades,
[339]
he peppered them with questions like
[340]
Would George Bush be re-elected?
[342]
Would apartheid in South Africa end peacefully?
[345]
Would Quebec secede from Canada?
[348]
And would the dot-com bubble burst?
[350]
In each case, the pundits rated the probability
[353]
of several possible outcomes.
[354]
And by the end of the study,
[356]
Tetlock had quantified 82,361 predictions.
[360]
So, how did they do?
[363]
Pretty terribly.
[364]
These experts, most of whom had postgraduate degrees,
[367]
performed worse than if they had just
[369]
assigned equal probabilities to all the outcomes.
[372]
In other words,
[373]
people who spent their time
[374]
and earned their living studying a particular topic
[377]
produced poorer predictions than random chance.
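To make "worse than assigning equal probabilities" concrete: the video doesn't say how Tetlock scored the forecasts, but a common way to score probabilistic predictions is the Brier score, where lower is better. A minimal sketch with made-up forecasts, assuming three possible outcomes per question:

```python
# Hypothetical illustration: scoring probabilistic forecasts with the
# Brier score (lower is better). This is not necessarily Tetlock's exact
# scoring method, and the forecasts below are made up.

def brier_score(probabilities, actual_index):
    """Sum of squared errors between forecast probabilities and what happened."""
    return sum(
        (p - (1.0 if i == actual_index else 0.0)) ** 2
        for i, p in enumerate(probabilities)
    )

# Three possible outcomes for a question; suppose outcome 2 actually occurred.
confident_pundit = [0.7, 0.2, 0.1]      # confident, but on the wrong outcome
equal_probabilities = [1/3, 1/3, 1/3]   # the "no-knowledge" baseline

print(brier_score(confident_pundit, actual_index=2))     # 1.34
print(brier_score(equal_probabilities, actual_index=2))  # ~0.67
```

Averaged over many one-off questions, confidently wrong forecasts like the first one can drag a pundit's score below the equal-probability baseline.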
[380]
Even in the areas they knew best,
[381]
experts were not significantly better than non-specialists.
[385]
The problem is,
[386]
most of the events they have to predict are one-offs.
[388]
They haven't had the experience
[390]
of going through these events
[391]
or very similar ones many times before.
[393]
Even presidential elections only happen infrequently,
[396]
and each one in a slightly different environment.
[399]
So we should be wary of experts
[401]
who don't have repeated experience with feedback.
[404]
(upbeat music)
[406]
The next requirement is a valid environment.
[409]
One that contains regularities
[410]
that make it at least somewhat predictable.
[413]
A gambler betting at the roulette wheel for example,
[415]
may have thousands of repeated experiences
[418]
with the same event.
[419]
And for each one,
[419]
they get clear feedback
[420]
in the form of whether they win or lose,
[423]
but you would rightfully not consider them an expert
[425]
because the environment is low validity.
[428]
A roulette wheel is essentially random,
[430]
so there are no regularities to be learned.
[433]
In 2006, legendary investor Warren Buffett
[436]
offered to bet a million dollars
[438]
that he could pick an investment
[440]
that would outperform Wall Street's best hedge funds
[442]
over a 10 year period.
[444]
Hedge funds are pools of money
[445]
that are actively managed by some of the brightest
[447]
and most experienced traders on Wall Street.
[449]
They use advanced techniques like short selling,
[452]
leverage, and derivatives
[453]
in an attempt to provide outsized returns.
[456]
And consequently, they charge significant fees.
[459]
One person took Buffett up on the bet:
[461]
Ted Seides of Protege Partners.
[463]
For his investment, he selected five hedge funds.
[466]
Well actually, five funds of hedge funds.
[469]
So in total, a collection of over 200 individual funds.
[473]
Warren Buffett took a very different approach.
[476]
He picked the most basic,
[477]
boring investment imaginable;
[479]
a passive index fund that just tracks
[481]
the weighted value of the 500 biggest
[483]
public companies in America, the S&P 500.
[487]
They started the bet on January 1st, 2008,
[489]
and immediately things did not look good for Buffett.
[492]
It was the start of the global financial crisis,
[495]
and the market tanked.
[496]
But the hedge funds could change their holdings
[499]
and even profit from market falls.
[501]
So they lost some value,
[502]
but not as much as the market average.
[504]
The hedge funds stayed ahead
[506]
for the next three years,
[507]
but by 2011, the S&P 500 had pulled even.
[511]
And from then on, it wasn't even close.
[513]
The market average surged
[515]
leaving the hedge funds in the dust.
[517]
After 10 years, Buffett's index fund gained 125.8%
[521]
compared to the hedge funds' 36%.
[524]
Now the market performance
[525]
was not unusual over this time.
[527]
At eight and a half percent annual growth,
[529]
it nearly matches the stock market's long run average.
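As a quick sanity check on those figures (my own arithmetic, not stated in the video), compounding 8.5% a year for 10 years gives roughly the 126% total gain quoted for the index fund:

```python
# Rough check: 8.5% compounded annually for 10 years lands close to
# the ~125.8% total gain quoted. (My arithmetic, not from the video.)
annual_growth = 0.085
years = 10
total_gain = (1 + annual_growth) ** years - 1
print(f"{total_gain:.1%}")  # ~126.1%
```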
[532]
So why did so many investment professionals
[534]
with years of industry experience,
[536]
research at their fingertips,
[537]
and big financial incentives to perform,
[540]
fail to beat the market?
[542]
Well because stocks are a low validity environment.
[544]
Over the short term,
[545]
stock price movements are almost entirely random.
[548]
So the feedback, although clear and immediate
[550]
doesn't actually reflect anything
[552]
about the quality of the decision making.
[555]
It's closer to a roulette wheel than to chess.
[559]
Over a 10 year period,
[560]
around 80% of all actively managed investment funds
[564]
fail to beat the market average.
[566]
And if you look at longer time periods,
[567]
underperformance rises to 90%.
[570]
And before you say,
[571]
"Well that means 10% of managers have actual skill,
[574]
consider that just through random chance,
[576]
some people would beat the market anyway.
[578]
Portfolios picked by cats or by throwing darts
[581]
have been shown to do just that.
[582]
And in addition to luck,
[584]
there are nefarious practices
[585]
from insider trading to pump and dump schemes.
[588]
Now I don't mean to say there are no expert investors.
[591]
Warren Buffett himself is a clear example.
[593]
But the vast majority of stock pickers
[596]
and active investment managers,
[597]
do not demonstrate expert performance
[600]
because of the low validity of their environment.
[603]
Brief side note,
[604]
if we know that stock picking
[606]
will usually yield worse results over the long term,
[609]
and that what active managers charge in fees
[611]
is rarely compensated for in improved performance,
[614]
then why is so much money
[616]
invested in individual stocks,
[618]
mutual funds, and hedge funds?
[620]
Well let me answer that with a story.
[622]
There was an experiment carried out with rats and humans,
[625]
where there's a red button and a green button
[627]
that can each light up.
[629]
80% of the time, the green button lights up.
[631]
And 20% of the time the red button lights up,
[634]
but randomly.
[635]
So you can never be sure which button will light.
[638]
And the task for the subject,
[640]
either rat or human,
[641]
is to guess beforehand which button will light up
[643]
by pressing it.
[644]
For the rat,
[645]
if they guess right, they get a bit of food.
[647]
And if they guess wrong, a mild electric shock.
[649]
The rat quickly learns to press only the green button
[652]
and accept the 80% win percentage.
[655]
Humans on the other hand,
[657]
usually press the green button.
[659]
But once in a while,
[660]
they try to predict when the red light will go on.
[662]
And as a result, they guess right only 68% of the time.
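That 68% figure is consistent with "probability matching" (my arithmetic; the video gives only the 80/20 split and the result): if you guess green about 80% of the time and red about 20% of the time against lights that are 80/20 random, your expected accuracy is 0.8 × 0.8 + 0.2 × 0.2 = 0.68.

```python
# Probability matching vs. always choosing the majority option.
# The 80/20 split is from the video; the assumption that humans guess
# green 80% of the time and red 20% of the time is the usual explanation.
p_green = 0.80
p_red = 1 - p_green

always_green = p_green                          # rat strategy: 80% correct
probability_matching = p_green**2 + p_red**2    # human strategy: 68% correct

print(f"always green:         {always_green:.0%}")
print(f"probability matching: {probability_matching:.0%}")
```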
[666]
We have a hard time accepting average results.
[668]
And we see patterns everywhere, including in randomness.
[671]
So we try to beat the average by predicting the pattern.
[675]
But when there is no pattern, this is a terrible strategy.
[678]
Even when there are patterns,
[680]
you need timely feedback in order to learn them.
[683]
And YouTube knows this,
[684]
which is why within the first hour
[685]
after posting a video,
[687]
they tell you how its performance compares
[689]
to your last 10 videos.
[691]
There's even confetti fireworks
[692]
when the video is number one.
[694]
I know it seems like a silly thing,
[696]
but you have no idea how powerful a reward this is
[699]
and how much YouTuber effort
[700]
is spent chasing this supercharged dopamine hit.
[704]
To understand the difference between
[705]
immediate and delayed feedback,
[707]
psychologist Daniel Kahneman contrasts
[709]
the experiences of anesthesiologists and radiologists.
[713]
Anesthesiologists work alongside the patient
[715]
and get feedback straight away.
[717]
Is the patient unconscious with stable vital signs?
[720]
With this immediate feedback,
[721]
it's easier for them to learn
[722]
the regularities of their environment.
[725]
Radiologists, on the other hand,
[726]
don't get rapid feedback on their diagnoses
[728]
if they get it at all.
[730]
This makes it much harder for them to improve.
[732]
Radiologists typically correctly diagnose
[735]
breast cancer from X-rays just 70% of the time.
[738]
Delayed feedback also seems to be a problem
[741]
for college admissions officers and recruitment specialists.
[744]
After admitting someone to college,
[746]
or hiring someone at a big company,
[747]
you may never, or only much later, find out how they did.
[751]
This makes it harder to recognize the patterns
[753]
in ideal candidates.
[755]
In one study,
[755]
Richard Melton tried to predict
[757]
the grades of freshmen
[758]
at the end of their first year of college.
[760]
A set of 14 counselors
[762]
interviewed each student
[763]
for 45 minutes to an hour.
[765]
They also had access to high school grades,
[767]
several aptitude tests,
[768]
and a four page personal statement.
[771]
For comparison, Melton created an algorithm
[774]
that used as input
[775]
only a fraction of the information.
[777]
Just high school grades and one aptitude test.
[779]
Nevertheless, the formula was more accurate
[782]
than 11 of the 14 counselors.
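The video doesn't describe what Melton's formula looked like; studies of this kind typically use a simple weighted combination of the inputs. A minimal sketch with made-up weights and scaling, just to show how little machinery is involved:

```python
# Hypothetical sketch of the kind of simple linear formula used in
# statistical-prediction studies. The weights and scaling are invented;
# the video does not describe Melton's actual formula.

def predict_freshman_gpa(high_school_gpa, aptitude_score):
    """Predict first-year GPA from just two inputs (illustrative weights)."""
    aptitude_scaled = aptitude_score / 200.0  # e.g. map an 800-point test onto 0-4
    return 0.6 * high_school_gpa + 0.4 * aptitude_scaled

print(predict_freshman_gpa(high_school_gpa=3.5, aptitude_score=650))  # ~3.4
```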
[785]
Melton's study was reported alongside
[787]
over a dozen similar results
[789]
across a variety of other domains,
[790]
from predicting who would violate parole
[792]
to who'd succeed in pilot training.
[795]
If you've ever been denied admission
[796]
to an educational institution,
[798]
or turned down for a job,
[799]
it feels like an expert has considered your potential
[802]
and decided that you don't have what it takes to succeed.
[804]
I was rejected twice from film school
[807]
and twice from a drama program.
[809]
So it's comforting to know
[810]
that the gatekeepers at these institutions
[812]
aren't great predictors of future success.
[814]
So if you're in a valid environment,
[816]
and you get repeated experience with the same events,
[819]
with clear, timely feedback from each attempt,
[821]
will you definitely become an expert
[823]
in 10,000 hours or so?
[825]
The answer unfortunately is no.
[827]
Because most of us want to be comfortable.
[830]
For a lot of tasks in life,
[831]
we can become competent in a fairly short period of time.
[834]
Take driving a car, for example:
[836]
initially it's pretty challenging.
[837]
It takes up all of system two.
[839]
But after 50 hours or so, it becomes automatic.
[842]
System one takes over,
[843]
and you can do it without much conscious thought.
[845]
After that, more time spent driving
[848]
doesn't improve performance.
[849]
If you wanted to keep improving,
[851]
you would have to try driving in challenging situations
[853]
like new terrain, higher speeds, or difficult weather.
[857]
Now I have played guitar for 25 years,
[858]
but I'm not an expert because I usually play the same songs.
[862]
It's easier and more fun.
[864]
But in order to learn,
[865]
you have to be practicing at the edge of your ability,
[867]
pushing beyond your comfort zone.
[869]
You have to use a lot of concentration
[871]
and methodically, repeatedly attempt things
[873]
you aren't good at.
[875]
- You can practice everything exactly as it is
[878]
and exactly as it's written,
[881]
but at just such a speed that
[883]
you have to think about
[885]
and know exactly where you are
[886]
and what your fingers are doing
[887]
and what it feels like.
[889]
- This is known as deliberate practice.
[891]
And in many areas
[892]
professionals don't engage in deliberate practice,
[894]
so their performance doesn't improve.
[896]
In fact, sometimes it declines.
[898]
If you're experiencing chest pain
[900]
and you walk into a hospital,
[901]
would you rather the doctor is a recent graduate
[904]
or someone with 20 years' experience?
[906]
Researchers have found
[907]
that diagnostic skills of medical students
[909]
increase with their time in medical school,
[911]
which makes sense.
[912]
The more cases you've seen with feedback,
[914]
the better you are at spotting patterns.
[915]
But this only works up to a point.
[917]
When it comes to rare diseases of the heart or lungs,
[920]
doctors with 20 years' experience were actually worse
[923]
at diagnosing them than recent graduates.
[925]
And that's because they haven't thought about
[926]
those rare diseases in a long time.
[928]
So they're less able to recognize the symptoms.
[931]
Only after a refresher course
[933]
could doctors accurately diagnose these diseases.
[936]
And you can see the same effect in chess.
[938]
The best predictor of skill level,
[940]
is not the number of games or tournaments played,
[942]
but the number of hours dedicated
[944]
to serious solitary study.
[946]
Players spend thousands of hours alone
[948]
learning chess theory,
[949]
studying their own games and those of others.
[952]
And they play through compositions,
[953]
which are puzzles designed
[954]
to help you recognize tactical patterns.
[956]
In chess, as in other areas,
[958]
it can be challenging to force yourself
[960]
to practice deliberately.
[962]
And this is why coaches and teachers are so valuable.
[965]
They can recognize your weaknesses
[966]
and assign tasks to address them.
[969]
To become an expert,
[970]
you have to practice for thousands of hours
[972]
in the uncomfortable zone,
[974]
attempting the things you can't do quite yet.
[977]
True expertise is amazing to watch.
[979]
To me, it looks like magic, but it isn't.
[982]
At its core, expertise is recognition.
[985]
And recognition comes from the incredible amount
[987]
of highly structured information
[988]
stored in long-term memory.
[990]
To build that memory requires four things:
[993]
a valid environment, many repetitions, timely feedback,
[997]
and thousands of hours of deliberate practice.
[1000]
When those criteria are met,
[1001]
human performance is astonishing.
[1003]
And when it's not,
[1005]
you get people we think of as experts
[1007]
who actually aren't.
[1009]
(techno sound)
[1013]
If you want to become a STEM expert,
[1016]
you have to actively interact with problems.
[1018]
And that's what you can do with Brilliant,
[1020]
the sponsor of this video.
[1021]
Check out this course on computer science,
[1023]
where you can uncover the optimal strategy
[1025]
for finding a key in a room.
[1027]
And you quickly learn
[1028]
how your own strategy can be replicated in a neural network.
[1031]
Logic is another great course
[1033]
that I find challenges me mentally.
[1034]
You go from thinking you understand something
[1036]
to actually getting it.
[1038]
And if it feels difficult, that's a good thing.
[1040]
It means you're getting pushed outside your comfort zone.
[1043]
This is how Brilliant facilitates deliberate practice.
[1046]
And if you ever get stuck,
[1047]
a helpful hint is always close at hand.
[1049]
So don't fall into the trap of just getting comfortable
[1052]
in doing what you know how to do.
[1053]
Build in the habit of being uncomfortable,
[1056]
and regularly learning something new.
[1058]
That is the way to lifelong learning and growth.
[1060]
So I invite you to check out the courses
[1062]
over at Brilliant.org/veritasium,
[1065]
and I bet you will find something there
[1066]
that you wanna learn.
[1067]
Plus if you click through right now,
[1068]
Brilliant are offering 20% off
[1070]
an annual premium subscription
[1072]
to the first 200 people to sign up.
[1074]
So I wanna thank Brilliant for supporting Veritasium,
[1076]
and I wanna thank you for watching.