Could Scientists Predict the Next Political Crisis? - YouTube
For centuries, people have been trying to predict the future. The Greeks had their oracles; the Romans had their soothsayers. And today we have… well, for some things, we have scientists.

Like, thanks to the laws of physics, I can tell you with near certainty when the sun will rise tomorrow, if you give me your exact location. Technically, it's still not 100%, since some weird freak planetary collision could nudge Earth out of its typical orbit. But it's a very reliable guess. Seriously. Don't lose sleep over that. We're not getting hit by a planet.
As scientists have learned more about how the world works — and we've started feeding computers a lot of data — we've gotten better and better at making predictions about the future. In a way, it's surprising how much we can predict. And yet, there are still these gaping holes, especially when it comes to human behavior. Remember, for example, all of 2016?

Psychologists, though, are actually making headway on figuring out how we can learn to be better predictors.
So today, on SciShow, here's what we can and can't predict with much accuracy — and how science is moving the art of prediction forward.
[♪ INTRO]
Perhaps the best success story — and cautionary tale — for prediction science is the weather. The forecast used to be not much more than a guess about what the next few days' weather would hold. But by learning more about how clouds form, and how pressure interacts with temperature, meteorologists have dramatically improved their predictions in recent decades.

These days, they use complicated computer models that take into account the underlying physics. And by feeding those models reams of data from a variety of instruments all over the world, their five-day forecasts today are as accurate as three-day forecasts were in 2005. That's a huge improvement in a pretty short period of time.

And it's not just helpful for planning your weekend cookout. Getting better with the weather also means we've been able to save more lives during natural disasters.
But while we've improved, we're still pretty bad at forecasting weather much beyond a week. That's because of inherent unpredictability in the way something like a cloud forms. We can know every detail of it, but the outcome still depends on the initial conditions. And according to chaos theory, small changes in those conditions that you cannot measure will change that outcome. This is also known as the butterfly effect.
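You can see this sensitivity in a few lines of code. The logistic map below is a classic textbook toy model of chaos, not a weather model, and the starting values are arbitrary; the point is only that a difference of one part in a billion eventually swamps the prediction.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1 - x), a standard toy model from chaos theory.
# (A stand-in for weather dynamics, not an actual weather model.)

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting conditions that differ by one part in a billion...
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

# ...stay close at first, then diverge until the trajectories bear
# no relation to each other.
print(abs(a[5] - b[5]))    # still tiny after a few steps
print(abs(a[50] - b[50]))  # completely decorrelated by step 50
```

No matter how precisely you measure the start, the gap between measurement and reality grows until the forecast is useless, which is exactly the long-range forecasting limit described above.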
So we can keep learning more and improving our measurements and models, but most of our progress will be incremental. There's a limit to how accurate long-range forecasts can get. All things considered, though, weather prediction is pretty darn good, as long as you go in with the right expectations.
On the other hand, there are certain things that you'd think we'd be able to predict that we just haven't been able to crack. Like earthquakes. They're largely a natural phenomenon, which, like weather, you'd think we'd be able to understand the basics of and then load in a bunch of data to model. But so far, we can't — at least, not in the same way.

We do know a lot about them, like where they're most likely to strike, based on fault lines and historical data. But seismologists haven't yet found a signal that reliably precedes a quake and could give us advance warning. You can detect rumbling just prior to one, but that's not enough time to evacuate an entire city. We just don't understand the factors that determine how two tectonic plates will interact with each other.
So, the timing and magnitude of any single specific earthquake remains a mystery. Which is obviously bad for trying to keep people safe. Maybe, one day, seismologists will discover new basic phenomena that allow us to forecast earthquakes with much better foresight. But it's also possible that we won't. And without that foundation, earthquakes will remain an enigma that we can only loosely estimate.

It's a reminder that predicting the future depends on mountains of carefully collected data — which is great, but also sometimes hard to come by.
The vast majority of things that we have real trouble predicting, though, aren't based on the physical world. There, at least, we can partially model things to get some unbiased idea of the probability. Instead, the real mystery is... you, and me, and us. Elections, stock markets, political uprisings — things that hinge on people and societies — these are much more challenging, which is not that big of a surprise.
The classic approach to these questions is to use experts. After all, if someone knows a lot about a specific country, they should be able to say with more accuracy whether a foreign leader will make a certain trade deal, right? Well, it turns out that experts aren't very good at economic and political forecasts. In one landmark experiment that collected these kinds of predictions from more than 280 experts over nearly two decades, the so-called 'experts' were only a tad better than random guessing.

We'll get back to why scientists think experts aren't very good — it has a lot to do with how they think and common psychological biases we all fall prey to. But don't always assume that knowledge is power when it comes to the future. At least, when people are involved.
The other main way to tackle these sorts of questions is to use data. For something like an election, you can use polling data — and the more there is, the better. And if you know a bit about the quality of each poll, you can weight them accordingly and aggregate them together to get your best guess.
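At its simplest, that kind of aggregation is just a quality-weighted average. The polls and weights below are invented for illustration; real aggregators also adjust for things like sample size, recency, and pollster house effects.

```python
# A minimal sketch of poll aggregation: a quality-weighted average.
# The numbers are made up; real models are considerably fancier.

def aggregate_polls(polls):
    """Weighted average of poll results.

    polls: list of (share_for_candidate, quality_weight) tuples.
    """
    total_weight = sum(w for _, w in polls)
    return sum(share * w for share, w in polls) / total_weight

# Hypothetical polls: (candidate's share, pollster quality weight)
polls = [
    (0.52, 3.0),  # high-quality pollster
    (0.48, 1.0),  # lower-quality pollster
    (0.51, 2.0),  # mid-quality pollster
]

print(round(aggregate_polls(polls), 3))  # → 0.51
```

The high-quality poll pulls the aggregate toward its number, which is the whole point of weighting by quality instead of averaging naively.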
This isn't foolproof, but this type of analytical approach is usually much better than asking a single expert. Depending on the question you're trying to answer, you can even use artificial intelligence and machine learning to make forecasts, although this is still a work in progress.
In machine learning, computers use algorithms — basically just sets of rules — to teach themselves over time. The advantage here is that if you don't actually know how something works — like, say, what causes political violence — you can feed a computer a bunch of data and see if it can find any patterns for you.
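As a toy illustration of that idea, here is a tiny "learner" that infers a rule from labeled examples instead of being handed one. The data, features, and labels are entirely made up, and real forecasting models are far more sophisticated than this nearest-centroid sketch.

```python
# Pattern-finding in miniature: instead of hand-coding a rule, let a
# simple learner infer one from labeled examples. All data is invented.

def train_centroids(examples):
    """Nearest-centroid 'learning': average the feature vectors per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [s + f for s, f in zip(prev, features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Classify by the closest centroid (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (feature vector, observed outcome).
examples = [
    ([0.9, 0.8], "unrest"),
    ([0.8, 0.9], "unrest"),
    ([0.1, 0.2], "stable"),
    ([0.2, 0.1], "stable"),
]

model = train_centroids(examples)
print(predict(model, [0.85, 0.75]))  # → unrest
print(predict(model, [0.15, 0.25]))  # → stable
```

Nothing here encodes *why* unrest happens; the model just finds that high feature values cluster with one label, which is the pattern-finding advantage described above.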
So far, this method hasn't pulled off any notable victories — at least, not in those tricky human situations — but it's something to keep tabs on in that future we've been talking about. It's all pretty new, but as we keep trying out this technology and improving it, we'll hopefully make some progress.
Okay, so we know that experts aren't as good as we'd think they'd be. But the whole story is a little more complicated. Because one of the things that long-running study found was that some experts are better than others. Experts who believed in big, grand ideas — like the idea that all governmental regulation is bad, or that the environment is doomed — generally didn't do so well.
Those who were less wedded to these kinds of concepts, and were willing to change their opinions, did far better. This suggests that personality and styles of thinking are important for our ability to make good predictions. And that perhaps, if you're willing, you can learn to get better at it, too.

The strongest case for this comes from a remarkable project sponsored by a US agency called the Intelligence Advanced Research Projects Activity, or IARPA.
It's kind of like DARPA, but for military intelligence. Back in 2011, IARPA realized that even well-trained intelligence officers weren't so hot at predicting events, and that maybe they could find a better way. So they set up a four-year forecasting tournament for people to predict political or economic outcomes. It was a contest, but also an experiment. Different teams tried out different strategies for producing the most accurate predictions.

And one team, called the Good Judgment Project, blew the other four out of the water — so much so that the government stopped funding the others just to focus on the winner.
The Good Judgment Project was actually led by the same psychologist behind the other study showing that experts are, on average, poor predictors. But what he also realized was that a small number of people are remarkably good at answering certain questions — stuff like, 'Will Serbia leave the EU in the next six months?' or, 'Will this politician resign by March?'

It wasn't just luck, and it wasn't just that these people were smart or well-versed in international affairs, either. The participants were normal folks who volunteered; they had no particular expertise. And they outperformed intelligence analysts with access to classified material.
Which sounds, like, pretty humbling for those intelligence analysts. What set these so-called superforecasters apart were certain shared personality traits, like an openness to consider new ideas, and a willingness to revise them in the face of new facts.
They were intelligent, but not geniuses, and while they were usually comfortable with numbers, they weren't using statistics or models to arrive at their answers. Instead, the superforecasters were thinking through the problems probabilistically. In other words, they carefully assessed the likelihood of various outcomes, and factored everything into their decision.
This prevented them from being susceptible to a lot of biases, including our natural tendency to make quick, intuitive decisions by falling back on heuristics, or shortcuts. For instance, forecasters who read a lot about terrorism, even in an effort to become more informed, might begin to think terrorism is more frequent than it actually is, simply because they're exposed to it a lot. This is known as the availability heuristic. But by becoming aware of these pitfalls, and sticking to probabilistic thinking, the superforecasters could avoid them.
Fortunately for us mere mortals, the Good Judgment Project was able to develop a short training program that can improve accuracy by 10% over a year. In it, participants learn about cognitive biases and are encouraged to break big problems down into smaller parts so they can think about them more easily. They're also taught to think about problems from all sorts of perspectives, and to neither overreact nor underreact to new information.
And, most importantly, they learn from their mistakes. This is where most experts don't put in the work. But if you never pause to think about where you went wrong, you can't learn how to be better. Which is true for a lot of things, come to think of it.
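Learning from mistakes requires scoring them, and probability forecasts are commonly graded with the Brier score: the mean squared difference between the forecast probability and what actually happened, where lower is better. The forecasts and outcomes below are invented for illustration.

```python
# Brier score: mean squared difference between forecast probabilities
# and outcomes (1 = event happened, 0 = it didn't). Lower is better;
# always answering 0.5 scores exactly 0.25. Data here is made up.

def brier_score(forecasts, outcomes):
    pairs = zip(forecasts, outcomes)
    return sum((p - o) ** 2 for p, o in pairs) / len(forecasts)

outcomes  = [1, 0, 1, 1]            # what actually happened
confident = [0.9, 0.1, 0.8, 0.7]    # a well-calibrated forecaster
hedging   = [0.5, 0.5, 0.5, 0.5]    # pure coin-flipping

print(round(brier_score(confident, outcomes), 4))  # → 0.0375
print(brier_score(hedging, outcomes))              # → 0.25
```

Tracking a score like this over many questions is what makes "where did I go wrong?" a concrete exercise rather than a vague feeling.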
To arrive at its winning predictions, the Good Judgment Project team also took advantage of the wisdom of the crowd, but added a tweak to traditional methods. Basically, if you average everyone's predictions, the result is usually fairly close. But this team didn't stop there. Instead, they gave extra weight to their group of 40 or so superforecasters, and finally, adjusted that number up or down a bit further, in what is called extremizing.
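A minimal sketch of that recipe might look like the following. The forecasts, the superforecaster weight, and the extremizing exponent are all made up for illustration; the Good Judgment Project's actual algorithms were more sophisticated.

```python
# Crowd average -> extra weight for superforecasters -> extremize.
# All numbers are invented; this only sketches the shape of the method.

def extremize(p, a=2.5):
    """Push a probability p away from 0.5, toward 0 or 1 (a > 1)."""
    return p**a / (p**a + (1 - p)**a)

def crowd_forecast(regular, supers, super_weight=3.0):
    """Weighted average of everyone's forecasts, then extremized."""
    total = sum(regular) + super_weight * sum(supers)
    count = len(regular) + super_weight * len(supers)
    return extremize(total / count)

# Hypothetical probabilities for some yes/no question.
regular_forecasts = [0.55, 0.6, 0.5, 0.65]
super_forecasts = [0.7, 0.75]

# The weighted crowd mean is 0.665; extremizing pushes it toward 1.
print(round(crowd_forecast(regular_forecasts, super_forecasts), 2))
```

The intuition behind extremizing is that each forecaster hedges because they only see part of the evidence; if many partly informed people all lean the same way, the pooled estimate deserves to be more confident than their average.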
This technique worked really well. It's still not perfect, of course, but it's proof that sometimes, people can be fairly good at glimpsing the future — well, as long as you ask a lot of them, and do some fancy math to bias things toward your most talented group. Like with any prediction, data is still really important.
This method won't work for everything, though. Many people think there are very rare, but still very important, events that are too hard to predict — something like 9/11. They call these black swans. But it's possible that breaking things down and learning more will allow us to get better at these, too.
Ultimately, most experts agree that the best predictions about these sorts of tough questions will come from a combination of human and machine. Really, though, the only thing we can be certain of is that we won't be able to predict everything.
Thank you for watching this episode of SciShow!
If you want to learn more about how our minds work and influence the ways we think and respond to things, we have a channel called SciShow Psychology, and you can check it out over at youtube.com/scishowpsych.
[♪ OUTRO ]