The Case for AI as an Existential Threat
A short YouTube documentary that covers the basic doomer position.
For a more detailed argument, see this article by Professor Yoshua Bengio, and the two debate links below.
Related Links
The AI Dilemma
Open Source, Existential Risk Debate
AI Pause Debate
Sparks of AGI
Smart GPT
Tree of Thought Prompting
The Road to Superintelligence
Transcript (for the forum bots)
0:03 I mean how would like human beings
0:06 sort of experience such a super
0:08 intelligence I mean like in practice
0:10 what’s that like well unless it’s
0:12 limited narrow super intelligence I
0:15 think you mostly don’t get to observe it
0:16 because you are dead unfortunately what
0:21 [Music]
0:25 we started OpenAI seven years ago
0:28 because we felt like something really
0:30 interesting was happening in AI
0:32 we wanted to help steer it in a positive
0:35 direction
0:37 OpenAI unveiled ChatGPT it has been in
0:40 circulation for just three months and
0:42 already an estimated 100 million people
0:44 have used it how many folks in the
0:46 audience have used ChatGPT
0:48 I think it’s the single largest
0:50 opportunity and biggest Paradigm Shift
0:52 we’ve seen since the internet originally
0:53 came out so I’m going to show you how to
0:54 use ChatGPT to make money online absolute
0:57 best ChatGPT prompt it will turn your
0:59 drawing into a fully functional website
1:01 ChatGPT thank you for talking to me
1:03 today you’re welcome I’m here to help
1:05 answer any questions you may have six
1:07 weeks these guys have gone from so I
1:09 appreciate it thanks for having me
1:10 zero valuation
1:12 [Music]
1:13 to now being a 29 billion dollar company
1:16 [Music]
1:17 tonight we take you inside the
1:20 headquarters of a small company in San
1:22 Francisco called open AI creators of
1:25 ChatGPT CEO Sam Altman is just 37. I
1:30 think
1:31 people should be happy that we’re a
1:32 little bit scared of this you’re a
1:35 little bit scared a little bit you
1:36 personally what is your like best case
1:39 scenario for AI and worst case the bad
1:42 case and I think this is like important
1:43 to say is like lights out for all of us
1:46 oh
1:49 listen up this morning a massive
1:51 development on the AI front Elon Musk
1:54 and other major Tech leaders calling for
1:55 a pause on giant artificial intelligence
1:58 experiments writing bluntly in an open
2:00 letter AI systems with human competitive
2:03 intelligence can pose profound risks to
2:06 society and Humanity
2:08 and nobody would suggest that we allow
2:10 anyone to just build nuclear warheads if
2:12 they want that would be insane and mark
2:14 my words
2:16 AI is far more dangerous than nukes the
2:20 man widely seen as the Godfather of
2:22 artificial intelligence has quit his job
2:27 he is what I call the existential threat
2:30 which is the chance that they get more
2:32 intelligent than us and they’ll take
2:34 over I heard the old dude that
2:36 created AI saying this is not safe
2:38 because the AIs got their own minds I'm
2:40 like if we’re in a [ __ ] movie right
2:42 now or what
2:43 [Music]
2:47 [Applause]
2:59 um so you know the original story
3:00 that I heard on OpenAI when you were
3:02 founded as a non-profit where you were
3:04 there as the great sort of check on the
3:07 big companies doing their unknown
3:09 possibly evil thing with AI and you were
3:12 going to build models that sort of
3:13 somehow held them accountable and were
3:15 capable of slowing the field down if need
3:18 be and yet what’s happened arguably is
3:20 the opposite is that your release of
3:22 ChatGPT put such shock waves through the
3:24 tech world that now Google and meta and
3:26 so forth are all scrambling to catch up
3:31 but this isn’t an arms race it’s a
3:33 suicide race where everybody loses if
3:35 anybody’s AI goes out of control
3:41 absolutely do you believe I’m quoting
3:44 him that it is not inconceivable that it
3:46 could actually lead to the extinction of
3:48 the human race not only is it not
3:50 inconceivable I think it’s quite likely
3:52 unfortunately and I'm not the only
3:54 one saying this overall you know maybe
3:55 you’re getting more up to like 50 50
3:56 chance of doom shortly after you have
3:59 AI systems that are human level
4:01 this is a stat that took me by surprise
4:04 50% of AI researchers believe there's a
4:07 10% or greater chance that humans go
4:09 extinct from our inability to control AI
4:14 we have to realize what people are
4:16 talking about the destruction of the
4:18 human race the end of human civilization
4:24 you see how it all evens out who would
4:26 want to continue playing with that risk
4:31 but it is happening today and companies
4:33 are continuing there’s not enough
4:35 divestment there is not enough real
4:37 meaningful action by the experts to say
4:39 we are going to change our behavior in
4:42 the interest of protecting Humanity it
4:45 just sounds absurd that serious people
4:47 like yourself these tech people can talk
4:50 about the end of the human race it
4:52 really it really concentrates the mind
4:56 so every time you release a model every
4:58 time you build such a model you’re
5:00 rolling the dice you know maybe this
5:01 time's fine maybe next time but at some
5:04 point it won't be it's like Russian
5:05 Roulette
5:08 so the risk is that ChatGPT-6 won't be
5:11 written by humans it'll be written by
5:13 ChatGPT-5.5
5:19 I’ve been watching lots of these kind of
5:21 long-form podcasts with people like Sam
5:23 Altman who you mentioned and they'll
5:24 continue to speak about the research
5:26 they’re doing after saying that it might
5:28 bring about the end of humanity why do
5:30 they carry on doing it yeah I
5:33 um
5:34 I think that’s a really important
5:36 wow and the investment in the creation
5:39 of the foundation models is on the order
5:41 of 50 million 100 million we don’t share
5:44 but it's much more than that billions of
5:46 dollars and you know thousands tens of
5:48 thousands of our brightest engineers and
5:49 scientists are working day in day out to
5:51 create ever more powerful systems well
5:53 the number of people who work full-time
5:55 on like the alignment problem is probably
5:57 less than 200 people if I had to guess
5:59 the alignment means making it safe the
6:01 moral alignment
6:02 so at present 99% of the money is going
6:05 into developing them and one percent is
6:07 going into sort of people saying all
6:08 these things might be dangerous it
6:10 should be more like 50 50
6:13 alignment is moving like this
6:15 capabilities are moving like this for
6:18 the listener capabilities are moving
6:20 much faster than the alignment
6:23 kind of like we’re rushing towards this
6:26 cliff but the closer to the cliff we get
6:28 the more Scenic the views are and the
6:30 more money there is there and the more
6:31 so we keep going
6:33 but we have to also stop at some point
6:35 right
6:36 given how fast things are moving and how
6:39 fast you’re developing this technology
6:41 how much time do we actually have
6:48 CEOs involved in artificial
6:49 intelligence development meeting with
6:51 President Biden and vice president
6:52 Harris in Washington the White House
6:54 said that Biden told the CEOs they need
6:56 to mitigate risks posed by AI to
6:58 individuals society and national security
7:01 I'm skeptical and I think many are
7:04 skeptical maybe that’s warranted because
7:06 the technology just developed so so
7:09 quickly and public policy takes so much
7:12 longer to develop just calling in the
7:15 private companies and saying you're in
7:17 charge and you have moral obligation is
7:19 nothing
7:20 especially for Microsoft and Google the
7:22 two leaders here and OpenAI I guess you
7:24 know we hear the word responsible
7:26 responsible responsible we’re going to
7:27 do this responsibly seems like you’re
7:30 not buying that what do you think well
7:31 those companies are responsible to their
7:33 shareholders they’re not necessarily
7:35 responsible to humanity as a whole
7:38 it’s the systemic processes that are
7:41 protecting business interests over human
7:44 concerns that create this pervasive
7:47 environment of irresponsible technology
7:50 development
7:51 and raise your right hand
7:56 as these systems do become more capable
7:58 and I’m not sure how far away that is
8:00 but maybe not super far I think it's
8:03 important that we also spend time
8:05 talking about how we’re going to
8:06 confront those challenges
8:08 so that’s what a large language model is
8:10 it’s this giant trillion parameter
8:13 circuit that’s been trained to predict
8:15 the next word what goes on inside we
8:18 haven’t the faintest idea
8:20 I expect there will be times when we
8:22 find something that we don’t understand
8:23 and we really do need to take a pause
8:25 but we don’t see that yet we probably
8:28 have more idea what’s happening inside
8:30 the human brain than we do about what’s
8:32 happening inside the large language
8:34 models
8:35 there is an aspect of this which all of
8:37 us in the field call it as a black box
8:40 you know you don’t fully understand you
8:42 can’t quite tell why it said this or why
8:45 it got it wrong we have some ideas you
8:47 don’t fully understand how it works and
8:49 yet you’ve turned it loose on society
8:51 [Music]
8:54 just shut down all the giant training
8:57 runs they don't know what they're
8:58 doing they’re not taking it seriously
9:00 there’s an enormous gap between where
9:02 they are now and taking it seriously and
9:03 if they were taking it seriously they’d
9:04 be like we don’t know what we’re doing
9:05 we have to stop that is what it looks
9:07 like to take this seriously
9:09 in a traditional software system a programmer
9:11 writes code which solves the problem AI
9:13 is very different AIS are not really
9:16 written they’re more like grown
9:19 you have a sample of data of what you
9:23 wanted to accomplish and then you use
9:25 huge supercomputers to Crunch these
9:27 numbers to kind of like organically
9:30 almost grow a program that solves these
9:32 problems and importantly we have no idea
9:36 how these programs work internally they
9:39 are complete Black boxes we don’t
9:40 understand at all how their internals
9:42 work this is an unsolved scientific
9:45 problem
9:46 and we do not know how to control these
9:48 things
9:50 what a lot of safety researchers have
9:52 been saying for many years is the most
9:54 dangerous things you can do with an AI
9:56 is first of all teach it to write code
9:57 because that’s the first step towards
9:59 recursive self-improvement which can
10:01 take it from AGI to much higher levels
10:03 Bard has already learned more than 20
10:05 programming languages let's get ChatGPT
10:07 to write some code for us
10:10 [Music]
10:12 oops we’ve done that another thing high
10:15 risk is connecting it to the internet letting
10:17 it go to websites download stuff on its own
10:18 and talk to people a big part of our
10:21 strategy is while these systems are
10:23 still relatively weak and deeply
10:25 imperfect to find ways to get people to
10:29 have experience with them to have
10:30 contact with reality and to figure out
10:33 what we need to do to make it safer and
10:35 better oops we’ve done that already
10:37 that’s like saying well the only way we
10:39 can test our new medicine the only way
10:40 we can know whether it’s safe or not is
10:41 actually put it into the water give
10:43 it to literally everybody as fast as
10:44 possible
10:46 and then before we get the results for
10:47 the last one
10:49 to make an even more potent drug and put
10:51 that into the water supply as well and
10:52 do this as fast as possible
10:54 [Music]
10:57 have you seen Don't Look Up
10:59 the film
11:02 this feels like a gigantic uh Don't Look
11:05 Up scenario it's a movie about like this
11:07 asteroid hurtling to Earth
11:09 good afternoon everybody there’s an
11:12 expert from the machine intelligence
11:14 Research Institute who says that if
11:16 there is not an indefinite pause on AI
11:19 development this is a quote
11:21 literally everyone on Earth will die
11:28 would you agree that does not sound good
11:33 Peter it's quite it's quite something we
11:36 are taking this very seriously we put
11:37 our blueprint out it is a cohesive
11:39 federal government approach to AI
11:41 related risks as you just laid out in a
11:44 very dramatic way but clearly we’re
11:47 trading more dramatic I mean you just
11:49 read it literally everyone on earth will
11:52 die pretty pretty dramatic pretty
11:54 dramatic isn't that an extinction
11:57 level event
11:59 wow that’s not dramatic here at this
12:02 very moment I say we sit tight and
12:06 assess
12:07 we are actually acting out it’s life
12:09 imitating art humanity is doing exactly
12:11 that right now except it’s an asteroid
12:14 that we are building ourselves I feel
12:16 like we’re at the beginning of a
12:17 disaster film where they show the news
12:19 Clips
12:22 okay well is it damaging will it hit
12:24 this one house in particular that’s
12:26 right on the coast of New Jersey it’s my
12:27 ex-wife's house I need it to be gone can
12:29 we make that happen what is your like
12:31 best case scenario for AI and worst case
12:33 I mean I I think the best case is like
12:35 so unbelievably good that it’s like hard
12:38 to
12:40 I I think it’s like hard for me to even
12:42 imagine and when these uh Treasures from
12:45 Heaven are claimed poverty as we know
12:48 it social injustice loss of biodiversity
12:51 all these multitudes of problems are
12:54 just going to become relics of the past
12:56 we are working to build tools that one
12:58 day can help us make new discoveries and
13:00 address some of Humanity’s biggest
13:01 challenges like climate change and
13:03 curing cancer they found a bunch of gold
13:06 and diamonds and rare [ __ ] on the comet
13:09 so they’re gonna let it hit the planet
13:11 to make a bunch of rich people even more
13:14 disgustingly Rich almost nobody is
13:17 talking about it and
13:18 people are squabbling across the planet
13:20 about all sorts of things which seem
13:22 very minor compared to the asteroid
13:23 that’s about to hit us and one of the
13:25 things that worries me most about the
13:26 development of AI at this point so do I
13:28 need to invest in the AI so I can have
13:30 one is that we seem unable to marshal an
13:33 appropriate emotional response to the
13:35 dangers that lie ahead
13:39 right now we're at a fork in the road
13:41 this is the most important fork
13:43 humanity has reached in its over a
13:44 hundred thousand years on this planet
13:46 we’re building effectively a new species
13:49 that’s smarter than us
13:50 it’s as if aliens had landed but we
13:53 didn’t really take it in because they
13:54 speak good English
13:57 we think that regulatory intervention by
13:58 governments will be critical to mitigate
14:00 the risks of increasingly powerful
14:02 models
14:03 for example
14:05 the US government might consider a
14:07 combination of Licensing and testing
14:09 requirements for development and release
14:10 of AI models above a threshold of
14:12 capabilities humans have kind of changed
14:15 the environment on Earth very
14:16 significantly as a result of our
14:18 intelligence relative to other species
14:20 and that’s had you know significant
14:22 consequences for some species and for
14:23 the biosphere in general Common Sense
14:25 tells you that something similar might
14:27 happen if we invent something more
14:29 intelligent than us
14:30 arguably we are on the event horizon
14:34 of the black hole that is
14:37 artificial superintelligence
14:40 if we were to write a book about the
14:43 Folly of the history of human hubris
14:45 dealing with nukes and AI and things
14:48 like that we could easily have the last
14:49 chapter in that book if we are not more
14:51 careful about confident wrong ideas
14:55 it’s possible that there’s no way we
14:57 will control these super intelligences
14:59 and that humanity is just a passing
15:01 phase in the evolution of intelligence
15:03 most observers and experts would say
15:06 we’re on this path towards superhuman
15:08 intelligence and we’re not prepared for
15:11 success
15:12 we’re investing hundreds of billions of
15:14 dollars into a technology that if
15:16 eventually it succeeds could be
15:18 civilization ending could be a huge
15:21 catastrophe
15:24 I have not met anyone right now in
15:27 these labs who says that sure the risk is
15:29 less than one percent of blowing up the
15:30 planet it's important that people know
15:32 that lives are being risked by these
15:33 very particular experiments
15:36 let's be clear they're racing for their
15:38 own personal gain for their own glory
15:40 towards an existential catastrophe that
15:43 no one has consented to
15:46 we just had a little baby and I keep
15:48 asking myself you know
15:53 how old is he even gonna get you know and
15:56 and I said to my wife recently it feels
15:58 a little bit like I was just diagnosed
15:59 with some sort of cancer which has some
16:03 you know risk of dying from and some risk
16:06 of surviving you know
16:08 except this is the kind of cancer which
16:09 would kill all of humanity
16:11 [Music]
16:12 oh
16:14 if somebody’s listening to this and
16:16 they’re young and trying to figure out
16:18 what to do with their life what advice
16:19 would you give them
16:21 don't expect it to be a long life
16:23 don’t put your happiness into the future
16:25 the future is probably not that long at
16:27 this point
16:28 but none know the hour nor the day
16:39 [Music]
16:58 [Music]