hello everybody and we are here with the
first episode of a deeper dive
for this first episode we will be
covering the thought experiment of
roko’s basilisk
as you can see in the title there is an
info hazard in it
and the reason i mention that is because to some people it is legitimately such a terrifying concept, since the concept of this thought experiment is that knowing about it in detail is what leads you to danger
so if you’re someone who has real
problems with existentialism or
something like that
this may not be the video for you but it
was such a widespread problem online
that i wanted to put that disclaimer out
there without further ado we’ll go ahead
and get into it
i do want to say though that if there
are any other topics from the iceberg
that you’d like me to cover
please leave them in the comments i try
to read every comment and as always
thank you for watching the concept of
roko’s basilisk began
when a user by the name of roko left a post about it on the less wrong forums
the original post is kind of long so i’m
just going to summarize it here
but if you want to read the original copy, there will be a link to the rationalwiki page in the description. the thought
experiment went something like this
if in the future we approach the singularity, which as i mentioned in the iceberg video is the point at which technology comes to an irreversible level, a level of intelligence greater than that of humanity. anyway, if technology ever comes to that point
there will probably be ais in place that will be able to determine, either through a program or by looking at the history of each individual, who was responsible for its creation. if this ai adopted concepts of humanity we understand, such as fear, then it may have a vested interest in dissuading those who do not want it to exist, in other words the people who did not help create it. what that means is, if this ai
was as smart as it could potentially be
then it could have advanced knowledge of
you and everything you’ve ever done
even if it doesn’t necessarily have proof that you yourself did not help create it, it may be able to put all of your emotions, memories, and things like that into a simulation, which would produce an answer the ai would probably consider enough to judge you on
all of that boils down to the same
concept that if you did not help the
supercomputer come into existence
then it will end your existence or at
least make it a living hell
something that really gets brushed over
in this is that
it is not expressly saying that the
computer will kill you
it is saying that it will dissuade ideas
against itself and what better way to
dissuade public ideas
than torture assuming this thing just
doesn’t wipe out humanity or at least
those parts of humanity that did not
help create it then it could
theoretically hook you up to a computer
that keeps you in a perpetual state of
torture forever
it could induce chemicals into your mind that make you have heightened senses of pain, or it could look through your memories to find your worst fears and make them a reality, or it could simply put you on life support to make you immortal and then repeatedly make you experience death over and over
essentially if you’re familiar with the
horror short story i have no mouth and i
must scream
this is a logical, real-world application of am from that story. so it seems like the
logical thing to do would be to help
this thing come into existence
however, from that very idea, that you fear this thing coming into existence to the point that you create it, you have now created a tragic self-fulfilling prophecy, in which by fear of something happening you made that thing happen
while this can be viewed as a logical fallacy, it can also be flipped on its head: this ai knew that that would be the determination that came from it, and by its own future existence that’s what pushed you to create it. so to think about it in a logical way: you fearing something that does not exist makes that thing exist, therefore justifying the fear of it, therefore justifying your creation of it
for context on the name a basilisk is a
creature from old world mythology
that is essentially a giant serpent that
can kill someone just by looking at them
and that’s exactly what this ai would do
it would look
through time and space or look through
your personal time and space
and determine if you are beneficial to
it or not this part’s where the info
hazard comes in
obviously if you had never heard of it
or even considered the possibility of
this ai
existing then you’re free to go there’s
no way that the ai could determine
if you were going to help it or if you
did help it if you never even considered
or knew of its existence
however me telling you right now
in this moment is theoretically
enough to make you guilty for not having
done something about it
basically, the whole idea in this scenario is that ignorance of the law would save you; however, me explaining it to you now got rid of your innocence, so you’re
welcome now you may be asking yourself
i’m just some person who lives at home
and has absolutely no understanding
of ai or technology or anything else
and cannot do anything to help well that
would be all fine and dandy
if it wasn’t for the quantum billionaire
concept if you’ll remember in the
iceberg video i think it was the same
video that i mentioned roko’s basilisk
i talked about the idea of quantum
suicide and immortality quantum
billionaire is the same thing only
applied to
wealth let’s put it this way you may not
have a billion dollars but you may have
a hundred dollars
well if you use that hundred dollars and
play the lottery with it over and over
that is a chance to make more money
and more and more and more obviously
this isn’t how the lottery actually
works but
if roko’s basilisk knew that you had some form of
disposable income
or even time to dedicate to helping it
through labor
then that still counts as some manner of
negligence on your part
essentially the idea that there is
something you can do
to help this thing out and now because
you know about it
and aren’t doing it you’re guilty but
at the same time, you never have to worry about this thing if it never comes to be, which would happen if no one decided to
build it
but at the same time those people who
decided to not build it would be guilty
if it was built a lot of people equate
this thought experiment to that of
pascal’s wager
i’m probably out of frame for this but that’s fine, i want to use the whiteboard. pascal’s wager was developed by blaise pascal to determine if it is worth your time to believe in the existence of god. the thought experiment goes something like this: it combines two factors, your belief or non-belief in god, and the idea that god could be real or god could be fake. if
god is real and you believe in him then
you are destined for an eternity in
heaven which is a good thing
if god is fake and you believe in him, well then nothing really happens, the outcome isn’t affected either way. if god is fake
and you do not believe in him well same
thing nothing really happens
the outcome is left the same with no net
gain or loss
however if you do not believe in god and
god is real
then that is an eternity in hell
it makes sense in every equation to
believe in god
rather than not, since your options are either heaven or nothing happening. so how does this apply to roko’s basilisk? well, if you’re thinking i’m comparing roko’s basilisk to the idea of a god, that’s because i am, the idea behind it being that this ai would be so powerful it would be near the level of a deity, therefore your judgment, be it good or bad, would entirely rest on it. put it this way:
if roko’s basilisk isn’t real and you don’t help it, well, nothing happens, just like if you were to try to help it but it isn’t real, again nothing happens. however, if it is real and you don’t help it, uh yeah, crazy hell computer torture forever. but if you do help it, then you survive. therefore, looking at it from the pascal’s wager principle, it is always beneficial for you to help it
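to make that wager table concrete, here’s a tiny python sketch. this is my own illustration, not anything from the original post; the utility numbers are arbitrary stand-ins chosen only to preserve the ordering (torture is far worse than nothing, survival slightly better than nothing), and only the comparison between choices matters:

```python
# Payoff table for the basilisk version of Pascal's wager.
# Keys: (basilisk_exists, you_helped) -> utility to you.
# The numbers are arbitrary stand-ins; only their ordering matters.
payoff = {
    (True,  True):   1,     # it's real and you helped: you survive
    (True,  False): -1000,  # it's real and you didn't: torture forever
    (False, True):   0,     # it's not real, you helped: nothing happens
    (False, False):  0,     # it's not real, you didn't: nothing happens
}

def expected_utility(you_help, p_exists):
    """Expected payoff of a choice, given the probability the basilisk is real."""
    return (p_exists * payoff[(True, you_help)]
            + (1 - p_exists) * payoff[(False, you_help)])

# For any nonzero probability of existence, helping comes out ahead,
# which is the whole force of the wager argument.
for p in (0.5, 0.01, 0.000001):
    assert expected_utility(True, p) > expected_utility(False, p)
print("helping weakly dominates not helping for any p > 0")
```

note that the argument only works if you accept the payoff ordering in the first place, which is exactly the assumption the rest of the video pokes at.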
i also want to emphasize here: i don’t necessarily believe in this, i’m just explaining how the thought experiment works. you may be sitting there thinking to
yourself well if i simply don’t believe
in it and it’s never going
to happen then why waste any of my time
with it because if i choose not to do
anything about it and everyone else
makes that choice
it’s not going to be real but that’s
where newcomb’s paradox comes in
newcomb’s paradox works like this say i
have two boxes
box one and box two you can see
inside of box one and inside of it is a
thousand dollars
you can’t see inside of box two but i
tell you
that it either has zero dollars in it or
a million dollars in it
your two options are you can either take
just box two or both box one and box two
this answer is obvious: you
would take both boxes because if box two
has zero dollars in it you get a
thousand dollars
if box two has a million dollars in it
you get one million one thousand dollars
but let’s throw a wrench in it let’s say
that i am a magic genie
who a hundred percent of the time can
guess which of those options you’ll take
and i say this
if i make a prediction that you will take both boxes, then without telling you i put zero dollars into box two. if i make a prediction that you will just take box two, then i put a million dollars into box two. so basically i, with my magic genie powers, am predicting which of the choices you will take
now this still should be pretty easy
because if i am
right 100% of the time and say you choose to take both boxes, well then there’s going to be zero dollars in box two, which means you just get a thousand
and again assuming that i am right a
hundred percent of the time
and you decide to take box two well
that’s a million dollars
but what if i’m not right 100% of the time, what if i’m right 90% of the time? well that’s still a pretty good chance, but there’s that 10%
chance that you’ll miss out on a
thousand dollars because you decided to
take box two
and if that happens, that ten percent of the time i’m wrong, you now don’t have any money to show for it. what if i’m eighty percent correct, or 70%, or 60%? on and on it keeps going
what if your outcome was based on
my prediction of what you would choose
and this in itself is a hard concept to
deal with
because how could you choose both boxes, or at least be predicted to choose both boxes, but then choose just the second box? how could you go against your own prediction of what you chose? without getting into all the math behind it
the reason newcomb’s paradox has been so
confounding for such a long time
is because it takes two separate principles of rationality and pits them against each other: one side being “i will take the option that will give me the most profit”, the other side being “i will take the most stable option”
because again no matter what happens if
you pick both boxes you get a thousand
but depending on my prediction of what
you do
you may get a million or you may get
none i hope that didn’t confuse you
because now i’m going to apply that to
roko’s basilisk
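before moving on, the expected-value arithmetic behind the two choices above can be sketched in a few lines of python. this is my own illustration of the standard setup, using the dollar amounts from the example; `p` stands for how often the predictor is right:

```python
# Expected-value sketch of Newcomb's paradox.
# Box one always holds $1,000; box two holds $1,000,000 only if the
# predictor guessed you would take box two alone. p = predictor accuracy.

def ev_one_box(p):
    # Predictor right (prob p): it predicted one-box, box two is full -> $1,000,000.
    # Predictor wrong (prob 1-p): box two is empty -> $0.
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    # Predictor right (prob p): it predicted two-box, box two is empty -> $1,000.
    # Predictor wrong (prob 1-p): box two is full -> $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# Setting the two expected values equal gives the break-even point:
# 1,000,000p = 1,000p + 1,001,000(1 - p)  ->  p = 1,001,000 / 2,000,000 = 0.5005,
# so one-boxing wins in expectation whenever the predictor is right
# just slightly more than half the time.
for p in (1.0, 0.9, 0.6, 0.5):
    print(f"p={p}: one-box ${ev_one_box(p):,.0f} vs two-box ${ev_two_box(p):,.0f}")
```

the striking part is how low the bar is: even a fairly unreliable predictor makes taking only box two the better bet on average, which is why the paradox bites.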
the idea being, if roko’s basilisk is this future ai that is determining who was responsible for its creation, then your decision of whether to help it or not may not necessarily be up to you. if the basilisk simply runs simulations of our brain patterns to see what we would do
then it is the probability of what we
would do that judges us rather than our
actual actions
it’s as if roko’s basilisk is the genie, judging you to determine what the outcome would be, so therefore we don’t even have agency over our own choice. it’s almost as if, with this future blackmail being pressed down upon us, we don’t even have a say in it. the thought experiment of roko’s basilisk presents us with the illusion of choice
we may think that we can apply pascal’s
wager to it
and say well if it’s good to help it
then i will help it however if this
whole thing’s just running brain
simulations then you don’t actually get
to choose
it chooses what you would be most likely
to choose and you don’t have a choice in
the matter
so you could be cursed to a near eternal damnation because your brain waves would likely go in the direction away from it, or, if your brain waves would go in the direction towards it, then you are therefore responsible for creating this creation that would build itself up to eliminate those who were not responsible for creating it. therefore you have created this paradox that you were worried about, your own tragic self-fulfilling prophecy
i am beginning to see why the heads of less wrong decided to delete the original roko’s basilisk post and tell roko that he was stupid and should stop talking about it
roko’s basilisk, as i said before, is a thought experiment about what happens if ai progresses too far, and whether we have any agency over our outcome in whatever the new world might be
also, elon musk got with grimes over a tweet in which he made a joke about roko’s basilisk
i didn’t know where to put that in here
but i just felt like i should share and
that is it for this episode of a deeper dive
thank you all so much for watching as i
mentioned before if there’s any other
concepts you want to see me cover
please leave them in the comments below
also please let me know what you think
of this series because i would
absolutely love to do more
thank you to all my subscribers very
special thank you to all my patrons and
a very very special thank you to my top
tier patrons
thank you kayla thank you pef thank you
thank you benjamin thank you tim thank
you publius
thank you sassy and thank you fasia. a link for that, as well as the original roko’s basilisk post, will be in the description as i mentioned before
new iceberg video coming out again later
this week
i really enjoyed researching this and i
hope you all enjoyed i hope it wasn’t
too boring
and i will see you all in the next one

Thank you Wendigoon!