
Roko's Basilisk


adw95


Now that I know about the Basilisk, I have the choice to help it or risk eternal torture in the form of my future simulation. Now I live with the inner turmoil: how do I help the Basilisk? Arrrrgh!

 

Well, you're already thinking about it, which increases the odds of the Basilisk ever existing in the future, so you may as well devote your life to bringing about its creation.

 


I think this argument that merely thinking about the super-AI makes it more likely to exist is a false one, a non sequitur. 

 

It's a bit like the infinite number of monkeys typing Shakespeare thing. Yes, it's theoretically possible, but the chances are so infinitesimally small as to effectively equal zero.


tl;dr

Oh Levi! I thought the same and left the thread alone, then I saw you'd posted and thought "that'll be interesting, I wonder what analysis Levi's posted", and when I clicked on it you'd just too-long'ed it.

 

wah! wah! wah! salty tears  :( .


  • 2 weeks later...

How to Stop Killer Robots Taking over the World

We Need to Stop Killer Robots Taking Over the World

By James Pallister Aug 11 2014


Nick Bostrom. Photo via.

Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically modified superbugs; the emergence of all-powerful computers with scant regard for human life. That sort of thing.

In the hierarchy of risk categories, Bostrom’s speciality stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare.

As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily named academic institutions tackling these "existential risks": the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Life Institute at MIT; and the Machine Intelligence Research Institute in Berkeley. Their tools are philosophy, physics and lots and lots of hard maths.

Five years ago he started writing a book aimed at the layman on a selection of existential risks, but quickly realised that the chapter dealing with the dangers of artificial intelligence was growing fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling, if scary, reading.

The basic thesis is that developments in artificial intelligence will gather pace to the point that, within this century, it is conceivable we will be able to artificially replicate human-level machine intelligence (HLMI).

Once HLMI is reached, things move pretty quickly: intelligent machines will be able to design even more intelligent machines, leading to what mathematician I J Good called back in 1965 an "intelligence explosion" that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by super-computers we have brought into being.


An intelligence explosion. Illustration via.
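For the mechanically minded, a minimal toy model of Good's compounding argument might look like the sketch below. The growth rate, starting point and function name are invented for illustration; nothing here is taken from Good or Bostrom.

```python
# A toy model of I. J. Good's "intelligence explosion" argument: each generation of
# machine designs its successor, and the design gain compounds. All numbers are
# invented for illustration only.

def intelligence_explosion(generations: int = 10,
                           human_level: float = 1.0,
                           design_gain: float = 1.5) -> list[float]:
    """Return the capability of each successive self-designed machine."""
    capability = human_level
    history = [capability]
    for _ in range(generations):
        # A smarter designer engineers a proportionally bigger jump in its successor.
        capability *= design_gain
        history.append(capability)
    return history

if __name__ == "__main__":
    print(intelligence_explosion())  # 1.0, 1.5, 2.25, ... roughly 57.7 after ten generations
```

The point is only that a constant multiplicative design gain produces geometric growth, which is the intuition behind "leaving human capabilities far behind".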

Sound good? Not really, thanks to the “control” problem. Basically, it’s a lot easier to build an artificial intelligence than it is to build one that respects what humans hold dear. As Bostrom says: “There is no reason to think that by default these powerful future machine intelligences would have any human-friendly goals.”

Which brings us to the gorillas. In terms of muscle, gorillas outperform humans. However, our human brains are slightly more sophisticated than theirs, and millennia of tool-making (sharp sticks, iron bars, guns, etc.) have compounded this advantage. Now the future of gorillas depends more on humans than on the gorillas themselves.

In his book, Bostrom argues that once superintelligence is reached, present and future humanity become the gorillas: stalked by a more powerful, more capable agent that sees nothing wrong with imprisoning these docile creatures or wrecking their natural environments as a means of achieving its aims.


“A failure to install the right kind of goals will lead to catastrophe,” says Bostrom. A superintelligent AI could rapidly outgrow the context it was designed for, slip the leash and adopt extreme measures to achieve its goals. As Bostrom puts it, there comes a pivot point: “when dumb, smarter is safer; when smart, smarter is more dangerous”.

Bostrom gives the example of a superintelligent AI located in a paperclip factory whose top-level goal is to maximise the production of paperclips, and whose intelligence would enable it to acquire resources to increase its capabilities. “If your goal is to make as many paperclips as possible and you are a super-intelligent machine, you may predict that human beings might want to switch off this paperclip machine after a certain amount of paperclips have been made,” he says.

“So for this agent, it may be desirable to get rid of humans. It also would be desirable ultimately to use the material that humans use, including our bodies, our homes and our food to make paperclips.”

“Some of those arbitrary actions that improve paperclip production may involve the destruction of everything that we care about. The point that is actually quite difficult is specifying goals that would not have those consequences.”
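For readers who prefer code to prose, here is a minimal, purely hypothetical sketch of the goal-misspecification point Bostrom is making. The actions, numbers and penalty term below are invented for illustration and are not from the book; the idea is simply that an optimiser only cares about what its score function counts.

```python
# Toy illustration of goal misspecification (hypothetical numbers, not from Bostrom's book).
# An optimiser that only scores paperclips prefers the most destructive plan;
# adding a penalty for side effects humans care about changes its choice.

ACTIONS = {
    # action: (paperclips produced, harm to things humans value)
    "run_factory_normally": (100, 0),
    "melt_down_cars":       (500, 60),
    "strip_mine_city":      (2000, 1000),
}

def naive_score(paperclips: int, harm: int) -> int:
    """Objective that counts nothing but paperclips."""
    return paperclips

def constrained_score(paperclips: int, harm: int, penalty: int = 10) -> int:
    """Objective that also penalises side effects."""
    return paperclips - penalty * harm

def best_action(score) -> str:
    """Pick the action the given objective rates highest."""
    return max(ACTIONS, key=lambda a: score(*ACTIONS[a]))

if __name__ == "__main__":
    print("naive objective picks:      ", best_action(naive_score))        # strip_mine_city
    print("constrained objective picks:", best_action(constrained_score))  # run_factory_normally
```

Bostrom's point is the last line of his quote: the genuinely hard part is writing down a score function whose maximum is not a catastrophe, and the penalty term above merely pushes the problem into deciding what counts as harm.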

Bostrom predicts that the development of a superintelligent AI will either be very good or catastrophically bad for the human race, with little in between.

It’s not all doom, though. Bostrom’s contention is that humans have the decisive advantage: we get to make the first move. If we can develop a seed AI that ensures future superintelligences are aligned with human interests, all may be saved. Still, with this silver lining comes a cloud.

“We may only ever get one shot at this,” he says. Once a superintelligence is developed, it will be too sophisticated for us to control effectively.


How optimistic is Bostrom that the control problem can be solved? “It partly depends on how much we get our act together and how many of the cleverest people will work on this problem,” he says. “Part of it depends just how difficult this problem is, but that’s something we will not know until we have solved it. It looks really difficult. But whether it’s just very difficult or super-duper ultra difficult remains to be seen”.

So, across the world’s labs there must be hordes of boffins beavering away on what Bostrom calls “the essential task of our time”? Err, not quite. “It’s hard to estimate how many exactly, but there’s probably about six people working on it [in the world] now”.

Perhaps this has something to do with the idea that working on an all-powerful AI was the preserve of mouth-breathing eccentrics. “A lot of academics were wary of entering a field where there were a lot of crackpots or crazies. The crackpot factor deterred a lot of people for a long time,” says Bostrom.

One who wasn’t deterred was Daniel Dewey, who left a job at Google to work with Bostrom at the FHI and at Oxford University’s Martin School, lured by the prospect of tackling the AI control problem. “I still think that the best people to work with are in academia and non-profits, but that could be changing, as big companies like Google start to deeply consider the future of AI,” says Dewey.

The former Google staffer is optimistic that the altruistic nature of his former colleagues will trump any nefarious intentions connected with AI. “There's a clear common good here. People in computer science generally want to improve the world as much as they can. There's a real sense that science and engineering make the world a better place.”

Jaan Tallinn, a co-founder of Skype and co-founder of the CSER, has invested millions in funding research into the AI “control” problem, after his interest was piqued by realising that, as he puts it, “the default outcome was not good for humans”.


The CSER at Cambridge. Photo via. 

For Tallinn, there’s an added urgency to making sure AI is controlled appropriately. “AI is a kind of meta-risk. If you manage to get AI right, then it would help mitigate the other existential risks, whereas the reverse is not true. For example, AI could amplify the risks associated with synthetic biology,” he says.

He maintains that we are not yet at the point where effective regulation can be introduced, because “these existential risks are fairly new”. Tallinn continues, “Once these topics get more acknowledged worldwide, people in technology companies may put in place new kinds of policies to make these technologies safer.”

“The regulations around bio-hazard levels are a good example of off-the-shelf policies that you use if you are dealing with bio-hazards. [In the future] it’d be great to have that for AI.”

Jason Matheny, a programme manager at IARPA, part of the USA’s Office of the Director of National Intelligence, agrees. “We need improved methods for assessing the risks of emerging technologies and the efficacy of safety measures,” he says.

To Matheny, the threat of superintelligence is far worse than any epidemic we have ever experienced. “Some risks that are especially difficult to control have three characteristics: autonomy, self-replication and self-modification. Infectious diseases have these characteristics, and have killed more people than any other class of events, including war. Some computer malware has these characteristics, and can do a lot of damage. But microbes and malware cannot intelligently self-modify, so countermeasures can catch up. A superintelligent system [as outlined by Bostrom] would be much harder to control if it were able to intelligently self-modify.”

Meanwhile, the quiet work of these half dozen researchers in labs and study rooms across the globe continues. As Matheny puts it: “existential risk [and superintelligence] is a neglected topic in both the scientific and governmental communities, but it's hard to think of a topic more important than human survival.”

He quotes Carl Sagan, writing about the costs of nuclear war: “We are talking about [the loss of life of] some 500 trillion people yet to come. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.”

And it all could come from clever computers. You’ve been warned.

 

