Superintelligence Summary: Paths, Dangers, Strategies

4 min read ⌚

We have seen many movies about what could happen to the world if a superintelligent AI gets introduced.

Is it all fiction?

In our summary of “Superintelligence,” we give you all the reasons you should be hyped about the future, along with all the dangers that come with it.

Read on.

Who Should Read “Superintelligence”? And Why?

Nick Bostrom, the author of “Superintelligence,” suggests that artificial intelligence does bring the promise of a more prosperous, smarter and safer world. However, he argues, humanity may not succeed in bringing all AI’s promises to life.

The further you get into the book’s deconstruction of public opinion about AI, the more you will start to believe that people completely lack the capability and imagination to redesign the world from the human-led reality we know into one that superintelligent machines could dominate (and even threaten).

Furthermore, he explains not only the positive aspects and possibilities but also the concerns that come to mind when thinking of superintelligent AI agents.

We recommend this powerful, morally complex book to all futurists, inventors, students, high-tech enthusiasts, and policymakers.

About Nick Bostrom

Nick Bostrom is an author, a professor at Oxford University, and the founding director of the Future of Humanity Institute.

“Superintelligence Summary”

At Dartmouth College in the late spring of 1956, a gathering of researchers started charting another course for the world’s future.

They started out with the idea that machines could recreate parts of human intelligence.

As you can already notice all around you, their effort evolved: “expert systems” thrived in the 1980s, along with the promise of artificial intelligence.

However, advances then reached a plateau, and the funding subsided.

In the 1990s, “genetic algorithms” and “neural networks” pushed the idea to take off once again.

However, how do scientists measure the power of AI?

Well, for starters, by measuring how well specially designed machines play games such as chess, poker, Scrabble, Go, and Jeopardy. For instance, Bostrom estimates that a machine with good enough algorithms could beat the best human Go player within about ten years.

But, that is not all – games are just the beginning.

AI’s applications do not stop at games. They stretch out to listening devices, face and speech recognition, scheduling and planning, diagnostics, navigation, inventory management and a wide range of industrial robots.

It sounds nice, doesn’t it?

In spite of AI’s increasing fame and widening range of applications, indications of its limitations are also emerging.

For example, in the “Flash Crash” of 2010, algorithmic traders inadvertently triggered a downward spiral that cost the market about a trillion dollars within minutes.

However, we have to bear in mind that the innovation that caused the crisis in the first place was the one that ultimately helped to solve it.

In any case, the question remains: will AI’s growth curve follow the same pattern as the development of human intelligence?

As a matter of fact, AI’s evolution may follow several paths.

Researchers believe that one day AI will evolve into “superintelligence,” which would be a profoundly different sort of intelligence.

This brings up another question:

Would such superintelligence be able to produce human feelings? And if that is the case, how?

A superintelligence could take one of three forms.

First, “speed superintelligence” which could imitate human intelligence, but work more quickly.

Second, “collective superintelligence” which would be a network of subsystems that could autonomously take care of discrete issues that are a part of a larger undertaking.

The third is more ambiguously defined as “quality superintelligence.” It alludes to an AI of such high caliber that it is as superior to human intellect as humans are to, say, dolphins.

With respect to how quickly science could create such an intelligence, the answer depends on “optimization power” and “system recalcitrance,” that is, the system’s resistance to being improved.
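
Bostrom sums this up with a simple rate relation; the rendering below is a rough LaTeX sketch of the book’s formulation, where I stands for the system’s level of intelligence:

\[
\frac{dI}{dt} = \frac{\text{Optimization power}}{\text{Recalcitrance}}
\]

The more optimization effort that is applied, and the lower the system’s resistance to being improved, the faster its intelligence grows, which is how a low-recalcitrance system could set off the kind of “intelligence explosion” Bostrom warns about.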

Key Lessons from “Superintelligence”:

1. “Orthogonality”
2. AI Architecture and Scenarios
3. Moral Character

“Orthogonality”

Keep in mind that the character of superintelligence is not exactly human.

Do not get into fantasies about humanized AI. Although it may sound counterintuitive, the orthogonality thesis states that levels of intelligence do not correlate with final objectives.

In fact, more intelligence does not mean that the number of shared or collective objectives among different AIs will increase.

However, one thing is sure: an AI’s motivation will inevitably include some “instrumental goals,” such as achieving technological perfection.

AI Architecture and Scenarios

To study the different scenarios in which the world might function after the widespread introduction of superintelligence, just think of how new technologies affected the horse.

Not so long ago, carriages increased the horse’s usefulness, but when cars were introduced, they almost completely replaced it. As a result, horse populations rapidly declined.

If that is the case, what will happen to people when superintelligence replaces many of their abilities? Humans have property, capital, and political power, but many of those advantages may become unimportant when superintelligent AIs enter the scene.

Moral Character

Scientists have practical strategies that could help them develop a moral character inside an AI.

When we say moral character, it does not necessarily mean that these values will match those of people. Instead, think of a morality that will be unique to superintelligence.

Like this summary? We’d like to invite you to download our free 12min app for more amazing summaries and audiobooks.

“Superintelligence” Quotes

“There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments.”

“An artificial intelligence can be far less human-like in its motivations than a green scaly alien.”

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”

“Go-playing programs have been improving at a rate of about 1 dan/year. If this rate of improvement continues, they might beat the human world champion in about a decade.”

“There is a pivot point at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn.”

Our Critical Review

“Superintelligence” by Nick Bostrom is filled with insight and well-researched information, and offers plenty to contemplate. Readers deeply interested in technology and all of its what-ifs will find this book valuable and intriguing.
