Technological Singularity, defined

I’ve mentioned the technological singularity before; it’s time to define it.

From Wikipedia’s entry on technological singularity:

In futures studies, a technological singularity represents an “event horizon” in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete.

I. J. Good originally described the impact of superhuman intelligence as an “intelligence explosion”—a state in which small improvements in intelligence are used to develop larger improvements, which allow for even larger improvements, ad infinitum. Vernor Vinge referred to this event as the Singularity, and popularized it with lectures, essays, and science fiction in the 1980s.

Ray Kurzweil considers the advent of superhuman intelligence to be part of an overall exponential trend in human technological development seen originally in Moore’s Law and extrapolated into a general trend in Kurzweil’s own Law of Accelerating Returns. Unlike a hyperbolic function, Kurzweil’s predicted exponential model never experiences a true mathematical singularity.
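The distinction matters mathematically: an exponential stays finite at every finite time, whereas a hyperbolic trajectory diverges at a finite time. A textbook comparison (standard calculus, not notation from Kurzweil’s own writing):

```latex
% Exponential growth: x stays finite for every finite t, so there is
% no true mathematical singularity.
x(t) = x_0\, e^{kt} < \infty \quad \text{for all finite } t.

% Hyperbolic growth: the solution of dx/dt = a x^2 diverges at the
% finite time t_c, a genuine mathematical singularity.
\frac{dx}{dt} = a x^2
\quad \Longrightarrow \quad
x(t) = \frac{1}{a\,(t_c - t)}, \qquad t_c = t_0 + \frac{1}{a x_0}.
```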

While some regard the Singularity as a positive event and work to hasten its arrival, others view the Singularity as dangerous, undesirable, or unlikely to occur. The most practical means for initiating the Singularity are debated, as are how (or whether) the Singularity can be influenced or avoided if dangerous.

History and definitions

Though often thought to have originated in the last two decades of the 20th century, the idea of a technological singularity actually dates back to the 1950s:

“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

(Stanislaw Ulam, May 1958, referring to a conversation with John von Neumann)

This quote is sometimes taken out of context and attributed to von Neumann himself, likely due to von Neumann’s celebrity standing.

In 1965, statistician I. J. Good described a scenario closer to the contemporary meaning of the Singularity, in that it emphasized the effects of superhuman intelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

In his book “Mindsteps to the Cosmos” (HarperCollins, August 1983), Gerald S. Hawkins elucidated his notion of ‘mindsteps’: dramatic and irreversible changes to paradigms or world views. He identified five distinct mindsteps in human history, along with the technology that accompanied these “new world views”: the invention of imagery, writing, mathematics, printing, the telescope, rocket, computer, radio, TV… “Each one takes the collective mind closer to reality, one stage further along in its understanding of the relation of humans to the cosmos.” He noted: “The waiting period between the mindsteps is getting shorter. One can’t help noticing the acceleration.” Hawkins’ empirical ‘mindstep equation’ quantified this and gave dates for future mindsteps. The date of the next mindstep (5; the series begins at 0) is given as 2021, with two more successively closer mindsteps before the limit of the series in 2053. His speculations ventured beyond the technological:

“The mindsteps… appear to have certain things in common – a new and unfolding human perspective, related inventions in the area of memes and communications, and a long formulative waiting period before the next mindstep comes along. None of the mindsteps can be said to have been truly anticipated, and most were resisted at the early stages. In looking to the future we may equally be caught unawares. We may have to grapple with the presently inconceivable, with mind-stretching discoveries and concepts.”
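Hawkins’ mindstep equation itself isn’t reproduced above, but the stated endpoints (mindstep 5 in 2021, successively closer steps, a limit in 2053) behave like a geometric series. A toy reconstruction, with an assumed halving ratio chosen purely for illustration, not Hawkins’ fitted parameter:

```python
# Toy geometric-series model of Hawkins' converging mindstep dates.
# LIMIT and the 2021 date come from the text above; the halving ratio
# is an assumption for illustration, not Hawkins' actual equation.
LIMIT = 2053              # stated limit of the series
GAP_AT_5 = LIMIT - 2021   # gap remaining at mindstep 5
RATIO = 0.5               # assumed: each successive gap halves

for n in range(5, 10):
    gap = GAP_AT_5 * RATIO ** (n - 5)
    print(f"mindstep {n}: {LIMIT - gap:.0f}")
# -> 2021, 2037, 2045, 2049, 2051, ... converging on 2053
```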

The Singularity was greatly popularized by mathematician and novelist Vernor Vinge. Vinge began speaking on the Singularity in the 1980s and first addressed the topic in print in the January 1983 issue of Omni Magazine. He later collected his thoughts in the 1993 essay “The Coming Technological Singularity”, which contains the oft-quoted statement “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.”

Vinge writes that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans who created them could. “When greater-than-human intelligence drives progress,” Vinge writes, “that progress will be much more rapid.” This feedback loop of self-improving intelligence, he predicts, will compress enormous technological progress into a short period of time.


Creating superhuman intelligence

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence.

The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind transfer. Radical life extension techniques, cryonics, and molecular nanotechnology are often advocated by transhumanists as means to live long enough to benefit from future medical techniques, thus allowing for open-ended lifespans and participant evolution.

Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to directly initiate the Singularity, a choice the Singularity Institute addresses in its publication “Why Artificial Intelligence?” (2005).

George Dyson speculates in Darwin Among the Machines that a sufficiently complex computer network may produce “swarm intelligence”, and that improved future computing resources may allow AI researchers to create artificial neural networks so large and powerful they become generally intelligent. Mind uploading is a proposed alternative means of creating artificial intelligence: instead of programming a new intelligence, one copies an existing human intelligence into digital form.


Kurzweil’s law of accelerating returns

Main article: Law of Accelerating Returns

Kurzweil writes that, due to paradigm shifts, the trend of exponential growth holds true from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers.

Ray Kurzweil justifies his belief in an imminent singularity by an analysis of history from which he concludes that technological progress follows a pattern of exponential growth. He calls this conclusion The Law of Accelerating Returns. He generalizes Moore’s law, which describes exponential growth in integrated semiconductor complexity, to include technologies from far before the integrated circuit.
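For concreteness, here is the bare arithmetic of such an extrapolation, using familiar Moore’s-law figures as a stand-in (the 1971 baseline of ~2,300 transistors and the two-year doubling period are illustrative assumptions, not Kurzweil’s fitted parameters):

```python
# Exponential extrapolation of the kind generalized by the Law of
# Accelerating Returns, applied here to transistor counts per chip.
BASE_YEAR, BASE_COUNT = 1971, 2300   # Intel 4004-era starting point (assumed)
DOUBLING_YEARS = 2.0                 # classic Moore's-law doubling period

def transistors(year):
    """Projected transistor count assuming steady exponential doubling."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```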

Whenever technology approaches a barrier, he writes, new technologies will cross it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history” (Kurzweil 2001). Kurzweil believes the Singularity will occur before the end of the 21st century, setting the date at 2045 (Kurzweil 2005). His predictions differ from Vinge’s in that he predicts a gradual ascent to the Singularity, rather than Vinge’s rapidly self-improving superhuman intelligence. The distinction is often made with the terms soft and hard takeoff.

A logarithmic timeline showing an exponentially accelerating trend towards increasing frequency of major events (as chosen by Theodore Modis) in human and natural history. Some critics of Kurzweil’s theory dispute the choice of such specific events.

Before Kurzweil proposed his Law, many sociologists and anthropologists had created social theories of sociocultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski, declare technological progress to be the primary factor driving the development of human civilization. Morgan’s three major stages of social evolution can be divided by technological milestones. Instead of specific inventions, White decided that the measure by which to judge the evolution of culture is its control of energy, which he describes as “the primary function of culture.” His model eventually led to the creation of the Kardashev scale. Lenski takes a more modern approach and declares that the more information a given society has, the more advanced it is.

Since the late 1970s, others like Alvin Toffler (author of Future Shock), Daniel Bell, and John Naisbitt have developed theories of postindustrial societies that resemble visions of near- and post-Singularity societies. They argue the industrial era is coming to an end, and that services and information are supplanting industry and goods. Some more extreme visions of the postindustrial society, especially in fiction, envision the elimination of economic scarcity.

Theodore Modis and Jonathan Huebner have argued, from different perspectives, that the rate of technological innovation has not only ceased to rise, but is actually now declining. John Smart has criticized their conclusions. Others criticize Kurzweil’s choices of specific past events to support his theory.

In fact, the “technological singularity” is just one of several singularities detected through analysis of the World System’s development, for example with respect to world population, world GDP, and other economic indexes (e.g., Johansen, A., and D. Sornette. 2001. Finite-time Singularity in the Dynamics of the World Population and Economic Indices. Physica A 294(3–4): 465–502). It has been shown (e.g., Korotayev, A., Malkov, A., and Khaltourina, D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS, 2006) that the hyperbolic pattern of world population and technological growth, observed for many centuries if not millennia prior to the 1970s, can be accounted for by a rather simple mechanism: nonlinear second-order positive feedback, long known to generate precisely this kind of hyperbolic growth, also called the “blow-up regime” (implying finite-time singularities). Here the feedback runs: more people → more potential inventors → faster technological growth → faster growth of the Earth’s carrying capacity → faster population growth → more people, and so on. The same research shows, however, that since the 1970s the World System no longer develops hyperbolically; its development diverges more and more from the blow-up regime, and at present it is moving “away from singularity” rather than “toward singularity”.
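A minimal numerical sketch of that feedback loop (arbitrary units and illustrative constants, not the fitted model from the cited papers) reproduces the finite-time blow-up:

```python
# Second-order positive feedback: population and technology each raise
# the other's growth rate, producing hyperbolic ("blow-up") growth.
# All constants here are illustrative assumptions.
def simulate(a=0.02, b=0.02, dt=0.01, max_steps=6000):
    n, tech, t = 1.0, 1.0, 0.0   # population, technology, time (arbitrary units)
    for _ in range(max_steps):
        if n > 1e6:              # treat this threshold as "reaching" the singularity
            break
        coupling = n * tech * dt
        n += a * coupling        # more technology -> higher carrying capacity
        tech += b * coupling     # more people -> more potential inventors
        t += dt
    return t, n

t, n = simulate()
print(f"N exceeds 1e6 at t = {t:.1f} (the analytic singularity here is t = 50)")
```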


Desirability and safety of the Singularity

Some speculate superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests AIs may simply eliminate the human race, and humans would be powerless to stop them. Other oft-cited dangers include molecular nanotechnology and genetic engineering. These threats are major issues for both Singularity advocates and critics, and were the subject of a Wired Magazine article by Bill Joy, Why the future doesn’t need us (2000). Oxford philosopher Nick Bostrom summarizes the potential threats of the Singularity to human survival in his essay Existential Risks (2002).

Many Singularitarians consider nanotechnology to be one of the greatest dangers facing humanity. For this reason, they often believe seed AI should precede nanotechnology. Others, such as the Foresight Institute, advocate efforts to create molecular nanotechnology, claiming nanotechnology can be made safe for pre-Singularity use or can expedite the arrival of a beneficial Singularity.

Advocates of Friendly Artificial Intelligence acknowledge the Singularity is potentially very dangerous and work to make it safer by creating AI that will act benevolently towards humans and eliminate existential risks. This idea is also embodied in Isaac Asimov’s Three Laws of Robotics, intended to prevent artificially intelligent robots from harming humans, though the crux of Asimov’s stories is often how the laws fail.


Neo-Luddite views

Some argue advanced technologies are simply too dangerous for humans to morally allow them to be built, and advocate efforts to stop their invention. Theodore Kaczynski, the Unabomber, writes that technology may enable the upper classes of society to “simply decide to exterminate the mass of humanity.” Alternatively, if AI is not created, Kaczynski argues that humans “will have been reduced to the status of domestic animals” after sufficient technological progress. Portions of Kaczynski’s writings have been included in both Bill Joy’s article and in a recent book by Ray Kurzweil. However, Kaczynski not only opposes the Singularity but also supports neo-Luddism more broadly; many people oppose the Singularity without opposing present-day technology as Luddites do.

Along with Kaczynski, many other anti-civilization theorists, such as John Zerzan and Derrick Jensen, represent the school of anarcho-primitivism or eco-anarchism, which sees the rise of the technological singularity as an orgy of machine control and a loss of a feral, wild, and uncompromisingly free existence outside the factory of domestication (civilization). In essence, environmental groups such as the Earth Liberation Front and Earth First! see the singularity as a force to be resisted at all costs. Author and social change strategist James John Bell has written articles for Earth First! as well as mainstream science and technology publications like The Futurist, providing a cautionary environmentalist perspective on the singularity, including his essays Exploring The “Singularity” and Technotopia and the Death of Nature: Clones, Supercomputers, and Robots. The publication Green Anarchy, to which Kaczynski and Zerzan are regular contributors, has also published articles about resistance to the technological singularity, e.g. A Singular Rapture, written by MOSH (a reference to Kurzweil’s M.O.S.H., “Mostly Original Substrate Human”).

Just as the Luddites opposed artifacts of the industrial revolution out of concern for their effects on employment, some opponents of the Singularity are concerned about future employment opportunities. Although Luddite fears about jobs were not realised, given the growth in jobs after the industrial revolution, there was an effect on involuntary employment: a dramatic decrease in child labor and in work by the elderly. It can be argued that only a drop in voluntary employment should be of concern, not a reduced level of absolute employment (a position held by Henry Hazlitt). Economically, a post-Singularity society would likely have more wealth than a pre-Singularity society, thanks to increased knowledge of how to manipulate matter and energy to meet human needs. One possible post-Singularity future, therefore, is one in which per capita wealth increases dramatically while per capita employment decreases.


Singularity predictions just a by-product of memory compression?

Jürgen Schmidhuber calls the Singularity Omega, referring to Teilhard de Chardin’s Omega point (1916). For Omega = 2040, he says the series Omega − 2^n human lifetimes (n < 10; one lifetime = 80 years) roughly matches the most important events in human history. But he also questions the validity of such lists, suggesting they just reflect a general rule for “both the individual memory of single humans and the collective memory of entire societies and their history books: constant amounts of memory space get allocated to exponentially larger, adjacent time intervals further and further into the past.” He suggests that this may be the reason “why there has never been a shortage of prophets predicting that the end is near – the important events according to one’s own view of the past always seem to accelerate exponentially.”
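Taking the formula at face value, the series is easy to tabulate (a direct transcription of the parameters as stated above; which historical events the dates “match” is Schmidhuber’s claim, not checked here):

```python
# Dates in Schmidhuber's series Omega - 2^n human lifetimes,
# with Omega = 2040, one lifetime = 80 years, and n < 10,
# exactly as the parameters are given in the text.
OMEGA, LIFETIME = 2040, 80

for n in range(10):
    year = OMEGA - LIFETIME * 2 ** n
    print(f"n = {n}: year {year}")
# -> 1960, 1880, 1720, 1400, 760, -520, -3080, -8200, -18440, -38920
```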
