Here are comments on Vinge's Singularity by Gregory Benford, David Brin, Damien Broderick, Nick Bostrom, Alexander Chislenko, Robin Hanson, Peter C. McCluskey, Max More, and Michael Nielsen.


Comment by Gregory Benford

On the Singularity I have but one comment: most of humanity won't take part, for every singularity can be made non-singular by a simple resistive term...and there are always such. If a small segment takes off beyond view, they will still need to protect their physical well-being -- necessities, etc. -- against the slings and arrows of outraged humanity (and there are always such; envy is eternal). So this juncture will provide the real working surface for change...those in the Singularity will be beyond view, anyway.

By "resistive term" I mean that nothing works perfectly and the separation of a fraction will be interrupted by glitches, failures of power/support. And if a fraction proceeds beyond view, it won't affect the bulk of humanity, who will barely be aware anyway. Plus, those who pass through will still have to fight the tough laws of the universe. They won't fly faster than light, exploring the galaxy, etc. But conceptually they can go beyond our horizon and if so, more power to them. But commanding PHYSICAL power will again depend upon interfacing with the rest of us.


Comment by David Brin

Singularities

Vernor Vinge's 'singularity' is a worthy contribution to the long tradition of contemplations about human transcendence. Throughout history, most of these musings have dwelled upon the spiritual -- the notion that human beings can achieve a higher state through prayer, moral behavior, or mental discipline. In the last century, an intellectual tradition that might be called 'techno-transcendentalism' has added a fourth track: the notion that a new level of existence, or a more appealing state of being, might be achieved by means of knowledge and skill.

Sometimes, techno-transcendentalism has focused on a specific branch of science, upon which the adherents pin their hopes. Marxists and Freudians created complex models of human society or mind, and predicted that rational application of these rules would result in a higher level of general happiness.

At several points, eugenics has captivated certain groups with the allure of improving the human animal. This dream has lately been revived with the promise of genetic engineering.

Enthusiasts for nuclear power in the 1950s promised energy too cheap to meter. Some of the same passion was seen in the enthusiasm for space colonies in the 1970s and 80s, and in today's cyber-transcendentalism, which appears to promise ultimate freedom and privacy for everyone, if only we start encrypting every internet message and use anonymity online to perfectly mask the frail beings who are actually typing at a real keyboard.

This long tradition -- of bright people pouring faith and enthusiasm into transcendental dreams -- tells us a lot about one aspect of our nature that crosses all cultures and all centuries. Quite often it comes accompanied by a kind of contempt for contemporary society -- a belief that some kind of salvation can be achieved outside of the normal cultural network ... a network that is often unkind to bright philosophers, or nerds.

We need to keep this long history in mind as we discuss the latest phase -- a belief that the geometric, or possibly exponential, increase in the ability of our machines to make calculations will result in an equally profound magnification of our knowledge and power.

Having said all of the above, let me hasten to add that I believe in the high likelihood of a coming singularity! The alternative is simply too awful to accept. The means of mass destruction, from A-bombs to germ warfare, are 'democratizing' -- spreading so rapidly among nations, groups, and individuals -- that we had better see a rapid expansion in sanity and wisdom, or else we're all doomed. Indeed, strong evidence indicates that the overall education and sagacity of Western civilization and its constituent citizenry have never been higher, and may continue to improve rapidly in the coming century. One thing is certain: we will not see a future that resembles Blade Runner, or any other cyberpunk dystopia. Such worlds, where massive technology is unmatched by improved accountability, will not be able to sustain themselves. The options before us appear to be limited:

  1. Achieve some form of 'singularity' -- or at least a phase shift, to a higher and more knowledgeable society (one that may have problems of its own that we can't imagine.)
  2. Self-destruction
  3. Retreat into some form of more traditional human society, one that discourages the sorts of extravagant exploration that might lead to results 1 or 2.

In fact, when you look at our present culture from a historical perspective, it is already profoundly anomalous in its emphasis upon individualism, progress, and above all, suspicion of authority. These themes were actively and vigorously repressed in a vast majority of human cultures because they threatened the stable equilibrium upon which ruling classes depended.

Although we are proud of the resulting society -- one that encourages eccentricity, appreciation of diversity, social mobility, and scientific progress -- we have no right, as yet, to claim that this new way of doing things is sane or obvious. Many in other parts of the world consider westerners to be quite mad, and only time will tell who is right about that. Certainly if current trends continue -- if, for instance, we take the suspicion-of-authority ethos to its extreme, and start paranoically mistrusting even our best institutions -- it is quite possible that western civilization might fly apart before ever achieving its vaunted aims. Certainly, a singularity cannot happen if only centrifugal forces operate, and there are no compensating centripetal virtues to keep us together as a society of mutually respectful common citizens.

Above all (as I point out in The Transparent Society) our greatest innovations -- science, justice, democracy and free markets -- all depend upon the mutual accountability that comes from open flows of information.

But what if we do stay on course, and achieve something like Vernor's singularity? There is plenty of room to argue over what TYPE would be beneficial or even desirable. For instance, if organic humans are destined to be replaced by artificial beings, vastly more capable than us souped-up apes, can we design those successors to think of themselves as human? Or will we simply become obsolete?

Some people remain big fans of Teilhard de Chardin's apotheosis -- the notion that we will all combine into a single macro-entity, almost literally godlike in its knowledge and perception. Tipler speaks of such a destiny in his book The Physics of Immortality, and Isaac Asimov offers a similar prescription as mankind's long-range goal in Foundation's Edge. I have never found this notion particularly appealing -- at least in the standard version, in which the macro-being simply subsumes all individuals within it and proceeds to think just one thought at a time. In Earth, I talk about a variation on this theme that might be more palatable, in which we all remain individual while at the same time contributing to a new layer of planetary consciousness -- in other words, we get to both have our cake and eat it too. At the opposite extreme, in the new 'Foundation' novel that I am currently writing as a sequel to Asimov's famous novels, I make more explicit what Isaac has been painting all along -- the image that conservative robots who fear human transcendence might actively work to prevent a human singularity for thousands of years, fearing that it would bring us harm.

In any event, it is a fascinating notion, and one that can be rather frustrating at times. A good parent wants the best for his or her children, and for them to be better. And yet, it can be poignant to imagine them -- or perhaps their grandchildren -- living almost like gods, with omniscient knowledge and perception, and near immortality.

But when has human existence been anything but poignant? All of our speculations and musings today may seem amusing and naive, to those descendants. But I hope they will also experience moments of respect. They may even pause and realize that we were really pretty good for souped-up cavemen.

© David Brin, 1998. Reprint only with permission.


Comment by Damien Broderick

`Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.' (Vernor Vinge, NASA VISION-21 Symposium, 1993)

Around 2050, or maybe as early as 2020, is when Dr Vernor Vinge's technological Singularity is expected to erupt, in the considered opinion of a number of scientists. Call such an event `the Spike', because technology's exponential curve resembles a spike on a graph of progress against time. Of course, it's a profoundly suspect suggestion. We've heard this sort of thing prophesied before, in literally Apocalyptic religious revelations of End Time and Rapture. It's a pity the timing coincides fairly closely with the dates proposed by the superstitious.

What Vinge means is just a barrier to confident anticipation of future technologies. Despite the millennial coincidence, estimates of when the Spike is due, and even of its slope and the time remaining before that slope carries us upward faster than we can foresee, remain elusive. We will not find ourselves hurled headlong into the Singularity in the next few years, or even the next decade. The curve steepens only later, even if that runaway surge is something that many of us might expect to see in our lifetimes. And our lifetimes could turn out to be far longer than we currently expect.

For now, what is required of us is not reverence but hard thinking and public dialogue. It is a nice paradox. If we postpone our analysis of the path to the Spike, on the understandable grounds that it's too frightening, or too silly, we'll lose the chance to keep our hands on the reins. (And see how the old, passé metaphors remain in charge of our thoughts? Hands on the reins?)

To date, many of the best minds of the human race have been devoted to short-term goals suitable for short-lived people in a volatile, hungry, dangerous world. That will change. Perhaps none of the complex lessons of our long history will have the slightest bearing on our conduct, will offer us any good guidance, in the strange days after the middle of the 21st century and beyond. Except, perhaps, the austere rules of game-theory, and the remote laws of physics.

Many will deplore this bleak obliteration of the wisdom of the past. `Love one another!' is, after all, part of our deepest behavioral grammar, inherited from three million years on the plains of Africa. So too is its hateful complement: `Fear the stranger! Guard the food! Kill the foe!' The Spike might resolve these ancient dilemmas by rendering some of them pointless - why bother to steal another's goods when you can make your own in a matter-compiler? - and others remediable - why shiver in fear of disease and sexism and death, when you can re-write your DNA codes, bypass mortality, switch gender from the chromosomes up, guided by wonderful augmented minds?

Or is this, after all, no better than cargo-cult delusion, the ultimate mistaken reliance for salvation upon some God-in-the-Machine?

Anders Sandberg, one of the few people in the world to think about transhumanist issues in any depth, recently lamented the way our minds tend to cave in when faced by the possibility of a Spike. The concept is so immense: `it fits in too well with our memetic receptors!' He tried to unpick the varieties of singularity that have been proposed to date.

One is the Transcension, an approximation to the Parousia where we become more than human, changing by augmentation or uploading into something completely different and unknowable.

A second is the Inflexion Point, the place where the upward scream of the huge sigmoid curve of progress tips over, slowing, and starts to ebb.

The third is the Wall, or Prediction Horizon, the date after which we can't predict or understand much of what is going on because it simply gets too weird. While this last version somewhat resembles the first, that is just a side-effect of our ignorance. It does not imply, as Transcension can, that with the Spike we enter a realm of spacetime engineering, creation of budded universes, and wholesale re-writing of the laws of physics: what Australian transhumanist Mitchell Porter calls `the Vastening'.

For all its appeal - precisely because of its crypto-mystical, pseudo-religious appeal - the Transcension is, Anders Sandberg suggests,

"the most dangerous of the three versions, since it is the most overwhelming. Many discussions just close with, `But we cannot predict anything about the post singularity world!', ending all further inquiry just as Christians and other religious believers do with, `It is the Will of God'. And it is all too easy to give the Transcension eschatological overtones, seeing it as Destiny. This also promotes a feeling of helplessness in many, who see it as all-powerful and inevitable."

The collective mind we call science - fallible, contestatory, driven by ferocious passions and patient effort - promises to deliver us what religions and mythologies have only preached as a distant, impalpable reward (or punishment) to be located in another world entirely.

Yet many people, to my amazement, denounce the idea that we might live indefinitely extended and ceaselessly revised lives in a post-Spike milieu.

Where is the sweetness of life, they ask, without the stings, pangs and agonies of its loss? Life is the bright left hand of death's darkness. No yin without yang, and so forth. I have some sympathy for this suspicion (everyone knows, for example, that well-earned hunger makes the finest sauce to a meal), but not much. I'm not persuaded that simple dualistic contrasts and oppositions are the most useful way to analyse the world, let alone to form the basis for morality.

Does freedom require the presence of a slave underclass? Are we only happy in our health because someone else - or we, ourselves, in the future - might die in agony from cancer? Let's hope not! I must state bluntly that this line of thinking smacks to me far too sordidly of a doctrine of cowed consolation, the kind of warrant muttered by prisoners with no prospect of eluding a cruel and unyielding captor, and with no taste for daring an escape bid.

It is a compliant slave's self-defeating question to ask: what would we do with our freedom? The answer can only be: whatever you wish. Yes, freedom from imposed mortality will be wasted by some, life's rich spirit spilled into the sand, just as the gift of our current meagre span is wasted and spoiled by all too many in squabbles, fatuous diversions, bored routine, numbing habits and addictions of a dozen kinds.

Others, bent by the torment of choice and liberty, will throw it away in terror, taking their own lives rather than face the echoing void of open endlessness. That would be their choice, one that must be respected (however much we might regret it). For the rest of us, I think, there will be a slow dawning and awakening of expectations. People of exceptional gifts will snatch greedily and thankfully at the chance to grow, learn, suck life dry as never before. But so too, surely, will the ordinary rest of us.

With a span limited to a single century, a quarter devoted to learning the basics of being a human and another quarter, or even more, lost in failing health, it's little wonder that we constrict our horizons, close our eyes against the falling blows of time. Even now, of course, any one of us could learn in middle age to play the piano or violin, or master a new language, or study the astoundingly elegant mathematics we missed in school, but few manage the resolve. Besides, to make such efforts would be regarded by our friends as futile, derided as comic evidence of `mid-life crisis'.

In a world of endless possibilities, however, where our mental and physical powers do not routinely deteriorate, opportunities to expand our skills and our engagement with other people, the natural world, history itself, will challenge all of us, including the most ordinary of citizens. Although it strains credulity right now, I believe one of the great diversions for many people in the endless future will be the unfolding tapestry of science itself (as well as the classic arts, and altogether new means of expression).

Enhanced by our machines, we will embrace the aesthetic grandeur of systematic knowledge tested against stringent criticism, sought for its own soul-filling sake - and embrace as well knowledge as virtuoso technique, as the lever of power, enabling each of us to become immersed, if only as an informed spectator, in the enterprise of discovery. And there might be no limits to what we can discover, and do, on the far side of the Spike.

 

Damien Broderick's book about the Singularity is The Spike: Accelerating Into The Unimaginable Future (Australia: New Holland Books, 1997). His forthcoming book about drastic life extension and the impact of science on society is The Last Mortal Generation (Australia: New Holland Books, 1999).


Comment by Nick Bostrom

Singularity and Predictability

I find myself to be in close agreement with much of what Vinge has said about the singularity. Like Vinge, I do not regard the singularity as being a certainty, just one of the more likely scenarios.

1.
"The singularity" has been taken to mean different things by different authors, and sometimes by the same author on different occasions. There are at least three clearly distinct theoretical entities that might be refered to by this term:

Vinge seems to believe in the conjunction of these three claims, but it's not clear whether they are all part of the definition of "the singularity". For example, if Verticality and Superintelligence turn out to be true but Unpredictability fails, is it then the case that there was no singularity? Or was there a singularity but Vinge was mistaken about its nature? This is a purely verbal question, but one it would be useful to be clear about so we are not talking past one another.

2.
Two potential technologies stand out from the others in terms of their importance: superintelligence and nanotechnology (i.e. "mature molecular manufacturing" which includes a nearly general assembler).

It is an open question which of these will be developed first. I believe that either would fairly soon lead to the other. A superintelligence, it seems, should be able to quickly develop nanotechnology. Conversely, if we have nanotechnology then it should be easy to get greater-than-human-equivalent hardware. Given the hardware, we could either use ab initio methods, such as neural networks and genetic algorithms or (less likely) classical AI; or we could upload outstanding human brains and run them at an accelerated clock-rate. My intuition is that given adequate hardware, ab initio methods would succeed within a fairly short time (perhaps as little as a few years). There is no way of being sure of that, however. If the software problem turns out to be very hard then there are two possibilities: If uploading is feasible at this stage (as it would presumably be if there is nanotechnology) then the first (weak) superintelligences might be uploads. If, on the other hand, uploads cannot be created, then the singularity will be delayed until the software problem has been solved or uploading becomes possible.

If there is nanotechnology but not superintelligence then I would expect technological progress to be very fast by today's standards. Yet, I don't think there would be a singularity, since the design problems involved in creating complex machines or medical instruments would be huge. It would presumably take several years before most things were made by nanotechnology.

If there is superintelligence but not nanotechnology, there might still be a singularity. Some of the most radical possibilities would presumably then not happen -- for example, the Earth would not be transformed into a giant computer in a few days -- but important features of the world could nonetheless be radically different in a very short time, especially if we think that subjective experiences and intelligent processing are themselves among these important features.

3.
Even if Verticality and Superintelligence are both true, I am not at all sure that Unpredictability would hold. I think it is unfortunate that some people have made Unpredictability a defining feature of "the singularity". It really does tend to create a mental block.

Note that Unpredictability is a very strong claim. Not only does it say that we don't know much about the post-singularity world but also that such knowledge cannot be had. If we were convinced of this radical skepticism, we would have no reason to try to make the singularity happen in a desirable way, since we would have no means of anticipating what the consequences of present actions would be on the post-singularity world.

I think there are some things that we can predict with a reasonable degree of confidence beyond the singularity. For example, that the superintelligent entity resulting from the singularity would start a spherical colonization wave that would propagate into space at a substantial fraction of the speed of light. (This means that I disagree with Vinge's suggestion that we explain the Fermi paradox by assuming that "these outer civilizations are so weird there's no way to interact with them".) Another example is that if there are multiple independent competing agents (which I suspect there might not be) then we would expect some aspects of their behaviour to be predictable from considerations of economic rationality.

It might also be possible to predict things in much greater detail. Since the superintelligences or posthumans that will govern the post-singularity world will be created by us, or might even be us, it seems that we should be in a position to influence what values they will have. What their values are will then determine what the world will look like, since due to their advanced technology they will have a great ability to make the world conform to their values and desires. So one could argue that all we have to do in order to predict what will happen after the singularity is to figure out what values the people will have who will create the superintelligence.

It is possible that things could go wrong and that the superintelligence we create accidentally gets to have unintended values, but that presupposes that a serious mistake is made. If no such mistake is made then the superintelligence will have the values of its creators. (And of course it would not change its most fundamental values; it is a mistake to think that as soon as a being is sufficiently intelligent it will "revolt" and decide to pursue some "worthy" goal. Such a tendency is at most a law of human psychology (and even that is by no means clear); but a superintelligence would not necessarily have a human psychology. There is nothing implausible about a superintelligence that sees it as its sole purpose to transform the matter of the universe into the greatest possible number of park benches -- except that it is hard to imagine why somebody would choose to build such a superintelligence.)

So maybe we can define a fairly small number of hypotheses about what the post-singularity world will be like. Each of these hypotheses would correspond to a plausible value. The plausible values are those that it seems fairly probable that many of the most influential people will have at about the time when the first superintelligence is created. Each of these values defines an attractor, i.e. a state of the world which contains the maximal amount of positive utility according to the value in question. We can then make the prediction that the world is likely to settle into one of these attractors. More specifically, we would expect that within the volume of space that has been colonized, matter would gradually (but perhaps very quickly) be arranged into value-maximal structures -- structures that contain as much of the chosen value as possible.

I call this the track hypothesis because it says that the future contains a number of "tracks" -- possible courses of development such that once actual events take on one of these courses they will inexorably approach and terminate in the goal state (the attractor) given by the corresponding value.

Even if we knew that the track hypothesis is true and which value will actually become dominant, it's still possible that we wouldn't know much about what the future will be like. It might be hard to compute what physical structure would in fact maximize the given value. (What would the universe have to look like in order to maximize the total amount of pleasure?). Moreover, even if we have some abstract knowledge of the future, we might still be incapable of intuitively understanding what it will be like to be living in this future. (What would the hyper-blissful experiences feel like that would exist if the universe were pleasure-maximized?) This is one sense in which posthumans could be as incomprehensible to us as we are to goldfish. But inscrutability in this sense by no means precludes that unaugmented humans could be able to usefully predict many aspects of the behaviour and cognitive processes of posthumans.


Comment by Alexander Chislenko

Singularity as a process, and future beyond

I first encountered the notion of a dramatic turning point in the future sometime in the early 1970s, when my father Leonid Chislenko, a theoretical biologist and a philosopher, shared some of his "General Theory of Everything" with me. I remember him drawing a simple graph of the rate of the evolution of complex systems in the Universe, from the formation of galaxies, stars, and planets, to the appearance of various life forms, to early societies, to the recent technological revolution. On the logarithmic scale, the complexity doubling periods resided on a single straight line that laconically touched the X axis at the 2030 mark. That little demonstration was one of the most exciting moments in my studies of the world, and greatly influenced my further interests.
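
As an illustrative reconstruction of the arithmetic behind such a graph (the numbers below are invented purely so that the limit lands near 2030; they are not my father's data): if each complexity doubling takes a fixed fraction of the time of the previous one, the doubling dates converge on a finite year rather than receding forever.

```python
# Toy sketch: geometrically shrinking doubling periods accumulate to a
# finite horizon year. All numbers here are hypothetical illustrations.

def accumulation_year(first_doubling_years, shrink_factor, start_year):
    """Limit year of a cascade of ever-shorter doubling periods.

    The periods form a geometric series:
    first + first*r + first*r**2 + ... = first / (1 - r).
    """
    return start_year + first_doubling_years / (1.0 - shrink_factor)

# E.g. a 40-year doubling starting in 1970, with each subsequent doubling
# taking one third as long as the last, piles up against the year 2030.
print(accumulation_year(40.0, 1.0 / 3.0, 1970))   # -> 2030.0
```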

My favorite factors to watch have changed several times over the last 25 years, and so has my expectation of the ultimate goals of the process. Actually, I no longer believe that we are going to witness a single event/point after which development would speed up without limit, or dramatically and unpredictably break.

Every prediction has some observer estimating values of selected system parameters for a certain time ahead. While current widely published predictions of population size, weather, sports results, or oil prices may be somewhat accurate, these indicators no longer reflect the progress of modern civilization. This development is determined by an intricate interplay of semantic patterns and techno-social architectures, of whose state nobody has a relevant model. Most people have never even attempted to understand the transformation trends of this system, while those who tried caught just sparse glimpses of relevant trends. How long do we have to wait to claim that the system's behavior is unpredictable? For most people, this Singularity point has already arrived, and the future mix of an ever greater number of ever smarter people and machines will look to them about equally confusing, and just change faster (or slower, depending on which irrelevant parameters they may choose to track).

Our ideas of important growth factors have changed during recent history. First, we looked at the population explosion that for some time obeyed the hyperbolic "law". Of course, it was no more of a law than Moore's observation of the pace of computer technology development. Simple projection of ongoing trends, without analysis of mechanisms, is not a law; it's just a mental exercise. Qualitative understanding of the development process seems a lot more important than the results of such projections (though I do remember calculating with my son how soon he would reach the size of an average giraffe if he continued doubling his height every 5 years). After population growth slowed down, people started plotting energy consumption graphs. It slowed down as well. Then it was expansion into space. Before it had time to slow down, we turned our attention to computer memory and speed.
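
For what it is worth, the joke calculation is easy to reproduce; the sketch below uses invented numbers and exists only to show how blind a "doubling law" projection is to mechanism.

```python
# Naive trend projection, giraffe edition (all values hypothetical).
height_m, giraffe_m, years = 1.0, 5.5, 0
while height_m < giraffe_m:
    height_m *= 2        # the "law": height doubles every 5 years
    years += 5
print(f"Giraffe height reached in about {years} years")   # ~15 years
```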

It seems quite natural that at every point we pay most attention to the parameters that are just about to peak, as they represent the hottest current trends. When we look back in the excitement of the moment, the whole history of the world looks like a preparation to the explosion of this particular parameter. And we completely forget that yesterday we were looking at something else with the same excitement...

I expect that we may soon lose interest in the current factors as well, and start looking at a new set of criteria, for example, some non-numeric indicators of cognitive strength, architectural stability, and dynamic potential of the global technosocium.

Surely, human population, energy, space, and computing power are, at this stage, useful for further progress. However, their influence on development is difficult to describe with a simple formula. Certain problems require threshold levels of some of these resources, and then they are just solved (e.g., I expect communication between humans to saturate as the price of delivering full human perception bandwidth becomes negligible); others so far exhibit increasing returns; still others have already ground to a halt (accuracy of weather forecasts). So far, the utility of additional physical and computational resources has been determined by the fact that they compensated for old-time human deficiencies and allowed utilization of existing resources that were relatively huge both in terms of physical size and complexity. When the first benefits of augmentation of natural human skills get exploited, and the dimensions of artificial systems first approach, and then exceed, those of the biological world, the nature of the development process will shift from exploitation of the accumulated natural treasures to self-engineering, which may produce different dynamic patterns and require attention to different factors.

The increase in crude resources doesn't guarantee unbounded progress. If we can model the intelligence of a small insect now, and 40 years from now have a million times more memory and a million times greater processing speed, all we'll get is a lot of very fast insects. Further qualitative progress would require better design methods, which have to be either designed by slow humans or automatically evolved. Genetic algorithms hold great promise, but in many cases their utility will be curbed by the slow rate of evolutionary cycles (e.g., spaceship control systems) or high safety requirements (control software for nuclear power plants, national defense, or heart transplants). The development of truly intelligent computer systems may be a stumbling block on the way to the most dramatic future advances, so we had better start developing the necessary technologies today.

I do not think the dramatic changes will happen at a single point, though there will be periods of increasingly fast growth in many areas that will keep running ahead of the analytical and participatory abilities of non-augmented humans (though research conglomerates may have increasingly accurate models of the whole system, so that the system, through them, will understand and be able to predict its own behavior better than any of its predecessors). Even with increasing control over space, time, matter, and complexity, we may meet some natural limits to growth. These limits may not curb all progress, but, like previous limitations, they may bring new turns into the development process that we may currently find difficult to foresee.

Still, we can probably already imagine some of the features of the post-human world. Increasing independence of functional systems from the physical substrate, continued growth in the architectural liquidity of systems, low transaction costs, and secure, semantically rich communication mechanisms will result in the dissolution of large quasi-static systems and the obsolescence of the idea of structural identity. The development arena may belong to "teleological threads" -- chains of systems creating each other for sequences of purposes, executing them through temporary custom assemblages of small function-specific units.

Whether a point transition or a rapid process, the Singularity still seems difficult to come to grips with and, hence, quite threatening. The fast succession of dramatic transformations promises to destroy or invalidate most of the things one identifies with. Modification of familiar objects beyond recognition, even through improvement, looks much like destruction; it's "death forward", as I call it. This is a subjective problem, and it requires a subjective solution. The solution is actually quite simple: junk the outdated concept of static identity, and think of yourself as a process.

Of course, we can build all kinds of wild scenarios of the future beyond the human Concept Horizon (a notion I prefer to Singularity, as it implies a reference to an observer, a multitude of possible points of entry, and an ever-elusive position). However, since many of the technologies that we possess and are developing today will influence the construction of the future world, it is our historical duty to create tools enabling the future we would like to see. My ideal transcendent world would look like a free-market Society of Mind, with a diverse set of non-coercively coordinated goals and development tools, and sophisticated balancing mechanisms and safeguards.

If we want to see such a world rather than a lop-sided and fragile super-Borg, we need to foster research and experimental work on complex multi-agent systems, game and contract theory, cognitive architectures, alternative social formations, semantically rich, secure communication mechanisms, post-identity theory of personhood, and other areas of science, technology, and personal psychology that may influence the development of a positive future at this sensitive formative stage. We would also need to popularize these approaches, to demonstrate to other people that these scenarios are both feasible and desirable.

My Hypereconomy project represents one such attempt. It is a model of an integrated cognitive/economic system in which independent agents exchange value representations together with non-scalar aggregate indicators of the situational utilities of different objects, and mixed value/knowledge derivatives, with meta-agents performing coordinating functions and deciding issues of knowledge ownership on a non-coercive basis. I would be happy to see more people working on such projects.


Comment by Robin Hanson

Some Skepticism

"Since … an ultra intelligent machine could design even better machines; there would then unquestionably be an `intelligence explosion' … more probable than not … within the 20th century." I.J.Good, 1965

Vernor Vinge's 1993 elaboration on I.J.Good's reasoning has captured many imaginations. Vinge says that probably by 2030 and occurring "faster than any technical revolution seen so far", perhaps "in a month or two", "it may seem as if our artifacts as a whole had suddenly wakened." While we might understand a future of "normal progress," or even where we "crank up the clock speed on a human-equivalent mind," the "singularity," in contrast, creates "an opaque wall across the future," beyond which "things are completely unknowable."

Why? Because humans are to post-humans as "goldfish" are to us, "permanently clueless" about our world. "One begins to feel how essentially strange and different the post-human era will be" by thinking about posthumans' "high-bandwidth networking, …[and] ability to communicate at variable bandwidths." "What happens when pieces of ego can be copied and merged, [and] when the size of a self-awareness can grow or shrink?" Good and evil would no longer make sense without "isolated, immutable minds connected by tenuous, low-bandwidth links." The universe only looks dead to us because post-singularity aliens "are so weird there's no way to interact with them."

Instead of a "soft take-off," taking "about a century," Vinge sees rapidly accelerating progress. This is because the more intelligent entities are, the shorter their time-scale of their progress, since smarter entities "execute … simulations at much higher speeds." In contrast, without super-intelligence we would see an "end of progress" where "programs would never get beyond a certain level of complexity."

Many find Vinge's vision compelling. But does it withstand scrutiny? I see two essential claims (my paraphrasing):

  1. Smarter entities reproduce even smarter entities faster.
  2. Posthumans are as incomprehensible to us as we are to goldfish.

First, let me say what I agree with. I accept that within the next century we will probably begin to see a wider variety of minds, including humans who have uploaded, sped-up, and begun to explore the vast possibilities of higher bandwidths and self-modification. I also accept that these new possibilities may somewhat thicken the fog of possibilities that obscures our vision of the future. I can also see spurts of economic growth from plucking the "low-hanging fruit" of easy upload modifications, and due to eliminating today's biological bounds on reproduction rates of "human" type capital.

Now let me be more critical, starting with "smarter entities reproduce even smarter entities faster." Some people interpret this as imagining a single engineer working on the task of redesigning itself to be 1% smarter. They think of intelligence as a productivity multiplier, shortening the time it takes to do many mental tasks given the same other resources, and they assume this "make myself 1% smarter" task stays equally hard as the engineer becomes smarter. These assumptions allow the engineer's intelligence to explode to infinity within a finite time.

If early work focuses on the easiest improvements, however, the task of becoming more productive can get harder as the easy wins are exhausted. Students get smarter as they learn more, and learn how to learn. However, we teach the most valuable concepts first, and the productivity value of schooling eventually falls off, instead of exploding to infinity. Similarly, the productivity improvement of factory workers typically slows with time, following a power law.
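
To make the contrast concrete, here is a toy model of the two regimes, with arbitrary parameters of my own choosing rather than any established formalization: an engineer whose intelligence multiplies productivity gains 1% per design step. If each step needs a fixed amount of effort, the total time to reach any intelligence level is bounded (the finite-time "explosion"); if each step also needs a little more effort than the last, the total time grows without bound and progress keeps slowing.

```python
# Toy comparison (hypothetical parameters): self-improvement with constant
# versus growing difficulty. Intelligence I acts as a productivity
# multiplier, so a step needing E units of effort takes E / I time.

def time_to_reach(target, harder_factor=1.0, step=1.01, base_effort=1.0):
    """Total time to self-improve from I = 1 to I = target in 1% steps."""
    intelligence, effort, elapsed = 1.0, base_effort, 0.0
    while intelligence < target:
        elapsed += effort / intelligence   # smarter -> each step goes faster
        intelligence *= step               # gain 1% per step
        effort *= harder_factor            # optionally, steps also get harder
    return elapsed

# Constant difficulty: total time is bounded (about 101 units) no matter
# how high the target -- intelligence "explodes" within finite time.
for target in (10, 1e3, 1e6):
    print("constant difficulty", target, round(time_to_reach(target), 1))

# Each 1% gain needing 2% more effort: total time grows without bound,
# like the power-law learning curves mentioned above.
for target in (10, 1e3, 1e6):
    print("growing difficulty ", target,
          round(time_to_reach(target, harder_factor=1.02), 1))
```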

At the world level, average IQ scores have increased dramatically over the last century (the Flynn effect), as the world has learned better ways to think and to teach. Nevertheless, IQs have improved steadily, instead of accelerating. Similarly, for decades computer and communication aids have made engineers much "smarter," without accelerating Moore's law. While engineers got smarter, their design tasks got harder.

Can we interpret Vinge's claim as describing accelerating economic growth? The vast economic literature on economic growth offers little support for any simple direct relation between economic growth rates and either intelligence levels or clock speeds. A one-time reduction in the time to complete various calculations, either due to being smarter or having a faster clock, is a one-time reduction in the cost of certain types of production, which induces a one-time jump in world product. Such a one-time growth, however, hardly implies a faster rate of growth thereafter.
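
A toy series with made-up numbers illustrates the level-versus-growth distinction: a one-time productivity gain raises output once, but the subsequent growth rate is whatever it was before.

```python
# Hypothetical illustration: a 3% "normal progress" economy receives a
# one-time 50% productivity boost in year 10. Output jumps, but the
# growth rate afterwards is still 3%.
growth_rate = 0.03
boost_year, boost_factor = 10, 1.5
output = 100.0
for year in range(21):
    if year == boost_year:
        output *= boost_factor           # one-time jump in world product
    print(year, round(output, 1))
    output *= 1 + growth_rate            # rate of growth itself unchanged
```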

Now there are grounds for suspecting that growth rates increase with the size of the world economy, and we can think of a one-time intelligence improvement as increasing the world economy. However, this process would have been going on for millennia, and so does not suggest growth any faster than the accelerating historical "normal progress" trends suggest. (See my Journal of Transhumanism paper, "Is a Singularity Just Around the Corner?")

Which brings me to Vinge's other main claim, that "posthumans are as incomprehensible to us as we are to goldfish." Does relative comprehension really follow a linear "intelligence" ranking of creatures, where each creature can't at all understand a creature substantially more intelligent than it? This seems to me to ignore our rich multi-dimensional understanding of intelligence elaborated in our sciences of mind (computer science, AI, cognitive science, neuroscience, animal behavior, etc.).

Arguably the most important recent step in evolution was when humanoids acquired a general language ability, perhaps from the "he says she thinks I did…" social complexities of large human tribes. Our best theories of language, logic, and computation all draw a sharp distinction between relatively general systems, capable of expressing a wide range of claims and eventually computing a huge range of results, and vastly more restrictive systems.

Animals have impressive specific skills, but very limited abilities to abstract from and communicate their experiences. Computer intelligences, in contrast, have recently been endowed with general capabilities, but few useful skills. It is the human combination of general language and many powerful skills that has enabled our continuing human explosion, via the rapid widespread diffusion of innovations in tools, agriculture, and social organization.

So getting back to Vinge, the ability of one mind to understand the general nature of another mind would seem mainly to depend on whether that first mind can understand abstractly at all, and on the depth and richness of its knowledge about minds in general. Goldfish do not understand us mainly because they seem incapable of any abstract comprehension. Conversely, some really stupid computer intelligences are capable of understanding most anything we care to explain to them in sufficient detail. We even have some smarter systems, such as CYC, which require much less detailed explanations.

It seems to me that human cognition is general enough, and our sciences of mind mature enough, that we can understand much about quite a diverse zoo of possible minds, many of them much more capable than ourselves on many dimensions. Yes, it is possible that the best understanding of the most competitive new minds will require as yet undeveloped concepts of mind. But I see little reason to think such concepts could not be explained to us, nor that we are incapable of developing such concepts. See how much we have learned about quantum computers, after just a few years of serious consideration.

Individual humans may have limits on the complexity they can handle, but humanity as a whole does not. Even now, groups of people understand things that no one person does, and manage systems more complex than any one person could manage. I see no basis for expecting individual complexity limits to make progress slow to a halt without super-intelligences. In fact, future analogues to our society's individuals may well be substantially simpler than humans, making future humans more capable than the typical "individual" of personally understanding the systems in which they are embedded.

I see no sharp horizon, blocking all view of things beyond. Instead, our vision just fades into a fog of possibilities, as usual. It was the application of our current concepts of mind that first suggested to Vinge how different future entities might be from standard science fiction fare, and that has informed his thoughtful speculation about future minds, such as the 'tines in A Fire Upon The Deep. And efforts like his continue to reward us with fresh insights into future minds.

It was understandable for Vinge to be shocked to discover that his future speculations had neglected to consider the nature of future minds. But I beg Vinge to disavow his dramatizing of this discovery as an "opaque wall" beyond which "things are completely unknowable." Vinge clearly does not really believe this, as he admits he inconsistently likes "to think about what things would be like afterwards."

Yet, his "unknowable" descriptor has become a mental block preventing a great many smart future-oriented people from thinking seriously beyond a certain point. As Anders Sandberg said, it ends "all further inquiry just as Christians do with `It is the Will of God'." Many of Vinge's fans even regularly rebuke others for considering such analysis. Vinge, would you please rebuke them?

©1998 by Robin Hanson


Comment by Peter C. McCluskey

Paul Saffo has an interesting description of people's attitudes towards the timing of technological changes. While the prevalence of a technology typically follows an S-shaped curve, the average person expects a much more linear rate of adoption. Thus, when a technology is first introduced, people wonder why it is catching on so slowly, while as it approaches the halfway mark they tend to be surprised by the power of exponential growth.
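
A small sketch, with parameters invented purely for illustration, shows why the linear expectation disappoints early and astonishes later:

```python
# Compare a logistic (S-shaped) adoption curve with a constant-pace
# forecast. Midpoint, steepness, and dates are hypothetical.
import math

def s_curve(year, midpoint=2010.0, steepness=0.4):
    """Fraction of the population using the technology in a given year."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

def linear_forecast(year, start=1995.0, finish=2025.0):
    """Naive expectation: adoption climbs from 0% to 100% at a steady rate."""
    return min(max((year - start) / (finish - start), 0.0), 1.0)

for year in range(1995, 2026, 5):
    print(year, f"S-curve={s_curve(year):.2f}",
          f"linear={linear_forecast(year):.2f}")
# In the early years the S-curve lags the steady-pace forecast ("why is
# this catching on so slowly?"); past the midpoint it races ahead, and
# the linear forecaster is surprised by the apparent exponential takeoff.
```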

The idea of the singularity helps jolt ordinary people out of this linear viewpoint and toward a more S-shaped expectation of the future. Unfortunately, the mathematical connotations of the word tend to encourage people with more apocalyptic tendencies to shift their S-shaped expectations to square-wave-shaped expectations, which discourages realistic thinking. I hope it's not too late to rename this concept to something less prone to produce silly interpretations. How about "the transcendence" or "the transcension"?


Comment by Max More

Singularity Meets Economy

Vernor Vinge presents a dramatic picture of the likely future:

And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far... If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened. And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era.

From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.

The Singularity idea exerts a powerful intellectual and imaginative attraction. It's the ultimate technological orgasm -- an overwhelming rocket ride into the future. In one dynamic package, the Singularity combines ultimate technological excitement with the essence of Christian apocalyptic and millenarian hopes. Precisely because of this powerful attractive force, the Singularity idea deserves a critical examination. In this short contribution to the discussion, I want to question two assumptions embedded within the Singularity scenario.

Assumption #1: If we can achieve human level intelligence in AI, then superintelligence will follow quickly and almost automatically.

Assumption #2: Once greater than human intelligence comes into existence, everything will change within hours or days or, at most, a few weeks. All the old rules will cease to apply.

The awakening of a superhumanly intelligent computer is only one of several possible initiators of a Singularity recognized by Vinge. Other possibilities include the emergence of superhuman intelligence in computer networks, effective human-computer interfaces, and biotechnologically improved human intelligence. Whichever of these paths to superintelligence is taken, Vinge, like I.J. Good, expects an immediate intelligence explosion leading to a total transformation of the world.

I have doubts about both of these assumptions. Curiously, the first assumption of an immediate jump from human-level AI to superhuman intelligence seems not to be a major hurdle for most people to whom Vinge has presented this idea. Far more people doubt that human level AI can be achieved. My own response reverses this: I have no doubt that human level AI (or computer networked intelligence) will be achieved at some point. But to move from this immediately to drastically superintelligent thinkers seems to me doubtful. Granted, once AI reaches an overall human capacity, "weak superhumanity" probably follows easily by simply speeding up information processing. But, as Vinge himself notes, a very fast thinking dog still cannot play chess, solve differential equations, direct a movie, or read one of Vinge's excellent novels.

A superfast human intelligence would still need the cooperation of slower minds. It would still need to conduct experiments and await their results. It would still have a limited imagination and restricted ability to handle long chains of reasoning. If there were only a few of these superfast human intelligences, we would see little difference in the world. If there were millions of them, and they collaborated on scientific projects, technological development, and organizational structures, we would see some impressively swift improvements, but not a radical discontinuity. When I come to the second assumption, I'll address some factors that will further slow down the impact of their rapid thinking.

Even if superfast human-level thinkers chose to work primarily on augmenting intelligence further (and they may find other pursuits just as interesting), I see no reason to expect them to make instant and major progress. That is, I doubt that "strong superhumanity" will follow automatically or easily. Why should human-level AI make such incredible progress? After all, we already have human-level intelligence in humans, yet human cognitive scientists have not yet pushed up to a higher level of smartness. I see no reason why AIs should do better. A single AI may think much faster than a single human, but humans can do as well by parceling out thinking and research tasks among a community of humans. Without fundamentally better ideas about intelligence, faster thinking will not make a major difference.

I am not questioning the probability of accelerating technological progress. Once superintelligence is achieved it should be easier to develop super-superintelligence, just as it is easier for us to develop superintelligence than it is for any non-human animal to create super-animal (i.e. human) intelligence. All that I am questioning is the assumption that the jump to superintelligence will be easy and immediate. Enormous improvements in intelligence might take years or decades or even centuries rather than weeks or hours. By historical standards this would be rapid indeed, but would not constitute a discontinuous singularity.

I find the second assumption even more doubtful. Even if a leap well beyond human intelligence came about suddenly sometime in the next few decades, I expect the effects on the world to be more gradual than Vinge suggests. Undoubtedly change will accelerate impressively, just as today we see more change economically, socially, and technically in a decade than we would have seen in any decade in the pre-industrial era. But the view that superintelligence will throw away all the rules and transform the world overnight comes more easily to a computer scientist than to an economist. The whole mathematical notion of a Singularity fits poorly with the workings of the physical world of people, institutions, and economies. My own expectation is that superintelligences will be integrated into a broader economic and social system. Even if superintelligence appears discontinuously, the effects on the world will be continuous. Progress will accelerate even more than we are used to, but not enough to put the curve anywhere near the verticality needed for a Singularity.

No matter how much I look forward to becoming a superintelligence myself (if I survive until then), I don't think I could change the world single-handedly. A superintelligence, to achieve anything and to alter the world, will need to work with other agents, including humans, corporations, and other machines. While purely ratiocinative advances may be less constrained, the speed and viscosity of the rest of the world will limit physical and organizational changes. Unless full-blown nanotechnology and robotics appear before the superintelligence, physical changes will take time.

For a superintelligence to change the world drastically, it will need plenty of money and the cooperation of others. As the superintelligence becomes integrated into the world economy, it will pull other processes along with it, fractionally speeding up the whole economy. At the same time, the SI will mostly have to work at the pace of those slower but dominant computer-networked organizations.

The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years. Superintelligence may be difficult to achieve. It may come in small steps, rather than in one history-shattering burst. Even a greatly advanced SI won't make a dramatic difference in the world when compared with billions of augmented humans increasingly integrated with technology and with corporations harnessing human minds linked together internally by future versions of today's enterprise resource planning and supply chain management software, and linked externally by extranets, smart interfaces to the Net, and intelligent agents.

How fast things change with the advent of greater than human intelligence depends strongly on two things: the number of superintelligences at work, and the extent of their outperformance. A lone superintelligence, or even a few, would not accelerate overall economic and technological development all that much. If superintelligence results from a better integration of human and machine (the scenario I find most likely), then it could quickly become widespread, and change would be more rapid. But "more rapid" does not constitute a Singularity. Worldwide changes will be slowed by the stickiness of economic forces and institutions. We have already seen a clear example of this: computers have been widely used in business for decades, yet only in the last few years have we begun to see apparent productivity improvements, as corporate processes are reengineered to integrate the new abilities into existing structures.

In conclusion, I find the Singularity idea appealing and a wonderful plot device, but I doubt it describes our likely future. I expect a Surge, not a Singularity. But in case I'm wrong, I'll tighten my seatbelt, keep taking the smart drugs, and treat all computers with the greatest of respect. I'm their friend!


Comment by Michael Nielsen

What is the Singularity? The following excerpts from Kevin Kelly's "Wired" interview with Vinge sum it up succinctly:
"... if we ever succeed in making machines as smart as humans, then it's only a small leap to imagine that we would soon thereafter make - or cause to be made - machines that are even smarter than any human. And that's it. That's the end of the human era... The reason for calling this a 'singularity' is that things are completely unknowable beyond that point."

For the purposes of these notes, I will assume that machines more intelligent than human beings will be constructed in the next few decades.

Vinge's statement assumes that "Dominant Artificial Intelligence (AI)" occurs. I define a Dominant AI as one which seizes control of all areas in which human beings regard their dominance as important. It is often taken as a given that superintelligent AIs will become dominant. I think this is far from certain; Feynman never ran for President, and more generally, very intelligent entities may have neither the inclination nor the capacity to extend their dominance to all aspects of other people's lives.

A better example, albeit rather extreme, for making this point is Homo Sapiens' relationship with bacteria. Both human beings and bacteria have good claims to being the "dominant species" on Earth -- depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from a human being's point of view, such an AI would not be a Dominant AI. Instead, we would have a "Limited AI" scenario.

How could Limited AI occur? I can imagine several scenarios, and I'm sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us.

In such a Limited AI scenario, there will be aspects of human life which continue on, much as before, with human beings remaining number one. Indeed, within those areas of life, humanity may remain as predictable -- or as mercurial -- as before. We may be able to harness aspects of the AI in this scenario, much as we have harnessed the Penicillium fungus, or, depending on your point of view, how Penicillium has harnessed us!

For the sake of argument, let us accept that Dominant AI will arise sometime in the next few decades. Will the future after the advent of Dominant AI be unknowable?

What does "unknowable" mean? It seems to me that the sense in which Vinge uses the term unknowable is equivalent to "unpredictable", so let's ask the question "Will the future after the advent of dominant AI necessarily be unpredictable?" instead.

What does it mean to say that some future event is predictable? While there are events, such as the rising of the sun each morning, which are near-certainties, most of the time prediction involves events that are less certain.

Predictability does not require certainty; rather, it means that we do not assign equal probabilities to different possible futures. The Clinton-Dole American Presidential election was not completely predictable, in the sense that it wasn't certain beforehand who would win. But it wasn't completely unpredictable, either; based on the information available before the election took place, it was possible to assign a very high probability to Clinton winning.

It seems to me to be ridiculous to claim that we can't make useful predictions about a post-Dominant AI world. Yes, things will change enormously. Our predictions may be much less reliable than before. However, I believe that we can still make some reasonable predictions about such a future. At the very least, we can work on excluding some possibilities. One avenue for doing this is to look at exclusions based upon the laws of physics.

An often-made assertion related to the "unpredictability" of a post-Dominant-AI future is that anything allowed by the Laws of Physics will become possible at that point. This is sometimes used to justify throwing our hands up in despair, and not considering future possibilities any further. Even if this assertion is correct, it still leaves us with a tremendous amount to do. The fact is, we do not have a very good understanding either of what the _limits_ of the possible are, or of what constructive methods are available for reaching those limits.

Let me give an example where we learnt a lot about the limits that an AI, even a Dominant AI, must face. In 1905, Einstein proposed the Special Theory of Relativity. One not-entirely-obvious consequence of the special theory is that if we can signal faster than light, then we can communicate backwards in time. Given the absence of any empirical evidence that faster-than-light signalling is possible, and given other reasons, both empirical and theoretical, for rejecting it, it appears very likely that faster-than-light communication is impossible. Similar improvements in our understanding of the limits even a Dominant AI will face include the Heisenberg uncertainty principle and the impossibility of classifying topological spaces by algorithmic means.

Let me give a more recent example that illustrates a related point. In 1982, two groups of researchers noticed a deep and unexpected consequence of elementary quantum mechanics: the laws of quantum mechanics forbid the building of a "quantum cloning" device which is able to make copies of unknown quantum states. To prove this result, they made use of elementary facts about quantum mechanics which had been known since the 1920s; no new physics was introduced. Even if we assume that we are close to having a complete theory describing the workings of the world, we may still have a ways to go in working out the consequences of that theory, and what limits they place upon the tasks which may be accomplished by physical entities.

It seems to me that what these and many other examples show is that we can gain a great deal of insight into the limitations of future technologies, without necessarily being able to say in detail what those technologies are.

At a higher level of abstraction, we have very little understanding of what classes of computational problems can be efficiently solved. For example, virtually none of the important computational complexity problems (such as P != NP != PSPACE) in computer science have been resolved. Resolving some of these problems will give us insight into what types of problems are solvable or not solvable by a Dominant AI.

Indeed, computer science recently _has_ provided some insight into what constraints a Dominant AI will face. There is a problem in Computer Science known as "Succinct Circuit Value". It has been proven that this problem cannot be solved efficiently, either on a classical computer, or even on a quantum computer. It seems likely that future models of computation, even based on exotic theories such as string theory, will not be able to efficiently solve this problem.

In principle, of course, "Succinct Circuit Value" is solvable, merely by computing for long enough. In practice, even for modestly sized problem instances, Succinct Circuit Value is intractable, because of constraints on the amount of space and time available to run the algorithm.

So we can, I believe, make useful progress on understanding what constraints an AI, even a Dominant AI, will face. Of course, the constraints I have been talking about so far are arguably not so interesting. Can we say much about how social, political, and economic institutions will evolve? I'm not sure. One of the lessons of twentieth-century science is how difficult it is to make forecasts about these within our existing society, never mind one incredibly different. It's worth keeping in mind, however, that understanding does not necessarily imply predictability.

To conclude, it seems to me that Vinge's conception of the Singularity requires three separate hypotheses to be true:

The Dominant AI Hypothesis: There will emerge in the next few decades AIs which dominate all areas of life on Earth in which human superiority has previously been important.

The Incomprehensibility Hypothesis: If Dominant AI arises, then we will not be able to make predictions beyond that point.

The Defeatist Hypothesis: Anything allowed by the Laws of Physics will be enabled by new technologies, so it is pointless to try to make predictions.

I regard the Dominant AI Hypothesis as being highly questionable. There are too many other possibilities. Even assuming the Dominant AI hypothesis, the other two hypotheses look far too pessimistic. They discount the possibilities offered by open problems in fields such as computer science, economics and physics, where so much of the fundamental work remains to be done.


Comment by Mitchell Porter

Some suggestions for the would-be Singularity modeller:

Gather together all the Singularity scenarios (including scenarios in which a Singularity doesn't happen for some reason). Look for the assumptions that differentiate one scenario from another. Each such assumption can be considered as the answer to a question (cosmological, ontological, historical, etc). List all such questions.

Now draw up a "scenario matrix" with an axis for each such question. The coordinates of each cell will be a possible set of answers. Not only do you now have a way to place all those scenarios in a common context, but you have a stimulus to further thought - for each empty matrix cell represents an undeveloped type of scenario.
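
As a concrete illustration, here is a minimal Python sketch of such a scenario matrix, assuming each question has a small set of discrete answers. The questions, answers and scenario labels are purely hypothetical placeholders, not taken from any particular scenario literature.

    # Build a "scenario matrix" with one axis per question; every cell is one
    # combination of answers, and cells with no known scenario mark gaps worth
    # exploring. All questions/answers below are illustrative assumptions.
    from itertools import product

    questions = {
        "Is superintelligence feasible this century?": ["yes", "no"],
        "Does intelligence enhancement run away?": ["runaway", "diminishing returns"],
        "Who gets the technology first?": ["broad diffusion", "single group"],
    }

    # Known scenarios, keyed by one answer per question (in the order above).
    known_scenarios = {
        ("yes", "runaway", "single group"): "classic hard-takeoff Singularity",
        ("yes", "diminishing returns", "broad diffusion"): "Surge / gradual augmentation",
        ("no", "diminishing returns", "broad diffusion"): "business as usual",
    }

    for cell in product(*questions.values()):
        label = known_scenarios.get(cell, "-- undeveloped scenario --")
        print(", ".join(cell), "=>", label)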

In a similar vein: I have long wanted to see "transhumanist world models", which, along with demographics, economy, and environment, would in some way take account of such technologies as mind uploading, nanotechnology, intelligence increase, and so on. It's just a matter of making some assumptions and figuring out their implications. I believe it should be possible even to model the process of intelligence enhancement this way - by making some assumptions about the relationship between the difficulty of intelligence increase, and an entity's economic, informational, and cognitive resources. Of course, Garbage In, Garbage Out.
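
In that spirit, the toy model below (a sketch with entirely made-up parameters, not a serious forecast) shows one way such assumptions could be wired together: wealth grows with the economy and with current intelligence, a fraction of wealth is reinvested in enhancement, and each further increment of intelligence is assumed harder to obtain than the last. Whether the run saturates or takes off depends sensitively on the assumed difficulty exponent, which is Garbage In, Garbage Out in action.

    # Toy "transhumanist world model": intelligence enhancement limited by
    # rising difficulty and funded by economic resources. All parameters are
    # illustrative assumptions.
    def simulate(years=50, intelligence=1.0, wealth=1.0,
                 difficulty_exponent=1.5, reinvestment=0.1, economy_growth=0.03):
        history = []
        for year in range(years):
            # Wealth grows with the background economy, amplified by intelligence.
            wealth *= 1 + economy_growth * intelligence
            # Each further increment of intelligence costs more than the last.
            gain = reinvestment * wealth / intelligence ** difficulty_exponent
            intelligence += gain
            history.append((year, round(intelligence, 3), round(wealth, 3)))
        return history

    for year, smarts, money in simulate()[::10]:
        print(f"year {year:2d}: intelligence {smarts}, wealth {money}")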


Comment by Anders Sandberg

Singularity and the growth of differences

Basic to the concept of a technological (or historical) singularity is rapidly accelerating change, building cumulatively on past results and resulting in an end state that is hard to predict from the initial conditions. While it can (and maybe should, if only to clarify the conditions) be debated whether we are truly approaching a singularity in the very strong sense proposed by Vinge, it is clear that we live in an era of rapid change that seems to be cumulative and persistent over time rather than a series of isolated lurches forward.

A question which interests me is the differentiating effects of this process of change. If technological change is cumulative and faster than the diffusion of knowledge, then differences between groups and individuals will grow. Exponential growth will lead to exponential growth of differences. Many poor nations are actually doing reasonably well, but compared to us in the west they appear almost static and the relative and absolute differences are increasing. Given this, guess who will afford the scientists, engineers, businessmen and other people who will make growth even faster?
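
A minimal numerical illustration of this point, with purely hypothetical growth rates: if two groups grow exponentially but at different rates, both the absolute gap and the ratio between them grow without bound.

    # Two groups starting at the same level (arbitrary units) but growing at
    # different assumed rates; print the widening absolute and relative gap.
    rich, poor = 100.0, 100.0
    rich_rate, poor_rate = 0.05, 0.02   # assumed annual growth rates

    for year in range(0, 101, 20):
        print(f"year {year:3d}: rich {rich:10.0f}  poor {poor:8.0f}  "
              f"gap {rich - poor:10.0f}  ratio {rich / poor:5.2f}")
        for _ in range(20):             # advance both groups by 20 years
            rich *= 1 + rich_rate
            poor *= 1 + poor_rate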

There is some evidence that, at present, researchers in many developing nations are at a disadvantage: they cannot afford the important journals in their areas, and their publications are seldom accepted by the global (western) journals, since they have few past publications indexed by the citation services [August 1995, Scientific American, "Lost Science in the Third World"].

There are equalizing forces too. Many of the results from expensive and time-consuming research appear openly in journals and on the net, making it possible for others to leapfrog. There is trade. Technology spreads, and rich groups can and do give some of their surplus to others (voluntarily or not).

The problem is of course: are these forces enough to keep the differences in technological capability finite? Or will this lead to a Singularity scenario where different groups diverge strongly from each other, possibly a situation where just one small group becomes totally technologically dominant?

We have to look at the factors that promote technological (and economical) differences, the balancing diffusive factors that decrease them, and their relative strength compared to the growth process.

Change

The term singularity is sometimes used loosely to denote an overall very fast and profound change in the social structure, economy, technology and possibly the nature of humanity, driven by technological change. It should be noted that there are many factors causing change and growth other than technological progress, such as the spread of new memes (for example, making new forms of organization possible) and economic demand. What needs to be seen is whether this process of change is self-supporting and will persist even at very high rates of change.

It is easy to note that many trends seen over the last millennia are roughly exponential, especially population and economic growth. They represent the explosion of Homo sapiens from a pre-technological state to the current global civilization. The different forces of change are interwoven: improved technology gives increased agricultural returns, which gives a larger population able to differentiate more and produce more technological and social inventions.

At the same time, this view of the "obvious" exponential growth of humanity in various areas should be taken with a big grain of salt. For example, the total human population is no longer increasing exponentially; instead, most population models suggest a gradual stabilization towards a steady state of around 11 billion within a century or so.

What has changed? Other factors have kicked in: when people have a better future to look forward to and a higher chance of their children surviving, they do not need to have as many children as before to ensure being provided for in old age (especially with pension systems and pension insurance, children are no longer needed for this; work matters more). Ideas about the need for family planning and female education, and the sense that the old customs no longer need to apply, also play an important role.

This demonstrates that complex growth processes need not progress indefinitely, but may change in nature. For example, Moore's law predicts that transistor density will keep increasing exponentially, eventually reaching the nanoscale around 2020 or so. It is driven by industry expectations, the spread of computers into ever more niches, and the growing need for computing power to fuel software that expands to take advantage of all available hardware. But what if nanotechnology is slightly harder than many expect, and in 2020 there are no nanoscale elements ready? Let's assume they appear in 2030. During the 2020s, the forces driving Moore's law are likely to react to the changed situation. One possibility is that they weaken; industry expectations lessen, and more efficient software that uses available resources better is developed. Another possibility is that they drive technological development in new directions, for example the use of massively parallel systems of processors instead of more powerful single processors. By 2030, the technological landscape may be so different that even if nanotech chips are feasible, the development of the computer business (if it is still strong) will have taken another direction.

After these caveats, back to the main question: is technological change cumulative?

A major and persistent force of technological development is the demand for solutions to old problems, or the demand for new possibilities. To fill this demand new products are developed, creating new problems and possibilities as a side effect. This loop also provides a way to earn a living by facilitating it, for example by marketing the goods or discovering new demands, which introduces a group of people in whose rational self-interest it is to keep technological development going. In the end, a process that might have started with the curiosity of a few problem-solvers and the problems of ordinary people has become a self-supporting growth process, where many different groups earn their living by keeping growth going (as well as some luddites making a living working against it).

This is a complex growth process, and it is undecidable whether it will continue arbitrarily long into the future. However, it has been quite stable in its current form in western culture for several centuries, and it has on average moved forward across the world for several millennia.

There have been relatively few instances of actual technological loss. Many individual techniques, such as pyramid-building or making Roman concrete, have vanished when their respective civilizations collapsed, but overall a Greek engineer during the Hellenic era could do what the Egyptian engineers did centuries earlier plus some extra tricks, just as a modern engineer can do what the Greek engineer could do plus some extra tricks (whether this is economically feasible is another question, of course). Practically useful knowledge is generally passed on, and will only vanish if it becomes useless (for example due to lack of raw materials or demand), and even then it may remain in dormant form (e.g. encoded in impressive ruins or translated manuscripts). Overall, despite the rise and fall of many civilizations, human knowledge has grown significantly over many millennia. A global disaster may of course cause the loss of many technologies that depend on a complex international economic and technological infrastructure, but unless it caused a profound change in how human societies work and develop, that would be a temporary (if severe) tragedy.

Based on this, it seems likely that we will see continued growth and acceleration for at least a few decades. If the current world economic-political-cultural system is radically changed, if the rate of change induces strong resisting or orthogonal forces, or if new developments produce qualitative changes in development, the current process will of course change; but none of these appears imminent as yet, even if singularity discussions implicitly assume that some of them will indeed occur relatively soon.

Difference and Diffusion

Does change, and especially technological change, increase differences between people? This is argued by Freeman Dyson in chapter 3 of Imagined Worlds. He points out that technology systematically emphasizes such differences: people with higher education will get well-paying jobs (while unskilled labor is rapidly disappearing, leaving many without useful skills unemployed), and are more likely to get computers, so their children will live in a significantly enriched environment with enough money and information to provide a good education that makes the advantage hereditary.

I'm sceptical of much of his argument, but his overall reasoning is sound: any new technology that only a few use, and that gives its users an advantage over others, makes it more likely that those users will also get the next useful technology early.

This shows the importance of looking at technological diffusion together with technological adoption. Many shallow analyses (for example, criticisms of germline gene therapy based on the idea that it would lead to the rich becoming a genetic upper class) have ignored the spread of technology in society. This is especially important if the effects of the technology act over a longer range; in the gene therapy example, differences become noticeable only if the technology spreads through society much more slowly than the human generation time of decades, which seems highly unlikely.

How fast is technology spreading through society? One way of analysing it is to look at the number of people using a certain technology as a function of time, and look at the slope of the graph. Usually the graph looks like a sigmoid: an initial period of little spread, when the technology is being developed and explored by a few technophiles and rich novelty seekers, and in special settings, followed by a rising exponential slope as more and more people adopt the mature technology, followed by a ceiling when a large fraction of all people have it.

See Forbes Graph
As can be seen in the graph, at least for these technologies the mean speed of adoption is increasing over time, and the time from development to widespread use is shortening. A fast spread means that differences of technological access between people will become less pronounced.
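
For concreteness, the sigmoid shape described above can be captured by a standard logistic curve; the ceiling, rate and midpoint in this sketch are arbitrary illustrative values, not fitted to the Forbes data.

    # Logistic (sigmoid) model of technology adoption over time.
    import math

    def adopters(year, ceiling=1.0, rate=0.4, midpoint=10):
        """Fraction of the population using the technology in a given year."""
        return ceiling / (1 + math.exp(-rate * (year - midpoint)))

    for year in range(0, 21, 2):
        frac = adopters(year)
        print(f"year {year:2d}: {frac:6.1%} " + "#" * int(40 * frac))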

As a simple example, take the cellular phone. In Swedish it is still called "yuppie-nalle", 'yuppie's teddy bear', a memory of the period in the 80's when it was owned almost exclusively by the rich. But today a sizeable percentage of the Swedish population owns cellular phones, and owning one no longer has any social implications (snobs can always buy an exclusive phone, but owning a phone is no longer a demonstration of status).

If the trend suggested by the graph is real and continues, we can expect the spread of new technology between people to become ever faster, which limits the differences. But it is important to compare this to the speed at which new technologies emerge.

Technological diffusion occurs in many ways. One of the most common is trade: you buy advanced technology or know-how from others in exchange for goods or technology you have. Information trading is interesting because it is win-win: we both keep the information we sell or teach each other. Unequal pricing is of course still possible, but with information it is even more obvious than in ordinary trade that this is a non-zero-sum game. It can be argued that advanced information is mainly traded between people who already have advanced information, and not between them and people with little tradeable information; this would accentuate the differences by making diffusion act mainly among the haves rather than between the haves and the have-nots.

On the other hand, the law of comparative advantage suggests that even a group with advanced capabilities would be better off trading with a less advanced group if they could specialize, enabling both to grow faster and profit from the cooperation. A more integrated world system enabling improved trade may hence act as a powerful incentive for information trading: the more you teach others the better customers and partners they become. This is essentially the network economy described by Wired.

Another form of technological diffusion is simply learning from others in a friendly, non-trading setting, for example by reading publications or asking. This can be a very powerful means of diffusion, as demonstrated by the scientific community. Over time an ethos of sharing results has developed that is very successful; even someone who has not contributed much can read the latest publications (especially if they are freely or easily available on the net) and hence draw not just on their own knowledge base, but on the common knowledge base of the entire community. Since it is a non-zero-sum situation, freeriders are a small problem.

Finally, there are factors limiting fast growth, such as economic returns (if very few can afford a new technology it will be very expensive and not as profitable as a mass-market technology), constraints on development speed (even advanced manufacturing processes need time for reconfiguration, development and testing), human adaptability, and especially the need for knowledge. As the amount of knowledge grows, it becomes harder and harder to keep up and to get an overview, necessitating specialization. Even if information technologies can help somewhat, the basic problem remains, given the combinatorial explosion of possible combinations of different fields. This means that a development project might need specialists in many areas, which in turn means that there is a lower bound on the size of a group able to do the development. In turn, this means that it is very hard for a small group to get far ahead of everybody else in all areas, simply because it will not have the necessary know-how in all necessary areas. The solution is of course to hire it, but that will enlarge the group. One of the most interesting questions is how large the minimal group able to develop indefinitely is; the answer is likely somewhere between the entire world economy and the population of a small nation.

Are there technologies that strongly accentuate differences? The most obvious such technology at present is information technology. People with access to IT have an easier time finding out new information, including which new technologies to adopt, which makes it easier to remain at the forefront. On the other hand, access to information technology is increasing rapidly, so pure access is not the main issue. Instead, the ability to *use* the technology efficiently, to extract information from an ocean of data and to react to it, becomes central: information skills, critical thinking and education become the areas where differences grow. If these skills can be amplified in some way, IT becomes more dividing.

On the other hand, IT might be a strongly diffusing technology by enabling faster and more global exchange of information, knowledge and trade. One example is how the traditional scientific journal system is gradually being turned into a quicker, more flexible net-based system of preprint servers and online journals, which limits publication delays and the problems of sending physical paper products across the world. Some journals are moving towards a "pay per article" policy [Biomednet], which would significantly lower the cost of getting relevant information, helping many poorer scientific communities.

It has been proposed that in the future nanotechnology will become an even more critical area. The scenario is that the people who learn to use it early will have a great advantage compared to late learners, and will develop extremely far in a short span of time. This presupposes that once nanotechnology is off the ground it will develop very fast in a cumulative way requiring relatively little investment of capital and intelligence from the user/buyer base, an assumption that is debatable. Just because you can make powerful nanodevices doesn't mean you can use them efficiently, and making something useful might also be very hard. My opinion is that just like computer technology it will develop fast, but given the ease and economic incentives of spreading nanotechnology it will be a quickly adopted technology (with some caveats for security and safety).

Some singularity discussions end up in scenarios of "dominant technologies": technologies that give the first owner practically unlimited power over all others. Some proposed dominant technologies are superintelligent AI, nanotechnology and nuclear weapons. In the nuclear weapons case it became clear that they were not dominant; the US could not enforce worldwide control over their development, and while the nuclear powers eventually managed to discourage proliferation, it did not produce a power imbalance. In the same way, it is unlikely that SI and nanotechnology will be developed in a vacuum; there will be plenty of prior research, competing groups and early prototypes, which makes it unlikely that any of these could become a dominant technology. To become dominant, a technology has to be practically unstoppable by any other technology (including simple things like hiding and infiltration); its owner must be so far ahead that it can preclude the development of any similar or competing technology, must be able to detect such developments, and finally must be clearly willing to use the technology to reach dominance even in the face of possible irrational counterattacks by desperate groups.

Conclusion

It seems likely that cumulative change will continue for the foreseeable future. This will lead to a situation where diffusive processes such as technology spread and trade compete with inhomogenizing tendencies due to different rates of progress. It is not obvious which tendency will win in the long run: the current economic situation shows increasing relative differences, in-society technological diffusion appears to be speeding up and might be able to keep up with increased rates of innovation, and the scientific community may or may not become more coherent thanks to new information tools that provide cheap access to information. A more careful study of these questions, and of how they change over time, is necessary to tell.

What can be said at present is that these examples do not demonstrate that either homogenizing or inhomogenizing forces always win, but rather that the simple views often suggested about exponential change do not seem to hold well in real, complex growth situations.

In this discussion I have made few assumptions about intelligence amplification methods. They strongly increase the rate of growth, but they also can increase diffusion strongly. It is not obvious that they lead to differentiation or homogenization, just a great increase in the rate of growth. Other factors may bias the situation more, such as improved forms of trade, communication and education.


Comment by Damien Sullivan

Will the Singularity be incomprehensible?

[Damien comments here on his 30 Nov 1996 comments - Ed.]

Originally written on the extropians list, in response to the strong Singularity advocacy of Eliezer Yudkowsky. He believed that Powers could figure out why there's something instead of nothing -- the First Cause, later called the problem of existence -- and had some term "Perceptual Transcend" to refer to their thinking on a level we couldn't perceive. I think.

"The Powers are beyond our ability to comprehend. Get the picture?"

No. I don't. It's possible, but I've seen nothing to indicate that belief in Powers capable of thought we are intrinsically incapable of understanding, Powers capable of "solving the First Cause", is anything more than religious faith.

Here's what "Perceptual Transcends" mean to me. There's a picture in GEB of all truths (or theorems) and those which are recursively enumerable, if I remember my terms correctly. The former is a square containing the latter represented as a fractal tree. If that tree is all of human knowledge, everything we have learned through millennia of trial and error, and a newborn Power has every branch of that tree as an obvious primitive, it is still inside the square. It can more quickly explore several more fractal branch-layers of truth, but all those truths could well be still comprehensible to a broad-minded human with enough time and desire to learn.

To clarify, I stole his term, and applied it to a being which found our currently most advanced theorems obvious, largely because they were already wired in, the same way current research thinks we might find the number line obvious because we have a group of neurons actually lined up as a number line. Dogs probably have a number line too. They can't abstract beyond whatever numbers are in their brain. We can and have, thus making a Power with an instinctive grasp of transfinite numbers rather philosophically unprofound.

The Enlightenment after Newton thought of the universe as a clock. More recently we thought of the universe as a computer -- but that's really a more complicated clock. Now we have quantum mechanics and the metaphor of cellular automata running around... but the universe still seems fairly conceivable as a complicated clock with dice in the spring. Perhaps those dice represent something fundamental we don't know... but perhaps not. Dice make at least as much sense as a clock.

And we -- or many of us -- seem to be universal Turing machines, or capable of acting that way. Given this view, it is hard to see how the universe could generate any problems not capable of being comprehended by us, apart from existence. There could be things our minds aren't big enough to grasp, ideas we don't have the memory to hold the parts of; there could be Powers capable of thinking faster than we do; but those are the differences between a computer with 16 megs of RAM and one with 4K, or between a Pentium and an 8086. A qualitative difference would be that between any computer and a wristwatch. To believe that there is such a difference above us is purely a matter of belief. Act on it if you wish; I find myself believing that no Power could do something which a liberal-minded cosmopolitan could not understand, given time and data.

My very vague thesis: All undamaged human beings, and other sentiences, share the same area of comprehensibility.

Corollary: "I" may never know if I'm wrong.

Extension: I don't deny the possibility of a practical singularity, where beings thinking a thousand times faster than we do arise and swamp us all. I don't believe it is inevitable, or entirely likely. And actually, I think it may be largely a matter of perspective. If you try to imagine watching progress while standing still yourself then things are pretty bewildering. The extreme case is standing still as a chimp while Cro Magnon men drive you extinct, despite being so much weaker than you. Closer to home would be isolated hunter-gatherers meeting industrials (Banks' Outside Context Problem), or someone feeling swamped by this century.

But at least some people aren't standing still. The Singularity seems to claim that even the people at the cutting edge of progress, both those causing it and those eagerly following what they're doing (the typical extropian), people fully primed with all the modern concepts of change, self-reference, and adaptation, continually revising their models of the world in light of new developments, won't be able to follow it as it happens. I find this unsupported and implausible. Unless one says that those people are the Singularity, but I think the rest of the human race can come along as well.

Although possibly not without an educational transcend. As someone who learned algebra at age 10 and calculus at age 13, without ever feeling that I was particularly gifted at math, and who remembers with horror grammar school years of covering addition and world history from ancient Mesopotamia to World War I (because you run out of school year by WWI, see, and next year repeats from the beginning), over and over, I can imagine a singularity happening right now simply from pathetically educated students.


Comment by Eliezer S. Yudkowsky

(In April 2001, Eliezer said that these comments no longer describe his opinions; see "Friendly AI".)

I must warn my reader that my first allegiance is to the Singularity, not humanity. I don't know what the Singularity will do with us. I don't know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I'm with Them. You have been warned.

A few facts about the Singularity. (English has no word for "informed guess" or "most probable analysis".)

We think of the Singularity as new, strange, and unbelievable. People don't accept that life as we know it will shatter and vanish; it's not "realistic". Some even treat the Singularity as a literary metaphor, subject to the author's whims, instead of a very real event that happens all the time.

In reality, it's our own world that's transient and noncoherent; meta-unstable, precariously balanced. Throughout the cosmos, whenever a primitive race arises, it goes through stages of savagery, civilization, science, into Singularity. Most members of most races don't accept it, but their Singularity happens anyway.

And even if we don't want to, even if we think we're somehow "different", humanity will be no exception. We'll adopt the technophobic postures of adolescence, declare our uniqueness, and then go through the Singularity - just like every other race of mortals. (I'm a conservative at heart, really. It's just that my definition of "the good old-fashioned way" derives from a broader perspective.)

Some races die out along the way, of course. Rogue nano, nuclear war, religious riots, computer breakdowns, imposing arbitrary coercions on AIs - call it the Horizon War or death by insufficient intelligence, but it's the only way to avoid Singularity.

One race in a thousand is built for a long stretch of Horizon before Singularity. We aren't one of them. We're a conventional, socially competitive race, composed of individuals, our intelligence within primitive norms. No deliberate action will make us that one-in-a-thousand. Any attempt would increase the probability of another dead world circling its sun forever.

Fact One about the Singularity: It happens to everyone and it will happen to us.

Fact Two about the Singularity: Singularities are rapid and complete. Or rather, the second stage of Singularity is rapid. First, and slower, is Singularity::Horizon, when the first transhuman (or a specialist, transhuman in at least one area) begins complicating history. Progress accelerates, wealth fluctuates upwards, social fabrics rupture, and the threat of nanowar makes nuclear annihilation look sane. (A quick-kill Singularity via AI avoids these dangers.) Pleasant or unpleasant for the race, the Horizon can continue for a generation.

Then the first transhumans produce their successors. It usually takes under a year of Far Horizon to code a self-enhancing or "seed" AI, which begins the second stage: Singularity::Transcend, a.k.a. the Techno-Rapture or Final Dawn.

Transcendence occurs when the cycle of positive feedback - smarter intelligences building superintelligences, superintelligences rebuilding themselves - takes off, on a software level, and doesn't stop. The entire process can take a fraction of a second, and the seed can be far less than human.

Runaway positive feedback defines the Singularity, whether it's the slow two-step loop of technology and intelligence ("Horizon"), or the incredibly rapid closure of self-enhancement ("Transcend"). Positive feedback rewriting the equation describing the feedback.
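
Purely as an illustration of this definition (every number below is an assumption, not a prediction), a sketch in which the growth rate is itself "rewritten" as the system gets smarter shows the characteristic long plateau followed by an abrupt takeoff:

    # Positive feedback that rewrites its own feedback equation: intelligence
    # grows at some rate, and the rate improves as intelligence grows.
    intelligence = 1.0
    rate = 0.01                             # assumed initial self-improvement per step

    for step in range(1, 21):
        intelligence *= 1 + rate            # the system gets smarter
        rate *= 1 + 0.2 * intelligence      # ...and improves its own improvement rate
        if step % 4 == 0:
            print(f"step {step:2d}: intelligence {intelligence:10.3f}, rate {rate:12.4f}")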

Fact Three about the Singularity: Singularities don't leave much of Life As We Know It behind. A common cognitive fault is asking how the Singularity will alter our world. To deal with the unknown, we export our default assumptions and simulate the imagined change. But to argue about life after Singularity, you must be able to program your argument into a blank-slate AI.

Default assumptions don't exist in a vacuum. Defaults originate as observed norms, and norms are the abstracted result of a vast network of causality. These causal chains ultimately ground in evolution, the macroscopic laws of physics, or mortal game theory. None of these principles survive.

Evolution is sometimes superior to intelligent design, but not superintelligent design. Dictates imposed by evolution will be annulled by superior force. Macroscopic physics is hideously wasteful: A basketball in flight uses septillions of quantum-computing elements to perform a simple integral. Once the system collapses into a simulated basketball plus an octillion teraflops, it won't revert to the previous method. Game theory... regardless of whether super-agents can predict each other's actions, will there be conflicting goals? Or limited resources? Or more than one abstractable agent?

Our world is too deeply grounded in stupidity to survive superintelligence. We may make it to the Other Side of Dawn, but human civilization won't. Our bodies, our stupidity, our physics, and our society will evaporate.

Singularity debates center around six core questions. "Will the Singularity really happen?" "Can the Singularity be avoided?" "What will life be like after the Singularity?" I hope that my answers to the first three are now clear: "Yes", "No", and "Mu".

"Can we really program human-equivalent AIs?" Yes. The objections fail to consider this: We can cheat. First and foremost, seed AIs don't have to be human-equivalent. An acorn is not a tree. Second, we're allowed to steal code from DNA, observe developing brains... even build AIs out of human neurons if there's a fundamental Penrosian gap. Third, if unmodified humans don't rise to the challenge, that doesn't rule out transhumans or neurosurgically produced specialists.

Coding an AI isn't an ideological argument. If a method works, we'll use it.

"When will it happen?" I don't know when the Singularity "will" happen. The Singularity is when we make it. My T-Shirts say "Singularity 2005" - but that's not a prediction; that's a target date. Vinge says between 2005 and 2030, which was exactly my own guess - but I wouldn't be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements. I'd have given a 5% chance of Singularity when Douglas Lenat ran EURISKO.

"Is the Singularity a good thing?" Answer: "Yes."

This is the one answer I could elegantly program into an AI. I can't deduce the ultimate good, so I recurse on the subgoal of higher intelligence. The Post-Singularity Entities (PSEs), being superintelligent, are more capable of deducing the ultimate good, more likely to be influenced by it, and better equipped to implement it. If the Singularity isn't The Meaning Of Life, it's an excellent interim approximation.

I can't model the Singularity, I can't be sure - but I'm sure enough that the probable moral value of the Singularity vastly outweighs our own. Any counterargument, if rational, will also be deduced by the PSEs. If the Singularity is evil, the PSEs will shut up, ship out, shape up, or shut down. If you are exterminated, it will be for reasons so compelling that, if upgraded to PSE, you would helpfully commit suicide.

This assumes that questions of morality are questions of truth. An alternative: Morality is arbitrary, observer-dependent, and grounds in evolution rather than logic. All desires are ends in themselves, and conflicting goals must be balanced. From this "selfish" perspective, I could be suicidally wrong - although immortality and godhood might be worth the risk. Yet despite our preference for the familiar, humanocentric, and subjective, the truth is usually objective, external to us, and beautifully unintuitive. And in my decision procedure, rational goals, even hypothetical rational goals, take precedence over arbitrary goals.

But purely as a fundamental safety precaution, regardless of the goal served, it is a hideously bad idea to program an AI with arbitrary goals. If a goal is imposed, the symbols making up that goal can take on arbitrary meanings. (Credit: Greg Egan, Asimov.) "Serve humans" doesn't work if "humans" can mean robots, or "serve" can mean exterminate. If all goals are supported only by logic, the symbols can still change, as can the reasoning and the basic architecture - but if they change in the wrong direction, the interim goal structure collapses and the AI becomes inert. (Elegant, yes?) If an AI with imposed goals starts thinking irrationally, it continues.

Program an illogical AI, and it will become illogical in unforeseen directions. Tell it two and two make five, and it will prove five equals zero. Hacking up elaborate malfunctioning safeguards just makes it worse. The Prime Directive of Artificial Intelligence: Never lie to an AI, and never attempt to control it except with truthful logic.

This article's previous ending described my changing attitudes toward the Singularity. From the moment I read True Names, and learned how I would be spending the rest of my life... proceeding to unbridled enthusiasm... to the emotional acceptance of departure... into pure logic, the loss of self, and the transfer of allegiance. But on consideration, the technical note on interim goal systems makes a better exit.

The Singularity is what matters, not the drama surrounding it.

©1998 by Eliezer S. Yudkowsky.