
Solving Problems by Getting Away From It All

I've been musing a bit recently about the intersection of crisis-response thinking and transformational-future thinking, and it struck me that this slogan:

The Rapture is not an exit strategy.

...has a useful parallel in:

The Singularity is not a sustainability strategy.

(It's too much of a tongue-twister to make a good bumper sticker, for better or worse.)

The second line may be speaking to a somewhat smaller audience than the first, but I've seen more people advocating for ignoring climate disruption because the Singularity Will Change Everything (tm) than people clamoring for the Rapture to make the Iraq war moot.

Comments

AMEN to that!
- so to speak.

"The Singularity won't save your ass" might be more to the point.

You can't depend upon the Apocalypse.

One reason why I say
Solar IS Civil Defense

I long for the end of days, when all those so inclined can rapturously disappear up their own singularities... and leave the rest of us to get on with it!

The Rapture is not to the point, and the Singularity doesn't grab me, either.

The point (or 'gripping hand', think about it ;-) is that neither is a strategy. They are discontinuities beyond which you can't make any realistic extrapolation.

(and since a strategy depends on extrapolation to get you to where you want to be, they are better described as 'anti-strategies')

Actually - it is one indirect way of having a sustainability strategy. This is because smarter beings would think up better ways to run a sustainable civilization: using byproduct-free manufacturing, space-based solar panels, fusion power, etc. Being smarter, they'd also be able to invent and implement such things much faster than the most competent humans would, and also discover technologies we cannot yet even imagine. That's the power of increased intelligence.

Just as it would be silly to think that Homo erectus has equivalent problem-solving capability and wisdom as Homo sapiens, we should recognize that Homo sapiens is not the smartest theoretically possible species. After recognizing this, we can then attempt brainstorming and implementing strategies to get us from point A to point B. Even if we could only develop an enhancement strategy that would raise average human intelligence by 50 IQ points, the technological, economic, and social returns would be almost unimaginably large. This could include developments in the sustainability and conservation sector.

Investments in improving intelligence produce returns in all other areas: because almost everything of lasting impact that we achieve is a product of our intelligent and creative thinking.

Incidentally, I think that anyone should choose how to distribute their time among activist causes at their own leisure; however, I do argue that efforts towards intelligence enhancement have much greater leverage than patchwork solutions to sustainability. Even if you were to ignore the Singularity, it seems to me that the ultimate way of pursuing lasting sustainability would be the development of the three technologies I mentioned above, particularly molecular manufacturing. But how many pro-sustainability people do you see exclusively advocating molecular manufacturing? Not many.

Taking a psychological angle, both ideas-- salvation by Rapture or by Singularity-- arise from the fear that people feel about the unsustainability of our current path. At some level, the problem seems too overwhelming to address through normal human action. So the mind wraps itself in ideology and lurches for some extrahuman exit strategy.

In my experience, fear is not dispelled by telling the person that their fear is unreasonable, because fear itself is not reasonable. Sometimes a better approach is to acknowledge the fear -- yeah, I'm scared too! Then pick up your tools and keep working. By proclaiming that we, too, are scared, and yet we are also calmly doing the necessary work, we provide the model that fear does not have to paralyze.

Michael, my actual concern about relying on intelligence augmentation (with or without a Singularity) as a step before embarking on sustainable environmental action is that it presupposes that the intelligence boost (bio, AI, etc.) happens quickly, effectively and in a form that is available to the people trying to deal with (e.g.) climate disaster.

If that happens, wonderful -- seriously. I'm sure that the solutions that result from collaboration with a superintelligent colleague would be remarkable.

But if it doesn't happen, I don't want to have bet our survival on that outcome.

In short, I have no argument with people who say that Singularity-type technologies (and research) have the potential to be highly beneficial to efforts to mitigate and reverse environmental damage. At the same time, I have little tolerance for people who say that there's no reason to work on any other pathways, because advanced technologies will inevitably offer much better choices, and all we need to do is sit back and wait for the Singularity to save us all.

As for molecular nano, I would gently disagree that sustainability advocates aren't paying attention to it. Aside from my own work, I regularly see nano-related/supportive material at major green sites like Treehugger. It may not be on mainstream green radar, but the people who are paying attention see its potential.

Kim, it may seem easier to understand when characterized that way, but it actually isn't the case. I can call myself a typical Singularity advocate, and can easily tell you that if superintelligence were technologically impossible, I could think of numerous ways to encourage sustainability and pro-environmental strategies, for instance the expansion of solar power. In fact, I continue to advocate such strategies, even though I believe that the most leveraged way to help humanity is by directly pursuing the creation of superintelligence.

The tendency to compare pro-Rapture Christians and intelligence enhancement advocates is born of a lack of familiarity with the detailed proposals of the latter.

We are picking up our tools and getting to work - and sometimes the best way to really achieve something is to create better tools.

Jamais, there is little to worry about, because there are many millions of people advocating often-heard environmentalist strategies, and only a couple dozen advocating intelligence enhancement as a #1 priority. I find it somewhat flattering when people speak of Singularitarianism as if it were a large or particularly influential movement, but actually, it isn't! There is little need to warn against people advocating intelligence enhancement as the sole means of improving quality of life because these people are so few and far between anyway.

I think that wrapping up various unrelated thoughts in a package called "Singularity" may be unhelpful. Specifically, I and other so-called Singularity advocates are promoting research into intelligence as a physical and information-theoretic process of great value. If this research did successfully create an intelligence smarter than us in the same way that we are smarter than chimps, then the positive impact would be undeniable. I think the factor of disagreement is not that enhanced intelligence would be tremendously beneficial to the world, but the difficulty of achieving it anytime soon. Without knowledge of any detailed strategies to greater intelligence, it can seem like a long way off indeed.

Regarding environmentalist advocates of MNT, please give names. The only one I am aware of is my friend Michael Graham Richard.

Also, keep in mind that many Singularitarians are quite concerned about the prevention of existential risk in the pre-superintelligent era. In fact, we've taken the lead in this area. Both the Lifeboat Foundation and the Oxford Future of Humanity Institute were founded by Singularitarian individuals. So it is confusing to see accusations that Singularitarians are not concerned about implementing immediate strategies to ameliorate existential risk.

Pro MNT greens, a quick (and incomplete) list:

Joel Makower
Mike Millikin @ Green Car Congress
Jason Scorse @ Grist
Tim McGee @ Treehugger
Jeremy Elton Jacquot @ Treehugger

David Roberts @ Grist is sympathetic, but is careful to say that he doesn't know enough

And for what it's worth, I wasn't saying that all Singularity advocates say this -- but you have to admit that there are people out there in the movement that do.

And no, it's not (yet) a mainstream movement, but I find it's better to respond to these sorts of things early on than to wait until it's a 900-lb nanogorilla.

"We can't be bothered to actually save the world ourselves; only to invent someone who will."

The danger of proposing this as a solution (to anything) is that it tends to become an inducement to apathy. It's that apathy which has done more to create the messes we're in now than ignorance or lack of intelligence.

Singularitarians need to figure out a way to counteract this tendency //before// the singularity, in such a way as to get everyone involved in their causes. And they need to do this explicitly, in cooperation with and endorsement of non-singularity-based solutions, rather than in competition with them.

Michael:
"I think the factor of disagreement is not that enhanced intelligence would be tremendously beneficial to the world, but the difficulty of achieving it anytime soon."

Actually, the question of to whom the benefits of superintelligence would accrue IS the central factor of contention. At least, that's what I'm most concerned about. The intelligence of Homo sapiens didn't do much for chimps or Homo habilis.

At the same time, I have little tolerance for people who say that there's no reason to work on any other pathways, because advanced technologies will inevitably offer much better choices, and all we need to do is sit back and wait...

Jamais, I sympathize completely. When I used to chat on transhumanist lists, I tried to promote green technology as a central part of transhumanist technology. In discussions about the climate crisis, I was inevitably met with suggestions that nanotechnology could be used to precipitate CO2 out of the air, or some other unknown technology could get the job done. I soon realized that these folks had an unjustified over-optimism in the power of technology to solve all our problems. I wondered, why on earth should we put all our faith in technology that doesn't exist yet, that may not exist in time, and that may not be able to get the job done for unforeseen reasons, when there are ways to reduce CO2 emissions already?

The history of technology has been one of trade-offs. Typewriters and keyboards made writing faster, but increased the incidence of carpal tunnel syndrome. Better agriculture reduced malnutrition and hunger, but increased obesity, heart disease, and diabetes. Automobiles improved transportation, but account for far more deaths than horse accidents ever did. Based on history, we can expect that powerful new technologies will be powerfully dangerous in unknown ways.

Michael points out that a super-AGI will be better able to solve problems, which is true. But the AGI itself will always be its own most difficult problem, so it will always run the risk of making a catastrophic error during recursive self-improvement. All it takes is a long enough time scale. I've never seen this objection addressed by Eliezer or anyone related to SIAI or the Singularity movement, even though I've addressed it to Eliezer directly on SL4 (he simply didn't respond to the post). I don't care how smart something is, NOTHING is infallible. Any intelligent system has a nonzero probability of making an error. That will always leave us open to catastrophe, no matter how well the AGI is designed.
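To put toy numbers on that last point (the error rate and decision rate below are invented purely to illustrate how small per-decision risks compound over a long enough horizon):

# Toy calculation, not a claim about any real system: both numbers are made up.
p_error = 1e-9              # assumed chance of a catastrophic error per decision
decisions_per_year = 1e12   # assumed decision rate for a very fast AGI
years = 100

n = decisions_per_year * years
p_at_least_one_error = 1 - (1 - p_error) ** n
print(p_at_least_one_error)  # ~1.0: near-certain over a long enough time scale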

As for the climate crisis, or any other problem, putting your faith in nonexistent technologies, especially when real solutions exist, is idiotic.

Jamais and Nato, maybe it's important to distinguish between two different types of "Singularitarians" - people who just read Kurzweil's books, and people who read Eliezer's work and support the SIAI. The former may in fact be passive, but the latter - the group most prominently writing and reading in this corner of the blogosphere - has taken a very strong activist stance since day one. Rather than telling you about the power of superintelligence, we would actually prefer to *show* you, as soon as technologically and theoretically feasible.

Most of us believe, based on cogsci arguments, that ~human AI would be likely to bootstrap to superhuman AI quite quickly. BUT the route from here to ~human AI is anything but easy; in fact, it's arguably a veritable Mt. Everest. So I would humbly ask for respect for those of us shouldering this massive undertaking, if only for the sheer challenge of it.

Nato, again, from the beginning, SIAI has been extremely concerned about the issue of distribution of benefits. In accordance with Nick Bostrom's Maxipok principle, it seems to us that we should be working towards a satisfactory outcome for everyone, rather than arguing over highly detailed specifics of distribution. If one is concerned about how the benefits of superintelligence would be distributed, then one would be advised to have greater involvement in the Singularity movement, not less.

Regarding Singularitarianism's interaction with non-AI or non-IA causes, this is a matter for individual Singularitarians to decide. Maybe the Singularity really is a tremendously well-leveraged effort, with a better return on investment than proposed alternatives for the betterment of human life. If so, exclusive focus would be justified. The thing is, we won't know for sure until it's done. Ultimately, my life is my own and I wish that others would respect me (and other Singularitarians) for our activist choices and realize that our overriding motivation is a better world for all, not fear of death or yearning for an escape or whatever perverse motivations are unfairly projected upon us. Our reasoning regarding the power of intelligence comes from the conclusions we draw from scientific facts, not some deep-seated yearning for a god to save us.

Transformative proposals like the creation of recursively self-improving AI are viewed with much defensiveness. But as I've noted before, advocates of MNT are just as easy targets for the same type of criticism.

Martin, Eliezer has been addressing the issue of "what if something goes wrong?" for upwards of 7 years. In fact, the entire "Creating Friendly AI" work asks not how FAI would be ensured if everything goes right, but if everything goes wrong. From chatting with Eliezer for quite a few years now it is quite clear that he makes a big deal about considering the risk of catastrophic errors, in fact it is one of the primary platforms of SIAI. This is publicly obvious, so it's disingenuous to suggest otherwise.

Your remarks regarding infallibility apply equally to intelligence enhanced humans, either individuals or networks thereof.

Inventors, venture capitalists, politicians, and technologists of all stripes routinely take into account "non-existent" (near future) potential technologies in their planning. Rather than being idiotic, this is routine and is called "foresight". Sometimes the superficial grandiosity of MNT claims makes it seem as if its proponents think that all of its beneficial applications are foregone conclusions, but in reality, most are just as uncertain and skeptical as anyone. Grandiosity is easily confused with certainty, but the two are actually orthogonal, two separate axes in the belief-space.

I think the specifics of what is included in the Singularity (significantly greater-than-human intelligence) can happen. However, I think it is specific high-impact (accelerating) technology and technology convergence opportunities that need to be understood and exploited.

I view this as planning in a world of accelerating technology. Understand exactly which technologies are accelerating and where the convergence opportunities are. Make a system that allows convergence opportunities and acceleration to happen.

A portfolio of multiple investments in various solutions should be applied within an innovation-encouraging system.

This is similar to what Google does. It encourages employees to devote 20% of their time to "blue sky" projects; 80% is still focused on the main business.

Similarly, for sustainability solutions you could put 80% into proven mainstream solutions but devote 20% to game-changing options.

It is taking modern investment theory and applying it to solutions budgeting. We used to have portfolios with stocks, bonds, and T-bills. However, new portfolios include 5-10% for alternative investments in hedge funds, commodities, and venture capital.
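A toy sketch of that kind of allocation (the budget total and the percentages below are arbitrary, just to make the 80/20 split concrete):

# Hypothetical solutions-budget allocation, modeled on the 80/20 idea above.
budget = 100_000_000  # made-up total funds

allocation = {
    "proven mainstream solutions": 0.80,  # e.g. efficiency, solar, grid upgrades
    "game-changing options": 0.20,        # e.g. advanced nanotech, prizes, wildcards
}

for name, share in allocation.items():
    print(f"{name}: ${budget * share:,.0f}")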

Also, if public funds had tighter accountability and a more results-rewarding component (prizes for goals achieved), then more efficiency could be wrung out of the resources. Pork and waste sustain nothing useful.

Also, if we had some technology and business generalists (literate in physics, chemistry, economics, and marketing) who took the time to take a broader view of possible solutions and then dig into them, then more low-hanging fruit and new approaches could be assessed and given higher priority.

Break up the transportation problem: not just electric cars and plug-in hybrids, but networked cruise control with sensors for platooning cars to allow safe drafting. That means quick gains in fuel efficiency and reduced congestion, and it plays to the strength of Moore's law. There are already some pilot programs. Introduce it with buses and freight trucks first.

Look at staged development towards dual-mode (part car/part train) vehicles. Cars that run on guide rails can more quickly become all-electric.

Projects that can lead to more advanced nanotechnology capabilities (more things like the UK Ideas Factory and the Robert Freitas/Ralph Merkle work) would be cheap bets that could provide benefits.

A better and clearer overall view of where, specifically, advanced technology is working, and of its real rate of progress, is vital; using that view to inform development strategies would produce faster results.

For nanotechnology, it is pointless to keep debating the feasibility of Engines of Creation-style capabilities, which is still a primary focus. We have 20 years of actual work: real developments in nanomaterials, DNA nanotech, and synthetic biology, plus more computational chemistry work. Those results should be informing the path forward. Actual goals for power density, conversion, and systemic cost reduction should be analyzed.

For AI and AGI, look at the multi-million neuron simulations and projects.

For intelligence and productivity enhancement, look at how Google, Wikipedia, and online science, environmental, and economic research papers can create more informed solution development. There is no reason that individuals and groups cannot develop far higher-quality solutions and proposals. There is no reason not to know enough.

Michael:
I'm not personally differing about the feasibility or potentially large impact of RSIAGI. It sounds more than plausible, and it deserves people like you and SIAI working on it.

The concern is the observable tendency of singularitarian rhetoric to lead people to put all their eggs in one basket.

It's good that you recognize the unknown, wildcard nature of the outcome.

If you or others are keen to dedicate your entire attention to the FRSIAGI problem, that's fine. All I would ask is that you take extra care to make it clear that your exclusive attention doesn't mean other approaches should be ignored, as a counter to the audience-biasing tendency that RSIAGI's claims invoke.

I agree that the potentially high impacts of MNT can inspire monomaniacal messianism; that's why we stress, as Jamais does here, or as Mike Treder and Chris Phoenix at CRN do on their blog, that there are a variety of potential solutions to problems impacted by emerging technologies of every kind. You object to "patchwork solutions", but what we're trying to say is that RSIAGI is one of those patches.

I respect you just fine, Michael. It's OK.

All we're asking is that other people who have freely chosen to focus on other potential solutions to problems in emerging tech be accorded the same respect and recognition you're asking for. We're on the same team.

Maybe we should say that the singularity //by itself// is not a //complete// sustainability strategy.

Nato Welch wrote (or quoted):

> We can't be bothered to actually save the world ourselves;
> only to invent someone who will.

Jamais Cascio wrote:

> I have little tolerance for people who say that there's no
> reason to work on any other pathways, because advanced technologies
> will inevitably offer much better choices, and all we need to
> do is sit back and wait.

"Narcissists feel trapped, shackled, and enslaved by the quotidian,
by the repetitive tasks that are inevitably involved in fulfilling
one's assignments. They hate the methodical, step-by-step, long-term
approach. Possessed of magical thinking, they'd rather wait for
miracles to happen."
http://health.groups.yahoo.com/group/narcissisticabuse/message/5036

And speaking of artificial intelligence, super or otherwise,
check out the Blue Brain Project FAQ:
http://bluebrain.epfl.ch/page18924.html
This is "real" science, but if you made some of these points on
the usual transhumanist mailing lists, you'd be shouted down.

Of course, there's the (dwindling) hope that some trick will
produce the intelligence without requiring anything as complex
as the biology -- the analogy frequently (and hollowly) made
is that you don't need to know how to make a bird to be able
to make an airliner. All we need is to find (and listen to)
the Wright Brothers of AI!

Then again, there's flight and then there's Flight, a contrast
sharply present in a photo I took at the beach last week.
Guess which "machine" I think gets the capital F -- the framing
gives a hint. On the other hand, "horses for courses", as
the British say.
http://www.flickr.com/photos/9986635@N05/788921171/


But still, planes fly.

> But still, planes fly.

Yes, and seagulls can't carry 500 passengers from New York
to Paris (not even in Middle-earth ;-> ).

So what are the basic tricks of human-engineered flight?
The airfoil, for lift; something to provide propulsion
through the air -- the propeller (by analogy to a ship's
screw) and later the jet engine; and a means of
steering the thing. The Wrights got the first one;
it was up to the French to figure out the rest of it
(hence terms like "fuselage" and "aileron").

I can understand why folks getting a first taste of
digital computers back in the 50's would've expected
them to lead quickly to AI, but more than a half-century
of disappointment has rather blunted those
expectations.

Michael Anissimov wrote:

> Ultimately, my life is my own and I wish that others would respect me
> (and other Singularitarians) for our activist choices. . .

No can do. This plea sounds like Tom Cruise remonstrating with
Matt Lauer that critics of Scientology are exhibiting the same
simple religious intolerance that anti-Semites exhibit toward
Jews.

You guys are out there in the world, touting your wares, using
the leverage provided by the Internet the same way an earlier
generation of lay preachers used the medium of TV.

And you're hankering after a big windfall of money. Maybe
from Larry Ellison. Or somebody equally billionairish.

> . . .and realize that our overriding motivation is a better world
> for all. . .

That may really be **your** motivation (or at least part of it);
I don't know you well enough to say.

I **can** say that, in the case of some others, "a better world
for all" is only window-dressing (not consciously so, perhaps,
but window-dressing nevertheless) for some pretty nasty
(and unexamined) stuff.

> . . .not fear of death or yearning for an escape. . .

Psychologically and historically implausible.

> . . .or whatever perverse motivations are unfairly
> projected upon us.

Not unfairly. Not "projected".

Hey, hey, let's try not to make this personal. I appreciate the passionate argumentation, but only when it's about ideas, not about the people arguing.

No more insults, or I'll have to turn this car around.

Michael Anissimov wrote:

> Just as it would be silly to think that Homo erectus has
> equivalent problem-solving capability and wisdom as Homo sapiens,
> we should recognize that Homo sapiens is not the smartest
> theoretically possible species.

Indeed. Carry that line of thought a bit further, in fact,
and you can push not only Homo sapiens,
but the whole anthropomorphic notion of "smart", out
of center stage.

In an interesting article entitled "Homosexuals, cryonics,
and the 'natural order'"
http://www.cryonet.org/cgi-bin/dsp.cgi?msg=16562
Mike Darwin (a.k.a. Mike Federowicz) wrote:
"Nature just doesn't give a damn, to be blunt. The dice
of genetic and thus phenotypic variation are constantly being
rolled and the outcomes tested against an equally dynamic
and changing environment. It doesn't take great brains to
realize that we have the wild bestiary of extinct animals in the
fossil record because the environment changed. The dinosaurs
were the dominant large life form on this planet immensely
longer than humans have existed, and for that matter, than
for the length of time mammals have been so abundant.
By historical standards the jury hasn't even convened on the
utility of the 'brains' experiment."

Back in '82 (I think) Bruce Sterling wrote a wry SF story
on the subject of "the utility of the 'brains' experiment"
entitled "Swarm", in which an alien gets to say:

"'I am the Swarm. That is, I am one of its castes. I am
a tool, an adaptation; my specialty is intelligence.
I am not often needed...

[T]his is one of those uncomfortable periods when galactic
intelligence is rife. Intelligence is a great bother. It makes
all kinds of trouble for us...

You are a young race and lay great stock by your own
cleverness,' Swarm said, 'As usual, you fail to see
that intelligence is not a survival trait...

This urge to expand, to explore, to develop, is just
what will make you extinct. You naively suppose that you
can continue to feed your curiosity indefinitely. It
is an old story, pursued by countless races before you..."

This is delicious contrarianism (a Sterling specialty), but
it's not inconceivable that the Homo sapiens experiment in
intelligence **is** a dead end, and that some other packaging
and motivational underpinning of intelligence (I'm thinking
biology here) will prove more robust.

There was a series on cable TV a few years ago which caught my
eye (and which I've since bought on DVD for friends' kids)
called _The Future is Wild_.
http://en.wikipedia.org/wiki/The_Future_is_Wild
http://www.amazon.com/Future-Wild-Phillip-Currie/dp/B0000YEDYU
It has photo-realistic CGI creatures similar to those in
_Walking with Dinosaurs_, but instead of depicting the (speculative)
past, it projects into the (speculative) future, showing a hypothetical
Earth 5 million, 100 million, and 200 million years from
now. Even in the first epoch, Homo sapiens is long gone --
something which the viewing audience (including my friends'
nine and ten-year-old boys) is expected to take with
equanimity (giving the lie to that tired old transhumanist
claim that only **they** have the intellectual fortitude
to contemplate a radical change in the human condition).
The series ends with a glimpse of some possible forbears of
a post-human sentient species -- tree-dwelling, tentacled
land cephalopods living in the jungles of the continent
of "Pangaea II", referred to in the narration as
"Squibbons".

So what's the point of bringing this up? Only to illustrate
that there seems to be a peculiarly unscientific and
non-objective bias in **insisting** that the next sentient
species (if there's going to be one) **must** be contiguous with
or an outgrowth of the technology of our own. Or calling an
"existential crisis" the possibility
that that might not turn out to be the case.
By all means read Olaf Stapledon's _Last and First Men_ and
_Star Maker_ if you haven't already done so. Now **that's**
what I call "transhumanism".

Peak oil is not a sustainability strategy.

Rapture can bring us into a more complicated situation.
