Uncertainty, Complexity, and Taking Action (revisited)
I stumbled across this transcript of a talk I gave way back in late 2008, at the "Global Catastrophic Risks" conference. I was asked to provide some closing thoughts, based on what had gone before in the meeting, so it's more off-the-cuff than a prepared statement. The site hosting the transcript seems to have gone dark, though, so I wanted to make sure that it was preserved. There was some pretty decent thinking there -- apparently, I had a functioning brain back then.
Uncertainty, Complexity, and Taking Action
Jamais Cascio gave the closing talk at GCR08, a Mountain View conference on Global Catastrophic Risks. Titled “Uncertainty, Complexity and Taking Action,” the discussion focused on the challenges inherent in planning to prevent future disasters emerging as the result of global-scale change.
The following transcript of Jamais Cascio’s GCR08 presentation “Uncertainty, Complexity, and Taking Action” has been corrected and approved by the speaker. Video and audio are also available.
Anders Sandberg: Did you know that Nick [Bostrom] usually says that there have been more papers about the reproductive habits of dung beetles than about human extinction? I checked the number for him, and it’s about two orders of magnitude more papers.
Jamais Cascio: There is an interesting question there—why is that? Is it because human extinction is just too depressing? Is it because human extinction is unimaginable? There is so much uncertainty around these issues that we are encapsulating under “global catastrophic risk.”
There is an underlying question in all of this. Can we afford a catastrophe? I think the consensus answer and the reason we are here is that we can’t. If we can’t afford a catastrophe, or a series of catastrophes, the question then is, what do we do that won’t increase the likelihood of catastrophe? That actually is a hard question to answer. We have heard a number of different potential solutions—everything from global governance in some confederated form to very active businesses. We didn’t quite get the hardcore libertarian position today—that’s not a surprise at an IEET meeting—and I’m not complaining. We have a variety of answers that haven’t satisfied.
I think it really comes down to unintended consequences. We recognize that these are complex fucking systems. Pardon my language about using “complex,” but these are incredibly difficult, twisty passages all leading to being devoured by a grue. This is a global environment in which simple answers are not just limited, they are usually dangerous. Yet, simple answers are what our current institutions tend to come up with—that’s a problem.
One way this problem manifests is with silo thinking. This notion of “I’m going to focus on this particular kind of risk, this particular kind of technology, and don’t talk to me about anything else.” That is a dangerous thought, not in the politically incorrect sense, but in the sense that the kinds of solutions that you might develop in response to that kind of silo thinking are likely to be counterproductive when applied to the real world, which is, as you recall, a complex fucking system.
There is also, as you’ve noticed here, an assumption of efficiency. I mean by that an assumption that all of these things work. That is not necessarily a good assumption to make. We are going to have a lot of dead ends with these technologies. Those dead ends, in and of themselves, may be dangerous, but the assumption that all the pieces work together and that we can get the global weather system up and running in less than a week…
With a sufficiently advanced, tested, reliable system, no doubt. If we are in that kind of world of global competition where I have to get this up before the Chinese do, we’re not going to spend a lot of time testing the system. I’m not going to be doing all the various kinds of safety checks and longitudinal testing to make sure the whole thing is going to work as a complex fucking system. There is an assumption that all of these things are going to work just fine, when in actuality: one, they may not—they may just fall flat. Two, the kinds of failure states that emerge may end up being even worse, or at least nastier in a previously unpredictable way, than what you thought you were confronting with this new system/technology/behavior, etc.
This is where I come back to this notion of unintended consequences—uncertainty. Everything that we need to do when looking at global catastrophic risks has to come back to developing a capacity to respond effectively to global complex uncertainty. That’s not an easy thing. I’m not standing up here and saying all we need is to get a grant request going and we’ll be fine.
Contrary to what George was saying about the catastrophes being the focus, it may be the uncertainty that ends up being the defining focus of politics in the 21st century. I wrote recently on the difference between long-run and long-lag. We are kind of used to thinking about long-run problems: we know this thing is going to hit us in fifty years, and we’ll wait a bit because we will have developed better systems by the time it hits. We are not so good at thinking about long-lag systems: it’s going to hit us in fifty years, but the causes and proximate sources are actually right now, and if we don’t make a change right now, that fifty-years-out impact is going to hit us regardless.
Climate is kind of the big example of that. Things like ocean thermal inertia, carbon commitment, all of these kinds of fiddly forces make it so that the big impacts of climate change may not hit us for another thirty years, but we’d damn well better do something now because we can’t wait thirty years. With ocean thermal inertia, there are actually two decades of warming guaranteed, no matter what we do. We could stop putting out any carbon right this very second and we would still have two more decades of warming, probably another good degree to a degree and a half centigrade.
That’s scary, because we are already close to a tipping point. We’re not really good at thinking about long-lag problems. We are not really good at thinking about some of these complex systems, so we need to develop better institutions for doing that. Those institutions may be narrow—transnational coordinating institutions focusing on asteroids or geoengineering. They may end up being a good initial step, the training wheels, for bigger-picture transnational cooperation.
We might start thinking about that transnational cooperation not in terms of states, but in terms of communities. I mentioned in response to George earlier that a lot of the super-powered angry individuals, terrorist groups, and so on in the modern world actually tend to come not from anarchic states or economically dislocated areas but from community-dislocated areas. Rethinking the notion of non-geographic community—“translocal community” is a term we are starting to use at the Institute for the Future—ends up requiring a different model of governance.
You talk about getting away from wars and thinking about police actions, but police actions are 20th century… so very twen-cen. Thomas Barnett, a military thinker, has a concept that I think works reasonably well as a jumping-off point. He talks about combined military-civilian intervention groups as sys admin forces—system administration forces. I’m kind of a geek at heart, so I appreciate it in that regard, but also for the notion that these kinds of groups go in, not to police or enforce, but to administrate the complex fucking system.
Hughes: I’M IN UR CAPITAL, REBOOTING UR GOVERNMENT?
Cascio: Exactly.
One last question that I think plays into all of this popped into my mind during Alan’s talk. I’m not asking this because I know the answer ahead of time—I’m actually curious. When have we managed to construct speculative regulation? That is, regulatory rules that are aimed at developments that have not yet manifested. We know this technology is coming, so let’s make the rules now and get them all working before the problem hits. Have we managed to do that? Because if so, that then becomes a really useful model for dealing with some of these big catastrophic risks.
Goldstein: The first Asilomar Conference on Recombinant DNA.
Cascio: Were the proposals coming out of Asilomar ever actually turned into regulatory rules?
Hughes: No, they were voluntary.
Cascio: I’m not trying to dismiss that. What would be a Bretton Woods, not around the economy but around technology? Technology is political behavior. Technology is social. We can talk about all of the wonderful gadgets, all of the wonderful prizes and powers, but ultimately the choices that we make around those technologies (what to create, what to deploy, how those deployments manifest, what kinds of capacities we add to the technologies) are political decisions.
The more that we try to divorce technology from politics, the more we try to say that technology is neutral, the more we run the risk of falling into the trap of unintended consequences. No one here did today, but it’s not hard to find people who talk about technology as neutral. I think that is a common response in the broader Western discourse.
I want to finish my observations here by saying that ultimately the choices that we make in thinking about these technologies, these choices matter. We can’t let ourselves slip into the pretense that we are just playing with ourselves socially. We are actually making choices that could decide the fate of billions of people. That’s a heavy responsibility, but this is a pretty good group of people to start on that.