
Monthly Archives

April 30, 2007

Responsibility to the Future

It's a troubling sign of the modern political culture that being repeatedly and horrifically wrong about important subjects doesn't seem to make one less popular as an advisor. In fields where the subjects of professional analysis are granular and readily quantifiable, making crucial mistakes over and over again is a clear pathway to unemployment. Yet when the subjects are expansive and globally important, such as the politics of war or climate, repeated errors apparently aren't worth notice.

If these mistakes were simply signs of professional buffoonery, they'd be annoying but not worth comment. But these errors in analysis come from people with a great deal of influence over both policy-makers and semi-informed voters. Moreover, they focus on a subject that I follow closely: foresight.

In the era leading up to the current war in Iraq, the United States (as well as other nations) heard a cacophony of assertions about the inevitable results of the war. Some of these assertions were on-target; some were wildly (and tragically) off-base. Sadly, the voices that were most wrong still regularly appear on television news, on the editorial pages, and featured prominently in talk radio. The voices that were the most right, conversely, remain more-or-less invisible in the popular media.

So, following a blog trend, let me just say: What Digby (or, in this case, Tristero) Said.

It's high time that those who were right all along about Iraq have a significant national voice. [...] Whether or not [the high-profile pundits like William Kristol, Ken Pollack, and Tom Friedman] now recognize they were wrong, the fact is that they were when it counted most. Time to listen to those who got it right from the start.

I've written before about the responsibility that ethical futurists have to the future.

...the first duty of an ethical futurist is to act in the interests of the stakeholders yet to come -- those who would suffer harm in the future from choices made in the present. [...] Futurists, as those people who have chosen to become navigators for society -- responsible for watching the path ahead -- have a particular responsibility to safeguard that path, and to ensure that the people making strategic choices about actions and policies have the opportunity to do so wisely.

Implicit in this responsibility is the necessity of admission when one's analysis was wrong. But missing from this is the parallel admonition to the people and organizations that listen to foresight analysis: if your chosen "navigators for society" repeatedly run you into rocks, yet repeatedly deny having done so, you have a particular responsibility to stop listening to them.

April 26, 2007

Cheeseburgers: In Florida; On the BBC

It's tempting to just stick a cheeseburger in the Open the Future logo.

The Cheeseburger Steamroller continues its mighty advance, with two significant media hits this week: in the Florida Times-Union, out of Jacksonville; and on the BBC radio program, The World Today (RealAudio format).

For me, the BBC hit is especially cool, since I have enormous respect for the BBC World Service and have been a listener for over 15 years. It's also pretty nifty that this isn't a drive-by comment on an otherwise tangential report, but an entire three-minute segment devoted to the cheeseburger footprint, with multiple appearances by yours truly. Reporter James Fletcher is doing a series on carbon footprints, and has so far covered transportation and offsets; with the current piece, he looks at food.

Fletcher's Footprints (RealAudio format)

The Florida Times-Union article is the first print appearance of the cheeseburger footprint, and they do an admirable job of assembling the information -- including showing the calculations for energy and carbon broken down in detail. The two quotes from me are decent selections, although I wish they'd also used one of the bits about the gravity of the carbon footprint.
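A footprint tally of this sort is just a sum over life-cycle components. Here's a minimal sketch of the structure of such a calculation; every component value below is a hypothetical placeholder, not a figure from the article or from the original cheeseburger analysis:

```python
# Illustrative per-burger carbon footprint tally. All component values
# are hypothetical placeholders, not the article's actual figures.

COMPONENTS_KG_CO2E = {
    "beef production": 1.50,               # hypothetical
    "cheese and bun": 0.30,                # hypothetical
    "cooking energy": 0.20,                # hypothetical
    "transport and refrigeration": 0.10,   # hypothetical
}

def burger_footprint(components):
    """Sum per-component emissions (kg CO2e) into a single footprint."""
    return sum(components.values())

def annual_footprint(per_burger_kg, burgers_per_year):
    """Scale a single-burger footprint to a yearly total."""
    return per_burger_kg * burgers_per_year

if __name__ == "__main__":
    per_burger = burger_footprint(COMPONENTS_KG_CO2E)
    print(f"{per_burger:.2f} kg CO2e per burger")
    print(f"{annual_footprint(per_burger, 150):.1f} kg CO2e/year at 150 burgers")
```

The point of breaking the sum out per component, as the article does, is that it shows where the footprint actually lives: change one dictionary entry and the total responds.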

Do you know your burger's carbon footprint?

(Click on the image for the link to the full-size graphic, made by Grace Hung at Cornell's Nutritional Science department. Great work, Grace!)

April 25, 2007

Micro-Offsets

The argument over the utility of carbon offsets isn't settled (although the various argumentative parties have since wandered off to other subjects), but my take is that, on balance, they do a little bit of good and -- more importantly -- train people to think in terms of the carbon footprints of their actions. Most of the offset providers, however, deal with the big impacts: a year's worth of energy, a trans-Atlantic flight, that sort of thing. But what about smaller purchases? How hard would it be to provide carbon offsets for one's everyday life?

Think of them as micro-offsets, in parallel to the various micro-finance projects underway across the developing world.

Micro-offsets offer a chance to do something big by doing something very small. Imagine a small amount -- say twenty-five cents to a dollar -- that could be added to the price of commonplace consumer products and services, from cheeseburgers to phone bills, in order to pay for a carbon offset. Importantly, such fees would be optional, but even a small amount would likely pay for more carbon than was actually produced by the product or service. The multitude of tiny amounts, added together, would be used to purchase traditional carbon credits, probably by an offset aggregator.
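The aggregator's job is, at bottom, simple arithmetic: pool the tiny fees, divide by the price of a carbon credit. Here's a minimal sketch; the carbon price, the fee, and the per-item footprint are all illustrative assumptions, not real market figures:

```python
# Sketch of a micro-offset aggregator: many tiny optional fees pooled
# into bulk carbon-credit purchases. The carbon price, fee, and per-item
# footprint below are illustrative assumptions, not real figures.

CARBON_PRICE_PER_TONNE = 10.00  # hypothetical offset price, USD per tonne CO2e

def tonnes_offset(fees):
    """Convert a pool of collected micro-fees into tonnes of CO2e offset."""
    return sum(fees) / CARBON_PRICE_PER_TONNE

def overshoot_ratio(fee, item_footprint_kg):
    """How many times an item's own footprint a single fee covers."""
    kg_covered = fee / CARBON_PRICE_PER_TONNE * 1000.0
    return kg_covered / item_footprint_kg

if __name__ == "__main__":
    # 10,000 customers each ticking a $0.25 box:
    print(f"{tonnes_offset([0.25] * 10_000):.1f} tonnes CO2e purchased")
    # One $0.25 fee against a hypothetical 3 kg CO2e cheeseburger:
    print(f"{overshoot_ratio(0.25, 3.0):.1f}x the item's own footprint")
```

Under these assumed numbers, even a quarter buys several times the carbon actually embodied in the item, which is the "small fee over-pays for the footprint" property described above.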

This idea is starting to appear in various forms. Chris Messina, for example, talks about "Carbon Offsetting Web 2.0," optional fees for web hosts and other web service providers, tacked on to (over-)pay for the carbon footprint of the energy used for the servers & such. Messina's idea demonstrates the utility of the micro-offset concept: by adding on to existing subscription services, consumers don't need to pay yet another bill; the price is low enough (Messina suggests around $1/month) that it's well within what a growing number of people would feel guilty enough to pay; and the offsets purchased by these fees would be enough to cover not just the server footprints, but ancillary carbon costs for the service providers. Imagine if Second Life -- or World of Warcraft -- made this kind of offer.

It's likely that online shopping systems of all sorts, not just subscriptions, would be simple platforms for micro-offsets; imagine every purchase at Amazon or eBay including a checkbox that reads, "Add $.50 to my bill for carbon offsets."

But if you're going to be tagging an extra amount onto the price of something to pay for its carbon, why not just institute a carbon tax?

Because no matter how useful such a scheme would be, there are few politicians willing today to stand up and say, "I want to impose a carbon tax!" The political will simply isn't there. That doesn't mean the social will isn't, however. We've seen plenty of examples of companies, communities, and individuals making decisions about environmental sustainability that go far beyond what Washington DC has been willing to do. Similarly, the existence -- and, hopefully, success -- of micro-offsets would be an existence proof that people are, in fact, willing to pay a bit more to shrink their carbon footprints.

Micro-offsets offer a bottom-up way to get people to pay more attention to their greenhouse gas impacts, and not just in the obvious, marketing-friendly shape of hybrid cars and compact-fluorescent bulbs.

The main downside of the basic micro-offset concept is that the offset cost is not tied to actual carbon impact; as a result, they don't provide the information transparency that would allow people to easily change behaviors. There's no reason why micro-offsets couldn't be linked to carbon impacts, of course -- once we get carbon labels that let people make informed choices. In the meantime, basic micro-offsets could still provide value. As they became more common, they would lead to both a small overall carbon footprint reduction, and a large step towards a carbon-conscious society.

Don't Pack Your Bags Just Yet

You can't swing a dead cat-5 cable on the Interwebs today without running across a link to the "new Earth" discovered around a red dwarf star called Gliese 581, about 20 light years away from Earth (not just in our back yard, but -- relatively speaking -- right behind us, reading over our shoulder, breathing stale dorito breath in our face). Let's guess the order of the blog storm: first, someone will say "it's a habitable planet!"; then, someone will say "we should move there!"; then, someone will say "you just want to trash the Earth and leave it like a cheap rental!"; finally, someone will say "hold on, folks, all they found was a planet that's likely to be 'rocky' (instead of gaseous like Jupiter) in an orbit that would allow water to remain water. That's it. No signs of water, no actual proof of habitability, certainly no signs of life. Calm down."

So, jumping to the conclusion: calm down.

This is cool news, to be sure, but really only from the perspective that it supports the argument that the preconditions for Life As We Know It -- i.e., water and a stable orbit around a reasonably long-lived star -- appear to be, in fact, about as commonplace as had been conjectured.

But don't worry about Earth-haters moving there. By the time we have the technology that would make a 20 light year trip even remotely plausible (the fastest space craft yet made would still take thousands of years to get there), we probably won't be all that interested in living in a watery gravity hole anyway. Nope -- give us some nice, massive gas giants to convert to computronium!

April 24, 2007

The Early Signs of the Long Tomorrow

(Or "I, for one, welcome our new cyber-mouse overlords!")

Ahoy, BoingBoing readers! I was going to update this anyway, but with the BB link, it's extra-important: this is a simulation of a cortical network with the size, link complexity and signal activity of a mouse brain, but without the structure -- so, arguably, it isn't really a simulated mouse brain, but a functional platform upon which a mouse brain sim could run. Depending upon your perspective, this is either a minor quibble or makes all the difference.

It's hard to see this as anything but a distant early warning of some pretty remarkable changes on the near horizon. IBM researchers James Frye, Rajagopal Ananthanarayanan, and Dharmendra S. Modha assembled a simulated mouse cortical hemisphere (that is, a functional half of a mouse brain) on one of the smaller BlueGene/L supercomputers. They then ran the simulation -- at ten seconds of computer processing equal to one second of brain function.

In other words: they ran a simulated mouse brain at 1/10 time.

Neurobiologically realistic, large-scale cortical and sub-cortical simulations are bound to play a key role in computational neuroscience and its applications to cognitive computing. One hemisphere of the mouse cortex has roughly 8,000,000 neurons and 8,000 synapses per neuron. Modeling at this scale imposes tremendous constraints on computation, communication, and memory capacity of any computing platform.

We have designed and implemented a massively parallel cortical simulator with (a) phenomenological spiking neuron models; (b) spike-timing dependent plasticity; and (c) axonal delays.

We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a synthetic pattern of neuronal interconnections, at a 1 ms resolution and an average firing rate of 1 Hz, we were able to run 1s of model time in 10s of real time!

The team published the write-up in the February 5, 2007, edition of Computer Science; a PDF of the one-page research report is available, providing a few technical details.

The human brain has some 100 billion neurons, so this mouse brain simulation is still about 1/12,500 of a simulated human brain. That may sound like a daunting challenge, until a glance at computer history makes clear that such computational capabilities will likely be possible within 20 years, if not sooner.
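The scaling arithmetic here follows directly from the figures quoted in the report (neuron and synapse counts, memory, and the 10x slowdown). A quick back-of-envelope check:

```python
# Back-of-envelope scaling from the BlueGene/L mouse-cortex run to a
# human-scale simulation, using only figures quoted in the report above.

MOUSE_NEURONS = 8_000_000            # one cortical hemisphere
SYNAPSES_PER_NEURON = 6_300          # as represented in memory
MEMORY_BYTES = 10**12                # 1 TB of main memory
SLOWDOWN = 10                        # 10 s of real time per 1 s of model time
HUMAN_NEURONS = 100_000_000_000      # roughly 100 billion

def bytes_per_synapse():
    """Rough memory cost per modeled synapse in the published run."""
    return MEMORY_BYTES / (MOUSE_NEURONS * SYNAPSES_PER_NEURON)

def human_scale_factor():
    """Neuron-count gap between this run and a whole human brain."""
    return HUMAN_NEURONS / MOUSE_NEURONS

if __name__ == "__main__":
    print(f"~{bytes_per_synapse():.0f} bytes per synapse")
    print(f"1/{human_scale_factor():,.0f} of a human brain by neuron count")
    print(f"{SLOWDOWN}x slower than real time")
```

Roughly 20 bytes per synapse and a factor of 12,500 in neuron count: the gap is large, but it's the kind of gap that steady hardware growth has closed before.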

But well before that point, we'll be able to run simulations of animal brains at accelerated speeds, raising a provocative test of just how important raw cognitive speed is to the emergence of artificial intelligence. Would an accelerated mouse brain simulation simply be a fast-calculating mouse, or would it have other properties and capabilities deriving from the sheer speed? Which would be smarter -- a 6,000X faster mouse brain sim, or a 1/2-speed human brain sim?

Some of that is going to depend upon how much of the simulation models actual brain structure, rather than simply the number of connections. That's likely to be crucial. The brain isn't simply a haphazard mass of neural junctions, and a functional structure simulation may well prove to be a far greater challenge than simply getting the neural connection sim working. Still, this is not an unsolvable problem, by any means.

But this raises the question of what kinds of programming will be possible with these simulated brains. The IBM simulation simply showed that a functional simulation was possible; evidently, they didn't try to do anything with the cyber-mouse. It's not entirely clear what could be done with it. We're now on the brink of facing a question that had, in the past, been essentially the province of science fiction:

How does one program a simulated mind?


(Thanks to Miron for the tip!)

April 22, 2007

Earth Day Essay

I've gone ahead and contributed an essay to WorldChanging's "Earth Day" series, a brief set of scenarios based on the matrix shown above. It's very much a high-level view of potential Earth futures, and is meant more as a provocation to discussion than as a complete picture of Things To Come.

Here's a sample:

Geoengineering 101: Pass/Fail

2037: The Hephaestus 2 mission reported last week that it had managed to stabilize the wobble on the Mirror, but JustinNN.tv blurbed me a minute ago that New Tyndall Center is still showing temperature instabilities. According to Tyndall, that clinches it: we have another rogue at work. NATO ended the last one with extreme prejudice (as dramatized in last Summer's blockbuster, "Shutdown" -- I loved that Bruce Willis came out of retirement to play Gates), but this one's more subtle. My eyecrawl has some bluster from the SecGen now, saying that "this will not stand," blah blah blah. I just wish that these boy geniuses (and they're all guys, you ever notice that?) would put half as much time and effort into figuring out the Atlantic Seawall problem as they do these crazy-ass plans to fix the sky.

This is a world in which attempts to make the broad social and behavioral changes necessary to avoid climate disaster are generally too late and too limited, and the global environment starts to show early signs of collapse. The 2010s to early 2020s are characterized by millions of dead from extreme weather events, hundreds of millions of refugees, and a thousand or more coastal cities lost all over the globe. The continued trend of general technological acceleration gets diverted by 2020 into haphazard, massive projects to avert disaster. Few of these succeed -- serious climate problems hit too fast for the more responsible advocates of geoengineering to get beyond the "what if..." stage -- and the many that fail often do so in a spectacular (and legally actionable) fashion. Those that do work serve mainly to keep the Earth poised on the brink: bioengineered plants that consume enough extra CO2 and methane to keep the atmosphere stable; a very slow project to reduce the acidity of the oceans; and the Mirror, a solar shield thousands of miles in diameter at the Lagrange point between the Earth and the Sun, reducing incoming sunlight by 2% -- enough to start a gradual cooling trend.

I have to say, I really had to restrain myself from turning each of these into lengthy stories -- but the Earth Day essays all seemed relatively brief.

April 21, 2007

Saturday Topsight, April 21, 2007

• CardioBot: The Heartlander is an inch-long robot designed to crawl across the surface of a living, beating heart, in order to carry out various medical tasks. Inserting the Heartlander requires minimally-invasive surgery, potentially under local anesthetic (i.e., out-patient heart surgery!), as opposed to the current heart surgery paradigm, which relies on massive chest openings, lung deflation, and usually the stoppage of the heart. In pig tests, the Heartlander proved able to crawl across a living heart, delivering dye injections and inserting pacemaker leads at designated targets. Other potential uses include removing dead tissue and applying stem cell therapies.

It's not autonomous, and it's still fairly large, so science fiction musings about medical nanobots remain purely conjectural, but still: a crawling heart surgery robot!

• Open Source Success: Charles Babcock, in Information Week, offers up a nine-point checklist for the characteristics of a successful open source project. I've seen most of these before, in different fora, but this is a handy summation, and perfect for use as a filter for non-software open source concepts.

  • A thriving community -- A handful of lead developers, a large body of contributors, and a substantial--or at least motivated--user group offering ideas.
  • Disruptive goals -- Does something notably better than commercial code. Free isn't enough.
  • A benevolent dictator -- Leader who can inspire and guide developers, asking the right questions and letting only the right code in.
  • Transparency -- Decisions are made openly, with threads of discussion, active mailing list, and negative and positive comments aired.
  • Civility -- Strong forums police against personal attacks or niggling issues, focus on big goals.
  • Documentation -- What good's a project that can't be implemented by those outside its development?
  • Employed developers -- The key developers need to work on it full time.
  • A clear license -- Some are very business friendly, others clear as mud.
  • Commercial support -- Companies need more than e-mail support from volunteers. Is there a solid company employing people you can call?
It wouldn't be hard to apply this list to non-software open source products, such as a potential open source nanotechnology scheme. The application to more abstract concepts, like IFTF favorite "The Open Economy," is less straightforward, and would likely require the rewriting of the final three elements. Say...

  • Dedicated Participants -- Leading developers/creators/citizens need to be fully-engaged with the open endeavor, not just part-timers.
  • Clear Connections -- The points of connection to systems and institutions outside of the open system should be transparently demarcated, so that all participants are aware of the implications of the interaction.
  • Persistent Reliability Matters -- Reputations are built on successful relationships, not just successful one-off encounters; participants engaged with the open system need to be confident that there are reliable resources to help them with problems they might encounter.

• A Killer Deal: Concept of the week: Assassination Markets, prediction markets wherein profits are made by knowing the date of a particular negative event, possibly (but not always) by being the entity that makes said negative event happen. The canonical example is a bet made on the date of the assassination of a given political figure by a person who then carries out that assassination as described; the possible real-world example is the conjecture that a variety of short-sell orders on airline stocks made just before 9/11 originated from terrorist groups that knew of the upcoming attack.

• A Singular Sensation: I was asked a while ago, but now it's public: I'm on the speaker list for the upcoming Singularity Summit II, taking place in San Francisco in early September. The ostensible topic of the event is the emergence of powerful artificial intelligence, but I'm not sure yet what I'd like to talk about regarding that subject. Perhaps different scenarios of emergence; perhaps something about responsibility; perhaps something about AI as a cultural augmentation.

Suggestions?

April 17, 2007

Ten-Year Forecast 2007

Probably not going to do much posting for the next few days; I'm at the Institute for the Future's Ten-Year Forecast 2007 conference, and won't be coming up for air until Friday.

April 15, 2007

Nitrogen Strategies

One of the projects I'm juggling right now is serving as facilitator and moderator of an online discussion board and wiki, run by the Monitor Institute for the Packard Foundation, on the subject of strategies for dealing with nitrogen pollution. For the first two weeks, participation was limited to a small group of academics and government specialists; the wiki has now been opened up to the Internet community at large. Here's the invitation:

The David and Lucile Packard Foundation invites you to be part of an online collaboration to create strategies for reducing nitrogen pollution. Please join at http://nitrogen.packard.org.

An increasingly dangerous threat to our environment and human health, nitrogen pollution is degrading water quality and coastal ecosystems, contributing to climate change and posing a variety of health risks. Despite its rapid growth and harmful consequences, the problem of nitrogen pollution has received relatively little attention, except in areas suffering the consequences. In response to this gap, the Packard Foundation is exploring opportunities for philanthropic investments to make a significant contribution to solutions.

Since the most robust strategies for addressing a problem as complex as nitrogen pollution cannot be developed by Packard alone, the Foundation has launched a public forum for collaboration. Everyone with an interest in reducing nitrogen pollution is invited to join and work together to create effective strategies for addressing this pressing problem.

The forum will be live and open to public participation through May 10th.

Packard will make the full product of this forum available to the Foundation's Trustees at its June Board meeting and the Foundation staff will use the product of the site in developing a recommended strategy for the Trustees to consider. Once the forum closes, the outcomes of this work will be available to the public, archived online and protected under a Creative Commons License.

Thank you in advance for participating in this important collaboration.

[Instructions for signing up in the extended entry]

Here are a couple of tips for getting registered and contributing to Nitrogen.packard.org:

• To register or sign in for Nitrogen.packard.org, click the "Sign In" link in the upper right corner of the screen. From the registration screen, enter your username and password, or click register to register a new username.
• Before you begin participating, introduce yourself to the community by clicking on the "Introduce Yourself" link on the left hand column of the home page. Once on the introductions page, click on the edit button in the upper right hand corner, and then add your introduction to the list.
• Now you're ready to participate! The items in the left-hand column of the home page are the different ways you can participate on the site. For instance, you can choose to edit the nitrogen/agriculture strategy by clicking on the wiki link, or you can discuss the strategies by clicking on the discussion link.
• We recommend you start by going to the wiki and reading through the strategies. Then go to the strategy that aligns with your own work, go to the bottom of that strategy, and add a paragraph describing the work that you already have underway under "Projects, Programs, and Organizations."
• Even more valuable, of course, would be for you to start revising the possible outcomes or strategies, or for you to add an entirely new strategy that you think would be effective.
• Finally, please contribute your thoughts to the discussion section, rate the impact and cost effectiveness of each strategy by taking the survey, and help expand and refine the stakeholder map.

If you experience any technical difficulties registering or using the site, please be sure to email or call Tech Support: Nitrogen@packard.org; tel: 1-650-917-7288.

Metaverse Roadmap Report

I'm more-or-less done now with my part of the report for the Metaverse Roadmap Project. Jerry Paffendorf and John Smart each wrote parts of the overall document, but my (very large) chunk is the set of scenarios describing four different manifestations of the Metaverse. Here's a taste, from the draft:

The Augmented Reality scenario offers a world in which every item within view has a potential information shadow, a history and presence accessible via standard interfaces. Most items that can change state (be turned on or off, change appearance, etc.) can be controlled via wireless networking, and many objects that today would be "dumb" matter will, in the Augmented Reality scenario, be interactive and controllable. To the generation brought up in an Augmented Reality world, the Metaverse—this ubiquitous cloud of information—is like electricity to children of the 20th century: essentially universal, expected and conspicuous only in its absence.

The four scenarios -- Augmented Reality, Lifelogging (hello, Participatory Panopticon!), Virtual Worlds, and Mirror World -- all reflect differing levels of emphasis on what I saw as the two primary spectra describing the evolution of this technology: augmentation versus simulation, and intimate technologies versus extimate technologies. Here's how they line up, in a graphic first shown at South by Southwest 07:

MVR-fourbox.jpg
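The two spectra can also be written as a simple lookup table. Note that the quadrant assignments below are my reading of the four-box graphic, so treat the mapping itself as an assumption rather than the report's own wording:

```python
# The report's two spectra expressed as a lookup table. The quadrant
# assignments here are a reading of the four-box graphic, and should be
# treated as an assumption rather than the report's own wording.

SCENARIOS = {
    ("augmentation", "extimate"): "Augmented Reality",
    ("augmentation", "intimate"): "Lifelogging",
    ("simulation", "intimate"):   "Virtual Worlds",
    ("simulation", "extimate"):   "Mirror World",
}

def scenario(technology_axis, focus_axis):
    """Look up the scenario quadrant for a point on the two spectra."""
    return SCENARIOS[(technology_axis, focus_axis)]

if __name__ == "__main__":
    # The Participatory Panopticon quadrant:
    print(scenario("augmentation", "intimate"))
```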

Daniel Terdiman, at C|Net, has already seen a rough draft of the document, and reported on it to his readers: "Meet the metaverse, your new digital home" offers a very simple overview of the argument, and gathers some responses from a few folks in the virtual worlds industry.

Terdiman and some of his commentators suggest that the stories in the report are somewhat conservative; my response is that they weren't reading closely enough. These are really quite radical futures, even if they remain grounded in the plausible. I suspect that most of the people who saw that draft of the report come from industries that expect hype, not analysis. Still, I expect that this is going to be the default reaction: the scenarios don't offer up The Matrix, so they're too conservative -- even though, if they had taken that path, the response would have been "this is impossible and/or silly."

A public version of the report should be out Real Soon Now.

April 13, 2007

The Sin of Worldbuilding

Forgive me, Warren, but I must disagree.

Every moment of a science fiction story must represent the triumph of writing over worldbuilding.

Worldbuilding is dull. Worldbuilding literalises the urge to invent. Worldbuilding gives an unnecessary permission for acts of writing (indeed, for acts of reading). Worldbuilding numbs the reader's ability to fulfil their part of the bargain, because it believes that it has to do everything around here if anything is going to get done.

Above all, worldbuilding is not technically necessary. It is the great clomping foot of nerdism. It is the attempt to exhaustively survey a place that isn't there. A good writer would never try to do that, even with a place that is there. It isn't possible, & if it was the results wouldn't be readable: they would constitute not a book but the biggest library ever built, a hallowed place of dedication & lifelong study. This gives us a clue to the psychological type of the worldbuilder & the worldbuilder's victim, & makes us very afraid.

See, what he misses here is that Worldbuilding is its own form of art, and very much its own kind of business. Worldbuilding is what I do in pretty much every gig for the Institute for the Future, for Global Business Network, for Monitor Institute, and for essentially every corporate, government, or non-profit client I've worked with over the last decade. That great clomping foot of nerdism is what the clients want to see, because they can then use that as a backdrop for their own stories about their organizations.

The art of Worldbuilding comes from knowing what to omit, from knowing what needs to be surveyed and what can be tacked up as a Potemkin Future. It becomes an intensely detailed game, figuring out what the readers want to know, covering what they need to know, teasing them with the implications of a fuller vision, and creating an effective illusion of paradigmatic completeness.

Harrison has it wrong: it's not the biggest library ever built, it's a painting of a library that seems to go on and on, with some prop books on a table in the foreground. Make sure those prop books are interesting enough, and the reader will never try to explore the rest of the library.

April 12, 2007

One Revolution Per Child

I wish that Nicholas Negroponte had never referred to it as the "one hundred dollar computer."

Yes, yes, it's an attention-grabbing name, but noting with a smirk that the first ones will actually cost $150 has become a game for reporters. I'm particularly aghast when technology journalists do it, because they of all people should know that information technology prices always fall -- the OLPC laptop won't remain $150 for long.

All of this comes to mind because of a new article from IEEE Spectrum magazine, "The Laptop Crusade." For the first time, I've become really excited about the potential this project holds, and not solely because of its leapfrogging possibilities.

(Some people I really respect, like Lee Felsenstein and my friend Ethan Zuckerman, show up in the article with some astute comments; I was interviewed for the piece, as well, with the usual result that a couple of my throwaway comments got used, and the main point I tried to make is nowhere to be found. So it goes.)

I'm excited about the OLPC machine's potential because it's so clearly a revolutionary device, both in the sense of it having capabilities that nobody has ever before seen in a laptop, and in the sense of it being a catalyst for out-of-control social transformation. The OLPC project will drop millions of powerful, deeply networked information technology devices into the hands of precisely the population (children and teens) most likely to want to figure out the unanticipated uses.

From the startlingly long-range wifi mesh networking to the "Sugar" social interface, these devices were built to treat hierarchies as damage, and route around them.

Bletsas says his design will provide node-to-node connectivity over 600 meters. Over a flat area without buildings and with low radio noise, that connection can stretch to 1.2 km. Students can put their computers on the mesh network simply by flipping the antennas up. This turns on the Wi-Fi subsystem of the machine without waking the CPU, allowing the laptop to route packets while consuming just 350 milliwatts of power. [...]

The mesh network feature lets students in the same classroom share a virtual whiteboard with a teacher, chat (okay, gossip) during class, or collaborate on assignments. [...]

The OLPC team also constructed a completely new user environment, code-named Sugar, designed to break down the isolation that students might experience from staring at laptops all day. It introduces the concept of "presence"—the idea behind instant-messaging buddy lists. The user interface is aware of other students in the classroom, showing their pictures or icons on the screen, allowing students to chat or share work with others in the class.

The system shares with the other students new tasks, like a drawing or a document, by default, though students can choose to make them private. Sugar creates a "blog" for each child—a record of the activities they engaged in during the day—which lets them add public or private diary entries.

    This is a participatory culture dream device. Using entirely open source software, the laptops are enormously friendly to "hacking" (in the exploration sense, not the criminal sense), yet can be returned to a safe configuration at the push of a button. Moreover, they're extraordinarily, wonderfully energy-efficient: in normal use, an OLPC laptop draws 3 watts, compared to 30 watts for a typical lower-end conventional laptop; and a full charge lasts for over six hours at maximum power use, 25 hours in power-conservation mode.

    Felsenstein notes that teachers will (rightly) see these laptops as a direct assault on their authority, and that many of the machines will be banned from classrooms, leaving the kids to use them unsupervised.

    I sure hope so.

    A generation growing up believing in their capability to hack the system, work collaboratively, and make information a tool is probably one of the best things that could happen to a developing nation. Possibly not in the short run -- backlash from fearful authorities will be nasty -- but certainly in the longer term, as the first wave of OLPC children reaches adulthood.

    The revolution begins in 2008.

    April 7, 2007

    Heading Home Soon

    Looking forward to seeing the kitties and sleeping in my own bed.

    The meeting went very well; I'll have posts on it at some point soon. This week looks to be amazingly busy, though. As does the next. In fact, let's just call April done.

    Outside of work, I saw some friends I hadn't seen in quite a while, picked up Ken MacLeod's newest book (The Execution Channel, which won't be out in the US until June), and unintentionally gained a better understanding of my physiology (and that's all I'll say about that).

    Most importantly, I had Janice along with me for our 15th wedding anniversary.

    April 5, 2007

    Twit

    So on the suggestion of Stowe Boyd, I've gone ahead and signed up for Twitter. I've added a small module on the right bar here showing my most recent Twitter post. I haven't been hooked up to anyone's network yet, though.

    April 1, 2007

    Interview for Changesurfer Radio

    Trinity College Professor of Health Policy James Hughes, founder of the Institute for Ethics and Emerging Technologies (of which I am a Fellow), runs a weekly Internet radio show called "Changesurfer Radio," covering a variety of topics related to building a more democratic (small-d) future. On March 31, Dr. Hughes interviewed me for his show; the result is now up on the Changesurfer website.

    "Technogaian Approaches to the Climate Crisis"

    (Direct link to the MP3)

    The connection apparently had a bit of noise at the outset, and it looks like James just zapped that part of the conversation -- which is why I don't say "hi" at the beginning, and his intro seems to go on awfully long. It was a fun conversation, though, and I look forward to the next time.

    Augmented Fluid Intelligence

    Can we survive the multitasking era?

    Okay, multitasking is hardly up there with global warming, pandemic disease, and asteroid strikes as a threat to civilization, but it's becoming increasingly clear that multitasking reduces overall effectiveness and accuracy. Yet we're forced to juggle more and more simultaneous activities in our work, in our social networks, even in our play. As a result, simple tasks take longer, and we're far more likely to make errors. In short, as our world gets more complex and we face greater challenges, we're becoming less able to respond successfully.

    Theorist Linda Stone calls this overtaxed ability to focus "Continuous Partial Attention" -- a name that's much cooler than multitasking, you have to admit -- and she describes it as an "artificial sense of constant crisis." But in many ways, the world we're moving into is even worse than this, because we're becoming so accustomed to the constant interruption that we're starting to find it hard to focus even when we've achieved a bit of quiet. It's an induced form of ADD -- a "Continuous Partial Attention Deficit Disorder," if you will, ADD via CPA.

    Our ability to handle simultaneous complexity is governed by what cognitive scientists call "fluid intelligence," commonly defined as the ability to find meaning in confusion and to solve new problems. Fluid intelligence can be exercised, and in fact appears to be increasing. If Steven Johnson's argument in Everything Bad is Good For You is right, we're seeing this gradual increase in intelligence precisely because our cultural and social expressions are increasingly taking forms that are stimulating to our fluid intelligence.

    But this process will inevitably have limits. Eventually, we'll hit a ceiling in the ways in which we can improve our fluid intelligence naturally. At that point, we'll face a hard choice: make major changes to our work and social cultures, so as to reduce the degree of simultaneous attention-grabbing activity; or develop augmentation systems that enhance our natural fluid intelligence by recognizing, from moment to moment, what needs our actual focus, and what can be handled by proxies. The wise choice would be the first one. It should come as no surprise, then, that I suspect that we'll do the second.

    As it happens, we're already working on devices that will do just this. The problem is, these systems aren't quite done -- and at present, actually tend to make matters worse.

    If you haven't heard of Twitter, count yourself lucky. It's an application that lives somewhere in the interzone between blogging and text-messaging, and went from being nearly invisible to nearly ubiquitous in less than a week in early March. [This explosion was likely due to the combination of overlapping tech-fests (TED, SXSW, GDC) with concentrated early-adopter attendance and Twitter's complete dependence upon network effects for utility (i.e., the more people have it, the more useful it becomes). Expect other network-dependent apps to try to artificially reproduce this perfect storm next March.]

    Twitter allows you to send quick and easy messages about your various activities to people who have chosen to receive them; the current joke is that where regular blogging let you give daily reports on your cat, Twitter lets you give minute-by-minute updates. During busy periods, it's quite easy to be overwhelmed by the volume of incoming messages, the vast majority of which will be of only passing, mild interest at best.

    But that leaves the tiny minority of truly useful and interesting posts, ones which have particular value due to their timely arrival. At present, finding those requires wading through the mass of "my kitty sneezed!" or "I hate this taco" messages; of course, this is exactly the kind of low-complexity activity that we'd habitually perform via Continuous Partial Attention.

    Imagine, however, if Twitter had a bot that could learn what kinds of messages you pay attention to, and which ones you discard. Perhaps some kind of Bayesian system, more complex than current spam filters, but not outrageously so. Over time, the messages that you don't really care about would start to fade out in the display, while the ones that you do want to see would get brighter -- an adaptation of the "ambient technology" concept. These bright headlines would stand out against the field of gray, drawing your attention only when you would desire it. If this worked reasonably well, you'd have reduced the overall demands on your fluid intelligence by outsourcing some of the rote filtering to a device.
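A minimal sketch of what such a learning filter might look like, assuming a naive-Bayes approach over message words. Everything here — the AttentionFilter class, the attended/ignored training labels, and the score-to-brightness mapping — is a hypothetical illustration of the idea, not a real Twitter feature or API:

```python
import math
from collections import Counter

class AttentionFilter:
    """Toy Bayesian attention filter: learns which messages a user reads,
    then maps each new message's score to a display brightness, echoing
    the 'ambient technology' idea of dimming the ignorable."""

    def __init__(self):
        self.attended = Counter()  # word counts from messages the user read
        self.ignored = Counter()   # word counts from messages the user skipped
        self.n_attended = 0
        self.n_ignored = 0

    def train(self, message, attended):
        words = message.lower().split()
        if attended:
            self.attended.update(words)
            self.n_attended += 1
        else:
            self.ignored.update(words)
            self.n_ignored += 1

    def score(self, message):
        """Probability (0..1) that the message deserves attention,
        via log-space naive Bayes with add-one smoothing."""
        total = self.n_attended + self.n_ignored + 2
        log_a = math.log((self.n_attended + 1) / total)
        log_i = math.log((self.n_ignored + 1) / total)
        denom_a = sum(self.attended.values()) + 1
        denom_i = sum(self.ignored.values()) + 1
        for w in message.lower().split():
            log_a += math.log((self.attended[w] + 1) / denom_a)
            log_i += math.log((self.ignored[w] + 1) / denom_i)
        return 1 / (1 + math.exp(log_i - log_a))

    def brightness(self, message):
        """Map the score to an 8-bit gray level: dull for the routine,
        bright for the attention-worthy."""
        return round(self.score(message) * 255)
```

In use, a few labeled examples are enough to start separating the "my kitty sneezed!" stream from the timely posts:

```python
f = AttentionFilter()
f.train("my kitty sneezed again", attended=False)
f.train("i hate this taco", attended=False)
f.train("server down need help now", attended=True)
f.train("urgent meeting moved to 3pm", attended=True)
f.brightness("help the server is down")  # renders bright
f.brightness("my kitty ate a taco")      # fades toward gray
```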

    These kinds of bots -- attention filters, perhaps, or focus assistants -- are likely to become important parts of how we handle our daily lives. We don't want to have the information streams we've embraced taken away from us, and every decision to scale back how frequently we check email or stock tickers or combat results or the like raises the spectre of our competitors choosing to tough it out. The more important these information streams become to our professional and personal lives, the harder it will be to pull away. So rather than disconnect, we'll get help.

    We'll be moving from a world of Continuous Partial Attention to one of Continuous Augmented Attention.

    Jamais Cascio

    Contact Jamais  •  Bio

    Co-Founder, WorldChanging.com

    Director of Impacts Analysis, Center for Responsible Nanotechnology

    Fellow, Institute for Ethics and Emerging Technologies

    Affiliate, Institute for the Future

    Creative Commons License
    This weblog is licensed under a Creative Commons License.
    Powered By MovableType 4.37