If you wanted to build something that would last for centuries, how would you do it?
Legendary software developer Dan Bricklin recently wrote an essay entitled "Software That Lasts 200 Years," exploring what it would take to craft code that could reliably function for decades, even centuries. It's a hard problem, one that would require a wholly different mindset from the traditional programming approach. But as I read the piece, it struck me that the general rules Bricklin presents for "Societal Infrastructure Software" could be applied to other systems and artifacts as well.
Bricklin's list of eleven "needs" for Societal Infrastructure Software makes a good starting checklist for long-term design. It's not a "how to" so much as a "what to think about" list. As you read it over, consider the degree to which familiar social infrastructure systems -- voting systems, energy production, health care, governance -- fulfill these demands:
- Meet the functional requirements of the task.
- Robustness and long-term stability and security.
- Transparency to determine when changes are needed and that undesired functions are not being performed.
- Verifiable trustworthiness of all three of the above.
- Ease and low cost of training for effective use.
- Ease and low cost of maintenance.
- Minimization of maintenance.
- Ease and low cost of modification.
- Ease of replacement.
- Compatibility and ease of integration with other applications.
- Long-term availability of individuals able to train, maintain, modify, determine need for changes, etc.
Most of these are self-explanatory. Does it work the way it's supposed to? How well can it be understood? How easy is it to change? For me, the fourth entry, "verifiable trustworthiness," is perhaps the most important. We shouldn't have to rely on the designer's promise that the system functions as required; we should be able to see for ourselves, and that kind of verification depends on transparency, the third entry on the list. Yet the value of transparency goes beyond confirming function. Transparency also facilitates the other system design "needs," as it improves our ability to understand the system (for use, for maintenance and modification, for replacement).
Bricklin, however, is missing one critical rule for building any long-lived infrastructure system, software or otherwise:
It's not enough to make sure that the newly designed system is compatible and integrates well with existing systems ("applications"). The combination of the new system and the old should produce as few unanticipated negative results as possible, and designers should be ready and able to modify the system when such results do occur. We know that it is impossible to eliminate emergent results completely, and indeed we may well find positive results from system interaction. But we should not be forced to live with unanticipated, undesired outcomes simply because changing the functioning of a system is too hard. In a way, this rule brings a form of the precautionary principle into system design.
The remainder of Bricklin's piece is provocative, and it offers a very useful prism for thinking about design. It will come as little surprise to WorldChanging readers that he embraces the methods and values of free/open source software development in his essay; many of you will already have recognized that the collaborative-development approach is well-suited to responding to many of the listed needs.
As we begin to better understand the longer-term implications of our choices, and as we increasingly live long enough, and remain healthy enough, to witness those implications first-hand, designing for the long term takes on ever more importance. "Planned obsolescence" and "disposable design" don't make sense in a long-term world. This doesn't mean designing systems and artifacts that must be used for decades or centuries, per se -- it means designing them so that they can be used for long periods, yet can also be modified and replaced as needed without undue trauma. It means understanding that a system's evolution, from its creation to its place in its "ecology" to its eventual demise, is as important as the system's actual use.