The Toolshop Story
How It Started
I first thought of the toolshop effect many years ago while attempting to explain the concept of software to people who knew very little about computers. Here is the core of that explanation, later re-developed in the fall of 2006 for use in my master's thesis:
In a machine shop in which substantial equipment is manufactured, dies and patterns are used extensively to limit fabrication costs and improve quality (by incorporating the skill of others) and consistency; and the choice of these is crucial to the nature of the resultant product. There is also the matter of adapting acquired dies and tools for particular uses, then transmitting those adaptations and carrying knowledge to workers and other shops as new tools to use in their processes, both enhancing and constraining their capabilities.
The toolshop, a more generalized machine shop, thus becomes a metaphor for a place where process and product, product and process, mingle and feed back on one another to mediate the creation and transmission of knowledge, technique and patterns through products used to manufacture other products. And so it goes for software engineering, except that the toolshop effect is exaggerated by software being actual language and computers being such powerful chameleons whose purpose is to become whatever software dictates. Computer languages are, significantly, self-describing and all of: artifact of engineering effort, engine of process, and ultimate knowledge container. As much as writing transformed human cultures according to classical views of language, written traditions in software engineering practice, with the embodiment of knowledge and skill in software tools that write and rewrite productive artifacts autonomously, have had a transformative effect unknown at any other time and in any other technical milieu. No other technical artifact operates and mediates so deeply as software does in the minds of software developers, who are at once toolmakers, users, and product makers.
What mattered at the time I originally wrote this was the cause-and-effect relationships that only someone with a software developer's eyes could see. What I've learned about media, mediation and cognitive process since then has embellished not only my perception and explanation of the toolshop effect as a reflexive phenomenon but, more importantly, has led me to suggest what the ultimate value of such thinking might be.
The subject transformed for me when I challenged myself to turn the concept into a master's thesis. I attempted to pin the blame for Microsoft's rapid ascendancy to domination of the software industry on the toolshop effect as a sociological phenomenon. It didn't work. I was (and am) quite certain of the connection and had hoped to find, in the cases of the U.S. Department of Justice versus Microsoft and Sun Microsystems versus Microsoft, evidence of even indirect attempts to inhibit the toolshop effect, but found none in the literature anywhere.
I was forced to abandon the work in favour of a more pedestrian but estimable task of defining the qualities of text-based dialog mediation systems. Writing it, making it work, forced me to better understand the research process, an experience that now pays significant dividends. It was especially interesting to have all my assignments graded so assiduously by my instructors throughout the course work for my degree, only to have the thesis paper pass right through without significant feedback. (I did get some nice compliments from my second reader, the school's director.) I guess it was our first opportunity to fly solo, which, of course, is all up to the learner.
Software is the Ultimate Tool
Software engineering culture plays on two deep-seated psychological phenomena to produce effects unknown in other technical milieux. The first is the intimacy of language and mind.
There is much theory (and common sense) suggesting that language and thinking are inextricable. Andy Clark writes very eloquently on the matter, explaining the role of language in the construction of mind. The following extensive quotation from Supersizing the Mind (2008) is completely irresistible because it ends by nibbling at the very edge of my own theories:
Coming to grips with our own special cognitive nature demands that we take very seriously the material reality of language: its existence as an additional, actively created, and effortfully maintained structure in our internal and external environment. From sounds in the air to inscriptions on the printed page, the material structures of language both reflect, and then systematically transform, our thinking and reasoning about the world. As a result, our cognitive relation to our own words and language (both as individuals and as a species) defies any simple logic of inner versus outer. Linguistic forms and structures are first encountered as simple objects (additional structure) in our world. But they then form a potent overlay that effectively, and iteratively, reconfigures the space for biological reason and self-control.
The cumulative complexity here is genuinely quite staggering. We do not just self-engineer better worlds to think in. We self-engineer ourselves to think and perform better in the worlds we find ourselves in. We self-engineer worlds in which to build better worlds to think in. We build better tools to think with and use these very tools to discover still better tools to think with. We tune the way we use these tools by building educational practices to train ourselves to use our best cognitive tools better. We even tune the way we tune the way we use our best cognitive tools by devising environments that help build better environments for educating ourselves in the use of our own cognitive tools (e.g., environments geared toward teacher education and training). Our mature mental routines are not merely self-engineered. They are massively, overwhelmingly, almost unimaginably self-engineered. The linguistic scaffoldings that surround us, and that we ourselves create, are both cognition enhancing in their own right and help provide the tools we use to discover and build the myriad other props and scaffoldings whose cumulative effect is to press minds like ours from the biological flux. (pp. 59-60)
Beyond the primacy of language in models of thinking in cognitive science, I suspect there is an even more powerful role for written language as the persistent fuel of progress. Its potential is shown by how the effects of writing demonstrably influenced early cultures, and by how transformations of thinking by written language and recorded media interact with culture in ways that temporal auditory and visual media do not. I found validation in various works on culture, communication, writing and media, but none so fundamental and succinctly expressed as in the (translated) words of Jean Bottéro, the renowned Assyriologist, in Mesopotamia: Writing, Reasoning and the Gods (1987), on the impact of the advent of writing.
The writing system [of Mesopotamia] is impressive in itself. It is also the earliest one attested in world history, and was perhaps the most shining and generous contribution of the ancient Mesopotamians to the development and the progress of our understanding, when we consider, right now, to what degree the transition into the written tradition has profoundly transformed our intelligence, by reinforcing and multiplying its capacities. (my emphasis) (p. 4)
What this says to me is that a persistent medium, especially one that conveys language explicitly, has marked effects that temporal media do not. I extend this into the fundamentally written nature of software. After all, the whole process of software and machines that run on software is about stored program devices - recorded, persistent media.
So, not only has intelligence in software engineering been accelerated by software itself, but engineering business culture has colonized other businesses with analogous tools and approaches. A significant goal of software engineering is to transfer certain programming responsibilities to non-engineering computer users. Through the global nervous system that is the Internet, by which communication and computation converge ubiquitously, media underpinned by written software, and their effects previously confined to software engineering culture, now commonly transform all participants' intelligence. The children of soft tool culture are arguably smarter than previous generations, profound evidence of the incredible cultural leap facilitated by computers and communication technology.
All human endeavours are marked by the language they carry and depend on, but software engineering is distinct in important ways. Architecture, for example, manifests as the language of abstract, normative plans and models. That language is not itself productive: it only stipulates the design and construction guidance needed for the arduous processes that result in buildings. You cannot build a building with a building, or use parts of buildings in other buildings. You can only re-use proven plans to guide new construction projects. Conversely, software, as language, is not abstract, and is both productive and normative. It literally does what it describes. Software engineers use it to build other software, and combine and re-combine parts instantaneously to create new outcomes. Writing software is thus the act of changing the physical world simply by using language, the intimate companion of consciousness.
The second psychological phenomenon of software engineering is the reciprocity of tools and mind - tools being extensions of mind that expose its reflexivity.
Reflexivity comes from making tools to make better tools and envisioning new possibilities, better prospects, once useful tools are in hand. The words of Horace Fries in his article Mediation in Cultural Perspective (1945), make this clear.
Some half-million to a million years ago our early sub-human ancestors found themselves walking erect and using their former forepaws as manipulative organs. With the transformation of the first finger into an opposable thumb the organic foundation was laid for the continual use and improvement of tools. Tools were used, and then they were used to make tools. Slowly but surely an accelerative process got under way; a process almost mysteriously self-propelling, as it were, in the cultural and material environment of men.
When simple tools are used, the intended consequences become readily identifiable. Eventually they become organized in more complex groups as aims. The experience of a tool can then stand as the experience of something not present, something hoped for in the future, something deliberately to move towards — though absent — or a thing to be accomplished. In short, a tool is the simplest kind of manipulative sign or symbol. When tools are used co-operatively by more than one creature, there is that marvelous experience of a common aim. (p. 449)
Daniel Chandler in The Act of Writing (1995) confirms that the mediation of tools is more than process facilitation.
In constructive models of the making of meaning, the active role of all participants is now well-established. Far more than simply Homo loquens, Homo scriptor or even Homo faber (makers, or toolmakers) we are, above all Homo significans: meaning-makers. We now need to devote more attention to exploring our modes of making meaning with the media involved, and to the subtle transformations involved in all processes of mediation. We must also acknowledge that media do not simply ‘mediate’ experience; they are the tools and materials with which we construct the worlds we inhabit. The recognition and study of processes of mediation underlines the constructedness of reality. Engagement with media may even be fundamental to the construction of consciousness. (pp. 225-226)
This, then, is the crux of my concern; but further: cognition is not confined to the braincase, and the plastic brain can adopt and adapt to any form of real-world mediation. How we interact with the physical environment is a matter of experience and, importantly, of conscious choices. That environment is strongly characterized by how we explain it to ourselves through language, a determinant of who we are and who we wish to be, and by the media we employ to create environments that suit our purposes, and through which emergent purposes manifest. To understand the industrial revolution is to realize that we did not just create machines that make things; we created an industrial society in which systems of thinking and machine-making devised machinery that made things that changed us. To understand the 21st century is to realize that software, the expressive language/tool medium that not only constructs our reality but physically transforms our universe, has now become the fount of all change.
Linguistic Reflexivity is Key to Future Electronic Media
(added March 4, 2010.)
We need to get back to the point of this section, that software is the ultimate tool, beyond the mostly theoretical evidence of usefulness implied by the importance of language - especially written language - and by how the reciprocity of tools and minds works. I would suggest that the incredible speed with which computers have come to dominate our lives is evidence of software's difference, but demonstrating that takes evidence I don't have right now.
To understand this point, you might start by re-reading the extensive quote from Andy Clark (2008) near the top of this section if you don't remember it clearly; then we have to get into linguistic reflexivity, which bears on the reflexivity of programming languages - nasty expressions, these, but necessary. Please read the linked concept pages and we're ready to proceed.
I believe I know how to prove that there is a lot more coin in using linguistic constructs to carry information than in arbitrary data structures. I include self-describing data structures in the category of linguistic constructs because they contain instructions instead of relying totally on shared (and often private) knowledge of schemata. In this way the language can be re-used, or leveraged, to carry additional information along with its processing requirements, something static or even extensible data structures cannot do.
To see how this works, consider how we communicate with one another. Ignoring for the moment that our exceptionally adept brains can process all kinds of arcane information glibly, we are frequently challenged to read the dominant message of a communication without sufficient context. And in those situations we often get it wrong; that is, we misunderstand. For example, when we see a person's name and address, it's pretty clear what we're looking at because we're used to seeing such things. But consider a random pile of information (words and numbers) that seems to make no sense at all. Since you understand English, all the provider of this information would need to do is explain what the numbers and words were good for, and you'd be able to do something with it if the information interests you. The explanations could be provided through a separate information exchange (such as shared instructions), or they could arrive with the information. Shared culture is a common and conventional mode of exchanging schemata: we know what another person is saying because we share the context within which the information is encoded. Then again, the information could be embedded in the explanations. The sender and receiver need only agree beforehand that information will arrive in such a manner, embedded in instructions and descriptions of purpose, and reasonably accurate communication is enabled.
Well, computers are not tragically different in this respect. Exchanging information descriptions beforehand puts a load on the information exchange system, which must always guarantee that instructions match information and that explanations arrive in advance, outside the information channel. When information arrives amid instructions, the two correspondents need only agree on how to interpret the instructions (which language) and any information can be traded. The instructions are readily encoded in a language that is commonly understood. Because languages can be self-describing, that is, because it's possible to create instructions never before seen by the receiver, the information exchange process is magically transformed from strictly limited to extensible.
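A minimal sketch may make the contrast concrete. The field names, types and the `receive` helper below are my own illustrative inventions, not any standard; the point is only that the message carries its own interpretation instructions, so the receiver needs no prior schema:

```python
import json

# A "bare" record: meaningless without out-of-band knowledge of its schema.
bare = ["J. Smith", "42", "1978"]

# A self-describing message: each value travels with instructions
# explaining what it is and how to process it.
message = json.dumps({
    "fields": [
        {"name": "customer", "type": "string", "value": "J. Smith"},
        {"name": "orders",   "type": "int",    "value": "42"},
        {"name": "since",    "type": "year",   "value": "1978"},
    ]
})

def receive(raw):
    """Interpret a message using only the instructions it carries."""
    casts = {"string": str, "int": int, "year": int}
    decoded = {}
    for field in json.loads(raw)["fields"]:
        # A type the receiver has never seen defaults to plain text,
        # so the exchange degrades gracefully instead of failing.
        cast = casts.get(field["type"], str)
        decoded[field["name"]] = cast(field["value"])
    return decoded

print(receive(message))  # {'customer': 'J. Smith', 'orders': 42, 'since': 1978}
```

Because the sender can add a field, and even a new type, without renegotiating a schema, the exchange is extensible in exactly the sense described above.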
When systems are constructed to interface flexibly with one another, allowing their re-use in unpredictable ways, developers are employing a concept known as loose coupling - mentioned below in the context of social science - and are inducing the potential benefits of recombinant functionality. I intend to provide clear examples of this as my work progresses. Fortunately (perhaps) I'm not inventing anything, but only pointing out how and why communication strategies now building in the public domain will likely result in - gulp - the advent of Web 3.0.
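As a rough illustration of loose coupling (the class and method names here are hypothetical, invented for this sketch), the consumer below depends only on a minimal agreed protocol, so producers and consumers can be recombined freely:

```python
from typing import Iterable, Protocol, Tuple

# The entire contract between producer and consumer:
# "give me name/count pairs". Neither side knows the other's internals.
class CountSource(Protocol):
    def counts(self) -> Iterable[Tuple[str, int]]: ...

class MemorySource:
    """One interchangeable producer: counts held in memory."""
    def counts(self):
        return [("widgets", 7), ("gears", 3)]

class CsvSource:
    """Another producer: counts parsed from CSV text."""
    def __init__(self, text: str):
        self.text = text
    def counts(self):
        for line in self.text.strip().splitlines():
            name, count = line.split(",")
            yield name, int(count)

def report(source: CountSource) -> str:
    """A consumer coupled only to the minimal protocol above."""
    return ", ".join(f"{name}: {count}" for name, count in source.counts())

print(report(MemorySource()))                    # widgets: 7, gears: 3
print(report(CsvSource("widgets,7\ngears,3")))   # widgets: 7, gears: 3
```

Swapping producers requires no change to the consumer; this recombinant quality is what loose coupling buys.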
How Computers Work and How Software Is Produced Really Matter
As environmentalists need to understand carbon and the culture of its use without becoming physicists and anthropologists, so must social scientists understand software and its mediating effects without becoming programmers and information architects. This is an urgent matter. How and why is software social? Is there a predictable future in this soft tool culture we now occupy? I want to create a social science mindset that facilitates research into these questions and makes the answers tractable.
I suggest that software is the industrial artifact that serves as an exemplar for profoundly enhancing our cognitive topology. However, a conceptual knowledge gap prevents common understanding of the software engineering process. There are, in my estimation, two core concepts without which it is not possible to fully appreciate how computers function and how software engineers wield language: programmability and the reflexivity of programming languages. As it happens, I have explanations of these concepts geared to non-software developers, and a test for judging whether the concepts are sufficiently understood. Developers easily acquire them through training or assimilation, then assume them in one another and apply them in their daily work. A significant incentive for me in doing this work is thus to transfer these core concepts to non-developers and so facilitate deeper general knowledge of the power inherent in software engineering to change the world so efficiently.
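The reflexivity of programming languages can be shown in a toy form (this sketch is my own, using Python's built-in `exec`; the names `make_scaler`, `double` and `triple` are invented for illustration): a program that writes a new program as text and then runs it, software producing software in the very language it is written in.

```python
# Source-code template for a function we have not yet written.
template = "def {name}(x):\n    return x * {factor}\n"

def make_scaler(name, factor):
    """Generate source code for a new function, then bring it to life."""
    source = template.format(name=name, factor=factor)
    namespace = {}
    exec(source, namespace)   # the language interpreting its own text
    return namespace[name]

double = make_scaler("double", 2)
triple = make_scaler("triple", 3)
print(double(10), triple(10))  # 20 30
```

The essential point is not the arithmetic but that the tool (the language) was used to manufacture another tool at run time, the miniature toolshop effect in a dozen lines.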
That software engineering culture supports a mode of thinking unknown elsewhere is less surprising than one might think. I liken it to speciation (as when a small population of birds is blown across a mountain range by an incidental wind and then evolves independently of the original population), but caused by intellectual instead of physical insulation. The inherent nature of software, which emerged as an outcome of electronic control systems, has nurtured this evolutionary sequence. If I were to present these notions about software engineering to software engineers, such as the reciprocity of tools and the significance of language in those tools, the common reaction would likely be, "OK, but so what?" Citizens of the land of software engineering just think differently, as if they were bred to do so. They are not entirely unaware of this, and calmly occupy this closed culture marked by these unique modes of thinking and behaviour - only extraordinary when one considers the huge importance of computers, computer-mediated culture and communication, and the speed of their ascendancy to crucial importance in post-industrial society.
The Connection to Social Science
It is one matter for the cognitive topology of an individual to include extra-corpus (or extrasomatic per Gardiner, 2008, p. 164) media - objects, phenomena and constructs. More significantly, media that are shared - created to be shared proficiently - can create a collective cognition that, in pre-literate society, was possible only for groups of people in close proximity. This is where cognitive science and social science touch, at an isthmus joining two vast landscapes of understanding. On the social science side, the true consequences of software engineers striving to change the world manifest mysteriously. Perhaps I can make electronic media susceptible to scrutiny by social science at a completely different level than conventional modes of HCI (human-computer interaction) and communication inquiry currently allow, by leading guided tours into both cognitive science and computer science. There's no question that those two disciplines should be introduced to the strange concepts of social science land.
There is another matter to consider: as much as an apparent infinity of cognitive extensions are possible through electronic media, there is the prospect that these human inventions might prove limiting rather than liberating. Michael Heim discusses the problem in terms of "... enframing, where acts of freedom are modified by the techniques of communication and become incorporated into the network" (1999, p. 91), the problem being that the human becomes an extension of the mechanism, instead of the other way round. Given the zeal with which the great vendors of automation openly seek to build dependence on their products, wariness of enframing is worthwhile. Examining and understanding the dangers of enframing plays in the same park as numerous modes of critical analysis that social science is wont to engage. Surely our greatest fears are realized when no person can communicate at all without elaborate post-industrial contrivances. Yet how can we use new media most efficiently and avoid even the slightest dependence in the relationship?
Gardiner raises this point about confining versus liberating media (2008) in his discussion of Mediation and IA (pp. 161-185). Apparently Andy Clark (in his Natural-Born Cyborgs) suggests that, first, the brains of our children differ from hunter-gatherer brains because they grow amid different media experiences, and, second, the best systems don't deeply enframe (my spin) but create an environment of possibilities, such as the uncontrolled Internet. "And thus it is free to organize itself. It thus joins the self-organizing systems designed by Mother Nature rather than by our limited selves" (Gardiner, 2008, p. 181). Is the prospect of self-organizing systems sufficient reassurance? (This is me asking.) I see phenomena such as Facebook and online pornography, even global capitalism, and wonder whether unleashing the human mind amid uncontrolled media is, by definition, likely to be good. No matter this fear, these factors need to be understood by sociologists. This uncontrolledness is a feature of systems that consist of small pieces loosely joined - crazily enough, the very title of Weinberger's book about a unified theory of the Internet (2002). I tackle this subject/concept as loose coupling, mostly as it applies to software systems but with extensions into other contexts, with a little help from Mr. Weinberger. These aspects of technology - their origins, their socially-driven nature and systematic indeterminism - should, in my opinion, be significantly understood and scrutinized by sociologists who, at this point, do not seem to work at this level of comprehension.