Distributed cognition means that knowledge lies not only within the individual, but also in the individual's social and physical environment.
The Role of Dynamic Coupling
Two entities are closely coupled if they reciprocally interact: changes in one cause changes in the other, and the process goes back and forth in such a way that we cannot explain the state trajectory of one without looking at the state trajectory of the other. When a person writes on paper, the two form a reciprocal system. The person causes changes in the paper; changes in the paper partially cause changes in the person. This reciprocal interaction allows the person to find expressions, to represent and explore ideas using the persistent state of the paper, in ways that would otherwise be impossible. There is a dynamic between the two.
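To make the notion of reciprocal coupling concrete, the sketch below simulates two state variables that partially drive each other's updates. It is only a toy illustration, not a model of writing; the update rules, coefficients, and noise term are invented for the example.

```python
import random

# Toy sketch of a closely coupled system: two state variables, "person"
# and "paper", each updated partly as a function of the other. The update
# rules, coefficients, and noise are invented purely for illustration.

def simulate(coupling: float, steps: int = 50, seed: int = 0):
    rng = random.Random(seed)
    person, paper = 1.0, 0.0
    trajectory = []
    for _ in range(steps):
        # The person's next state depends on what is now on the paper...
        person = 0.7 * person + coupling * paper + rng.gauss(0, 0.05)
        # ...and the paper's next state depends on what the person just wrote.
        paper = 0.9 * paper + coupling * person
        trajectory.append((person, paper))
    return trajectory

# With coupling = 0, each trajectory can be explained on its own;
# with coupling > 0, neither trajectory can be explained without the other.
print(simulate(coupling=0.0)[-1])
print(simulate(coupling=0.3)[-1])
```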
The Role of Pragmatic and Epistemic Actions
Intentional movements taken to bring a subject physically closer to its external goals are called pragmatic actions; those intended to simplify computation, reduce error, or increase precision are called epistemic actions.
Epistemic actions are everywhere. Some are connected to information: we move our hands close to something before grasping it, and in woodworking the adage is to measure twice and cut once. Some epistemic actions compensate for sensory limitations: a trick most of us learned as kids was to squint or to make a little hole with our fingers to look through in order to see distant items more clearly. Others are interactive strategies for externalizing: saying something quietly out loud to make sure it is coming out right serves to confirm. Others serve as reminders: thoughtfully placing one's keys in front of the door, or in one's shoes, avoids relying on prospective memory alone. There are many other epistemic actions. All have personal payoffs and depend on interaction with the environment.
Ease of Adoption
A basic attribute of good design is that it reduces the variance of output. Uniformity is a cardinal virtue in quality control.
Furthermore, other things being equal, one system is better than another if new users can learn it more quickly and reach the same level of performance.
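Both criteria can be made operational with very simple statistics: the variance of output quality across repeated uses by skilled users, and the number of trials new users need to reach a fixed performance criterion. The sketch below computes both from hypothetical trial data; the scores and the criterion are made up purely for illustration.

```python
from statistics import variance

# Hypothetical data; all numbers are invented for illustration only.

# (1) Output quality of skilled users over repeated runs: a good design
#     reduces the spread of these numbers.
steady_output_a = [0.80, 0.82, 0.79, 0.81, 0.80]
steady_output_b = [0.70, 0.88, 0.61, 0.90, 0.76]

# (2) Learning curves of new users: a good design lets them reach the
#     same performance level in fewer trials.
learning_a = [0.52, 0.61, 0.70, 0.76, 0.80]
learning_b = [0.40, 0.48, 0.55, 0.63, 0.70, 0.74, 0.77]
CRITERION = 0.75

def trials_to_criterion(scores, criterion):
    """First trial on which the criterion performance level is reached."""
    for trial, score in enumerate(scores, start=1):
        if score >= criterion:
            return trial
    return None  # criterion not reached in the observed trials

print("output variance A:", variance(steady_output_a),
      "B:", variance(steady_output_b))
print("trials to criterion A:", trials_to_criterion(learning_a, CRITERION),
      "B:", trials_to_criterion(learning_b, CRITERION))
```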
The strength or weakness of a technology will not be apparent unless we also include a careful account of its "human technology", the protocol required for interacting with the technology.
That being said, performance is assumed to vary with skill (or learnability), but nothing is said about the nature of this skill. It is necessary to observe the way people do things in ever more detail, and to tie these actions to cognition, to get at the mechanisms of distributed cognition.
The same type of analysis can be used to begin to measure and explain the effectiveness and efficiency of an artifact's design.
The essential question is: What drives the way people interact with artifacts?
A basic rule is that any design that makes the structure and the set of choices more visible and easier to appreciate is a better design.
The study of distributed cognition is very substantially the study of the variety and subtlety of coordination. One key question the theory of distributed cognition endeavors to answer is how the elements and components in a distributed system – people, tools, forms, equipment, maps and less obvious resources – can be coordinated well enough to allow the system to accomplish its tasks. Even coordinating mechanisms as simple as clocks or paper clips can make the difference between a successful system and an unsuccessful one. Clearly we would like methods and measures for systematically exploring coordination.
People manipulate local conditions to stay in control and to perform faster and more effectively. They annotate to cue responses and to reduce the load on prospective memory; they line up items to make them easier to scan, to notice outliers, and so on.
These same sorts of principles are exploited by good designers when they make artifacts that make our life easier. But all these examples of coordination are local; they affect local choice.
The Success of the Whole
In distributed systems the success of the whole depends equally on all these acts of local choice adding up, working together to move the system closer toward system goals. Decisions about the roles people will play in a system, like decisions about the artifacts, physical layout, routines, and local goals, seem to be on a different level than local choice.
They have a lot to do with more global considerations about how everything fits together. Assembly lines have to be planned and laid out. Orchestral conductors need to make global choices about tempo and expressiveness. If these are not good, then everyone can play their part perfectly but the overall product will still be imperfect. Even good cooks using good ingredients produce bad food if the recipe is wanting.
To study coordination at this level requires modeling and simulation, scheduling theory, and other formal tools. If we do not have living versions of different systems of coordination, how can we predict the value of changing a process?
Only by modeling and simulating can we study the temporal effects of such things as changing the timing and destination of resources, the impact of changing the connectivity, reliability, or speed of communication, or the pattern of messaging. Only through simulation can we begin to see how one participant's local activity in his own activity space can have side effects on neighboring or intersecting activity spaces, and so produce a cascade of side effects.
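As a sketch of the kind of question such a simulation answers, the toy model below passes work items between two activity spaces and varies the delay and reliability of the messages that coordinate them. The stages, rates, and parameters are all invented for illustration.

```python
import random

# Toy simulation: work items flow through two coupled activity spaces,
# "prep" and "assembly". Messages from prep to assembly may be delayed
# or lost; assembly cannot start an item it has not been told about.
# All parameters are invented for illustration.

def simulate(message_delay: int, message_loss: float, ticks: int = 200, seed: int = 1):
    rng = random.Random(seed)
    inbox = {}          # tick -> number of items announced to assembly at that tick
    ready = 0           # items assembly knows about and can work on
    completed = 0
    for t in range(ticks):
        # Prep finishes one item every 4 ticks and announces it.
        if t % 4 == 0 and rng.random() > message_loss:
            arrive = t + message_delay
            inbox[arrive] = inbox.get(arrive, 0) + 1
        # Announcements arrive after the communication delay.
        ready += inbox.pop(t, 0)
        # Assembly completes one known item every 3 ticks.
        if t % 3 == 0 and ready > 0:
            ready -= 1
            completed += 1
    return completed

# Slower or less reliable communication between the two activity spaces
# shows up directly as lost throughput downstream.
for delay, loss in [(1, 0.0), (10, 0.0), (1, 0.3)]:
    print(f"delay={delay:2d} loss={loss:.1f} -> completed={simulate(delay, loss)}")
```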
It is spectacular the way people discover or learn to compensate for their own and their team's limitations. People with advanced Parkinson's disease, for example, lose the ability to control walking because of corruption in the neural processing of proprioceptive input. Yet, if they walk on floors that have large stripes or checkerboard tiling, they can compensate for the loss of proprioceptive input by using the rhythmic input the visual system provides. How could such successes be predicted by models?
History as an Agent
History is important because coordination in operating systems is almost always history dependent. To appreciate how hard adding history makes the problem of coordination, imagine that we set out to model and then simulate a distributed system in which individuals rely on a clock to coordinate timing. Under reasonable assumptions we may be able to show that without a clock timing would be unacceptably bad. Great result. But where did our reasonable assumptions come from?
Presumably from an idealization about the way the system in question operates right now. Yet, if we have learned anything from looking at the complexity of systems, it is that evolution can find multiple paths to the same goal. For a large class of systems, including our target system, a differently designed system which relies on, say, conveyor belts moving at a fixed speed over a fixed distance can be temporally coordinated as well as a system with a clock. It depends on what needs to be where and when. This diversity of solutions highlights the need to stay close to the facts.
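The point about multiple paths to the same coordination can itself be illustrated with a toy model: one design synchronizes a hand-off with a shared clock, the other lets a conveyor moving at a fixed speed over a fixed distance deliver the part at the right moment. Both setups and all the numbers below are invented for illustration.

```python
import random

# Toy comparison: two ways to get a part to station B inside its target
# time window. All numbers are invented for illustration.

rng = random.Random(42)
TARGET_WINDOW = (9.5, 10.5)   # minutes after the shift starts
BELT_SPEED = 2.0              # metres per minute
BELT_LENGTH = 8.0             # metres from station A to station B
TRAVEL_TIME = BELT_LENGTH / BELT_SPEED   # 4 minutes, fixed by the physical setup

def clock_design(runs: int = 1000) -> float:
    """A watches a clock and hands the part over at the agreed time (minute 10),
    give or take a little human timing error."""
    hits = 0
    for _ in range(runs):
        arrival = 10.0 + rng.gauss(0, 0.2)
        hits += TARGET_WINDOW[0] <= arrival <= TARGET_WINDOW[1]
    return hits / runs

def conveyor_design(runs: int = 1000) -> float:
    """A loads the belt as soon as the part is finished (around minute 6);
    the fixed belt travel time does the temporal coordination, no clock needed."""
    hits = 0
    for _ in range(runs):
        finish = 6.0 + rng.gauss(0, 0.2)
        arrival = finish + TRAVEL_TIME
        hits += TARGET_WINDOW[0] <= arrival <= TARGET_WINDOW[1]
    return hits / runs

print("clock design hit rate:   ", clock_design())
print("conveyor design hit rate:", conveyor_design())
```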
We can never understand the elements driving the coordination of a natural distributed system if we suppose that the system, its setup, its timing, its rules and culture of operation, is devoid of history. Parts have been adapted at every level, and the form they are in now is a partial function of the form they were in before. If it were not so hard to know which aspects of a system transfer well to the real world, business models would be more successful.
The upshot is that designers must always work from the present, mindful of the inertia of users. If we create a design that is too distant from current activities, however cognitively efficient we think it is, users will either not adopt it – so it is de facto ineffective – or they will co-opt it for their own purposes. The gulf between the theories we have and the designs we need remains wide.