10 Pattern Recognition


Though intelligence doesn't allow us to overcome the second law (of thermodynamics), it remains true that creatures with more acute senses and more powerful brains will see pattern where others see randomness.

George Johnson, Fire in the Mind

Obviously, this is an act of the imagination. Things are perceived, of course, partly by the naked eye and partly by the mind, which fills the gaps with guesswork based on learning and experience, and thus constructs a whole out of the fragments that the eye can see.

Clausewitz, On War: 109

The "highest" form of Aids to Learning is pattern recognition, which is closely related to the building block mechanism of complex adaptive systems covered in Chapter 1. It is a nonlinear cognitive process that involves the difficult transition from the in-grained habit of deductive reductionist thought to more inductive processes, in which powers of pattern recognition are enhanced and intuition is elevated. By intuition, we mean not so much instinct, as the product of the experiential provided by training and education, as well as experience itself. A major requirement for pattern recognition capabilities is to infuse lower echelons with both the confidence and competence to engage in semi-autonomous action in accordance with Van Creveld's Rules. As pattern recognition is the analogue to the building block mechanism of cas, recognition-primed decision making is the scholarly exploitation of pattern recognition in the field of the cognitive sciences.

Gary Klein, an applied cognitive psychologist who has done work for the Air Force Human Resources Laboratory, the Army Research Institute, and the Marine Corps' Combat Development Command, has been a pioneer in pattern recognition. In 1989, he wrote: (1)

It is time to admit that the theories and ideals of decision making we have held over the past 25 years are inadequate and misleading, having produced unused decision aids, ineffective decision training programs and inappropriate doctrine...DoD often follows the lead of behavioral scientists, so it is important to alert DoD policy makers to new developments in models of decision making. (i)
The culprit is an ideal of analytical decision making which asserts that we must always generate options systematically, identify criteria for evaluating these options, assign weights to the evaluation criteria, rate each option on each criterion and tabulate the scores to find the best option. We call this a model of concurrent option comparison, the idea being that the decision maker deliberates about several options concurrently. The technical term is multiattribute utility analysis.
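[A minimal sketch, in Python, of the weighted-scoring procedure just described; the options, criteria, weights, and ratings are hypothetical, invented only to show the mechanics of multiattribute utility analysis:]

```python
# Minimal sketch of multiattribute utility analysis (concurrent option comparison).
# The options, criteria, weights, and ratings are hypothetical illustrations.

options = {
    "wooded choke point": {"concealment": 9, "artillery_access": 3, "delay_effect": 8},
    "open area":          {"concealment": 4, "artillery_access": 9, "delay_effect": 7},
}

weights = {"concealment": 0.2, "artillery_access": 0.5, "delay_effect": 0.3}

def utility(ratings, weights):
    """Weighted sum of criterion ratings for one option."""
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

# Rate each option on each criterion, tabulate the scores, and take the best.
scores = {name: utility(ratings, weights) for name, ratings in options.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```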
 
Another analytical ideal is decision analysis, a technique for evaluating an option as in a chess game. The decision maker looks at a branching tree of responses and counter-responses, and estimates the probability and utility of each possible future state in order to calculate maximum and minimum outcomes. Both of these methods, multiattribute utility analysis and decision analysis, have been used to build decision training programs and automated decision aids. (ii)
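[A similarly hedged sketch of decision analysis: each option is evaluated over a small set of chance outcomes to obtain its expected, maximum, and minimum results; all probabilities and utilities below are invented for illustration:]

```python
# Minimal sketch of decision analysis over chance outcomes.
# Each option maps to a list of (probability, utility) branches; numbers are invented.

outcomes = {
    "mine the choke point": [(0.6, 8), (0.4, 2)],   # enemy delayed vs. bypasses
    "attack in the open":   [(0.7, 7), (0.3, 3)],   # artillery effective vs. not
}

def expected_utility(branches):
    """Probability-weighted utility across the chance branches of one option."""
    return sum(prob * util for prob, util in branches)

for option, branches in outcomes.items():
    utilities = [util for _, util in branches]
    print(option,
          "expected:", expected_utility(branches),
          "maximum:", max(utilities),
          "minimum:", min(utilities))
```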
These strategies sound good, but in practice they are often disappointing. They do not work under time pressure because they take too long. Even when there is enough time, they require much work and lack flexibility for handling rapidly changing field conditions.
Imagine this situation (which we actually observed): An Army brigade planning staff engages in a 5-hour command and control exercise.
 
One requirement is to delay the enemy advance in a specific sector. The operations and training officer (S3) pinpoints a location that seems ideal for planting mines. It is a choke point in a wooded area where the road can be destroyed. A plan develops to crater the road, mine the sides of the road and direct the artillery on the enemy as he either halts or slows his advance to work around the obstacles. During the planning session, there are objections that it is impossible to have forward observers call in the artillery, and that without artillery support to take advantage of the enemy slowdown, the mines would do no good. Someone suggests using FASCAM (family of scatterable mines), but another person notes that FASCAM will not work in trees. Only after this thorough consideration and subsequent rejection of his original choice does the S3 consider an open area also favorable for an artillery attack and select it as the point of the action.
Suppose the planners had tried to list each and every available option, every possible site all over the map, and then evaluate the strengths and weaknesses of each? There was simply not enough time in the session to do this for each possible decision. We counted 27 decisions made during the five hours, an average of one every 11 minutes. Even this is misleading, since it does not take into account time taken up by interruptions and communications. We estimate that about 20 of the decisions took less than one minute, five took less than 5 minutes and perhaps only two were examined for more than 5 minutes. Obviously, there was not enough time to handle each decision using analytical concurrent option comparison. And if we tried to approach only a few choices in this way, which ones? Screening decisions for deliberation is itself an added complication. Analytical strategies just will not work in this type of setting.
 
I am not saying that people should never deliberate about several options. Clearly, there are times to use such analytical strategies. We have watched DoD design engineers wrestle with problems such as how to apply a new technology to an existing task. Here it did make sense to carefully list all the options for input displays and to systematically analyze strengths and weaknesses to get down to a small number of configurations for testing.
The point for this article is that there are different ways to make decisions, analytical ways and recognitional ways, and that we must understand the strengths and limits of both in order to improve military decision making. Too many people say that the ideal is for soldiers to think more systematically, to lay out all their options and to become, in effect, miniature operations researchers. This attitude is even built into military doctrine. For example, US Army Field Manual 101-5, Staff Organization and Operations, advises decision makers to go through the steps of multiattribute utility analysis. (iii) Such advice may often be unworkable and sometimes may be dangerous. To understand why, we must get a clear idea of what skilled decision makers do.
 
For the past four years, my colleagues and I have been studying experienced decision makers, faced with real tasks that often have life and death consequences. We have studied tank platoon leaders, battle commanders engaged in operational planning at Fort Leavenworth, Fort Riley, Fort Hood, Fort Stewart and the National Training Center at Fort Irwin. (Prior to that, we observed Air Force and Army battle commanders at BLUE FLAG.) We studied urban fireground commanders and wildland fireground commanders (with over 20 years of experience) as they conducted actual operations. We also studied computer programmers, paramedics, maintenance officers and design engineers. Many of the decisions we examined were made under extreme time pressure. In some domains more than 85 percent of the decisions were made in less than 1 minute.
We found that concurrent option comparison hardly ever occurred. That is, experienced decision makers rarely thought about two or more options and tried to figure out which was better. In this article, I will describe the recognitional decision strategies we did find, differentiate between the situations that call for analytical or recognitional strategies and examine some of the implications for military decision making.
 
Recognitional Decision Making
When we told one commander that we were studying decision making, he replied that he never made any decisions! What he meant was that he never constructed two or more options and then struggled to choose the best one. After interviewing him, we learned that he did handle decisions all the time. After studying over 150 experienced decision makers and 450 decisions, we concluded that his approach to decision making is typical of people with years of experience and we have derived a model of this typical strategy.
Basically, proficient decision makers are able to use their experience to recognize a situation as familiar, which gives them a sense of what goals are feasible, what cues are important, what to expect next and what actions are typical in that situation. The ability to recognize the typical action means that experienced decision makers do not have to do any concurrent deliberation about options. They do not, however, just blindly carry out the actions. They first consider whether there are any potential problems and, only if everything seems reasonable, do they go ahead....
 
We call this a recognition-primed decision (RPD). The officer used experience to recognize the key aspects of the situation, enabling a quick reaction. Once a decision maker identifies the typical action, there is usually a step of imagining what will happen if the action is carried out in this situation. If any pitfalls are imagined, then the officer jettisons it and thinks about the next most typical action....the experienced decision makers are not searching for the best option. They only want to find one that works, a strategy called "satisficing." [Recall George S. Patton's saying, "A good plan executed now is better than a perfect plan next week."] We have found many cases where decision makers examined several options, one after the other, without ever comparing one to another. Because there is no deliberated option comparison, experienced decision makers may feel they are relying on something mysterious called "intuition" and they may be mildly defensive about it if they are questioned carefully. One implication of our work is that this is not a mysterious process. It is a recognitional, pattern-matching process that flows from experience. It should not be discounted just because all aspects of it are not open to conscious scrutiny.
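[A minimal sketch of the recognition-primed, satisficing loop just described: generate the most typical action for the recognized situation, mentally simulate it, and accept the first action that survives the simulation. The situation label, candidate actions, and the simulate() stub are hypothetical stand-ins for the decision maker's recognition and imagination:]

```python
# Minimal sketch of a recognition-primed decision (RPD) as a satisficing loop.
# Situation labels, actions, and the simulate() stub are hypothetical illustrations.

typical_actions = {
    "delay enemy in wooded sector": [
        "crater road and mine the shoulders",
        "ambush with artillery at the open area",
        "fall back to the next choke point",
    ],
}

def simulate(action):
    """Stand-in for mentally imagining the action in context.
    Returns a list of pitfalls; an empty list means the action looks workable."""
    if "mine" in action:
        return ["no forward observers available to direct artillery"]
    return []

def recognition_primed_decision(situation):
    for action in typical_actions[situation]:   # most typical action first
        pitfalls = simulate(action)
        if not pitfalls:                         # satisfice: first workable action wins
            return action
        # otherwise jettison it and consider the next most typical action
    return None

print(recognition_primed_decision("delay enemy in wooded sector"))
```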
Figure 10.1 shows a schematic drawing of the RPD model. It shows that if the events contradict expectancies, the experienced decision maker may reexamine the way the situation is being understood. The basic thrust of the model is that decision makers handle decision points, where there are several options, by recognizing what the situation calls for rather than by calculating the strengths and weaknesses of the different options. The concept of recognitional decision making has been developing only in the last few years.

We have found that even with nonroutine incidents, experienced decision makers handle approximately 50 to 80 percent of decisions using recognitional strategies without any effort to contrast two or more options. If we include all decision points, routine plus nonroutine, the proportion of RPDs goes much higher, more than 90 percent. For novices, however, the rate of RPDs can dip to 40 percent. We have also found that when there is deliberation, experienced decision makers deliberate more than novices about the nature of the situation, whereas novices deliberate more than experts about which response to select. In other words, it is more typical of people with lower levels of experience to focus on careful thinking about the best option.
What about team decision making? Since many decisions are made within a network of coordinating organizations and by several people at each node in the network, we have also examined distributed decision making.
 
Teams and networks demand more justification and conflict resolution, so we expect to find more examples of concurrent option comparison; that is, contrasting two or more options. However, in our studies, this has not occurred. Earlier I described a 5-hour command and control planning session in which we tabulated 27 decisions. (iv) Only one of these showed any evidence of concurrent option comparison....Similarly, our other studies of team decision making found the team behaving much like individuals-generating a plausible option, evaluating it by imagining what could go wrong, trying to "satisfice," trying to improve the option to overcome its limitations and sometimes rejecting or tabling an option to move on in a more promising direction.
How is the RPD Model Different from Analytical Decision Making?
The RPD model describes how choices can be made without comparing options: by perceiving a situation as typical; perceiving the typical action in that type of situation; and evaluating potential barriers to carrying out the action. This recognitional approach contrasts with analytical decision making in several ways.
By contrasting recognitional and analytical decision making, we can see the strengths of each. Recognitional decision making is more important when experienced personnel are working under time pressure on concrete, contextually dependent tasks in changing environments and have a "satisficing" criterion of selecting the first option that looks like it will work. It comes into play when the unit is an individual or a cohesive team that does not reach deadlocks over conflicts. Recognitional decisions can ensure that the decision maker is poised to act. Its disadvantages are that it is hard to articulate the basis of a decision and difficult to reconcile conflicts. Furthermore, it cannot ensure "optimal" courses of action, which is especially important for anticipating the opponent's strategies in preparation for the worst case. Also, it is risky to let inexperienced personnel "shoot from the hip."
 
Concurrent option comparison has the opposite strengths and weaknesses. It is more helpful for novices who lack an experience base and for seasoned decision makers confronting novel conditions. It is apt to be used when there is ample time for the decision. It comes into play when the data are abstract, preventing decision makers from using concrete experiences. It makes it easy to break down new tasks and complex tasks that recognition cannot handle. It is especially important when there is a need to justify the decision to others, since justification usually requires us to list reasons and indicate their importance. Analytical decision making is more helpful when there is a conflict to be resolved, especially when the conflict involves people with different concerns. It is usually a better strategy to use when one needs an optimal solution. And finally, analytical decision making is needed when the problem involves so much computational complexity that recognitional processes are inadequate. However, its cost is more time and effort, and more of a disconnect with the experience of the decision maker....
I am not claiming that there is a right way or a wrong way to make decisions. Different conditions call for different strategies. My goal is not to reject analytical decision making, but to make clear what its strengths and weaknesses are so that it can be applied more fruitfully. For too long we have emphasized one strategy-the analytical one. That is the one required by doctrine. That is the one we have been teaching. That is the one we have been building decision aids to promote.
 
Problems with Analytical Decision Making
We create problems of credibility when we present doctrine about one right way to make decisions-the analytical strategy-and thereby force officers and soldiers to ignore doctrine in making the vast majority of time-pressured operational decisions during training exercises. It does not take them long to realize that doctrine is irrelevant in this area and to wonder whether it can be trusted in other areas.
We can create problems in efficiency when we teach analytical decision techniques to military personnel who will have little or no opportunity to use them. Worse yet, we create problems in effectiveness for personnel who try to apply these techniques and fail.
 
We create problems of competence when we build decision aids and decision support systems that assume analytical decision strategies. These systems are likely to reduce inputs to the form of abstract alphanumeric data and to restrict the operator's job to that of assessing probabilities, entering subjective utilities, providing context-free ratings and so forth. This misses the skilled operator's ability to size up situations, to notice incongruities and to think up ways to improve options. In other words, these decision aids can interfere with and frustrate the performance of skilled operators. It is no wonder that field officers reject decision aids requiring them to use lengthy analytical processes when the time available is not adequate.
Human error is often explained in terms of decision bias. (v) The concept of decision bias is that people are predisposed to make poor decisions because of several inherent tendencies, such as inaccurate use of base rates, overreliance on those data that are more readily available or appear more representative, low ability to take sample size into account and difficulty in deducing logical conclusions. The argument is often made by scientists who want to convince us that human decision makers (other than themselves) cannot be trusted, and we therefore need these scientists to develop decision aids to keep the rest of us from making grievous errors.
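[One of the biases listed above, inaccurate use of base rates, can be made concrete with a short Bayes calculation; the numbers are invented solely to illustrate the effect:]

```python
# Illustration of base-rate neglect, one of the biases mentioned above.
# All numbers are invented: a report source is right 80% of the time, but the
# event it reports (an attack in this sector) has only a 5% base rate.

p_attack = 0.05                 # base rate of an attack
p_report_given_attack = 0.80    # source reports attack when there is one
p_report_given_no_attack = 0.20 # false alarm rate

p_report = (p_report_given_attack * p_attack +
            p_report_given_no_attack * (1 - p_attack))

# Bayes' rule: probability of an attack given the report.
p_attack_given_report = p_report_given_attack * p_attack / p_report
print(round(p_attack_given_report, 2))   # ~0.17, far below the source's 80% accuracy
```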
 
However, the decision bias argument has been recently attacked as unjustified and self-serving. (vi) The evidence that humans are inherently biased decision makers comes from experiments run under artificial laboratory conditions. Furthermore, judgment biases appear to have a very small impact outside laboratory conditions. It is easy to use the benefit of hindsight to label each accident an example of decision bias that can best be controlled by more rigorous analytical procedures.
My own impression is that experienced decision makers do an excellent job of coping with time pressure and dynamic conditions. Rather than trying to change the way they think, we should be finding ways to help them. We should be developing techniques for broadening their experience base through training, so that they can make situation assessments more quickly and accurately. If we can give up our old single-theory analytical perspectives and appreciate the fact that there are a variety of decision strategies, we can improve operational decision making in a number of ways.
 
One opportunity is to improve strategies for effective team decision making. Staff exercises are too often a charade in which the staff presents options to a commander who then picks the best one. Usually, however, the staff knows which option it prefers. It presents, as other options, ones that had been rejected, to round out the field. This procedure can be inefficient because it divorces the situation assessment from the response selection step and gives the subordinates the more demanding job of assessing the situation. It asks the commander to make a choice rather than working with the team to modify and improve options. There may be times when it is more effective to have the commander work with the staff to examine the situation and then turn over to them the job of preparing implementation plans. If alternative viewpoints and criticisms are wanted, they should come during the assessment and initial planning, so as to strengthen the option to be implemented.
A second opportunity is to understand how commanders can present their strategic intent so that subordinates are able to improvise effectively. It is dangerous to have subordinates ignoring direction and carrying out their own plans, but it is also dangerous to have subordinates carrying out plans that no longer make sense. Improvisation arises when there is a recognition that the situation has fundamentally changed. We need to understand how commanders can recognize and exploit conditions.
 
A third opportunity is to revise training procedures. Certain specialties need training in analytical decision strategies. But generally, training can be more productive by focusing on situation assessment. Along with teaching principles and rules, we should present actual cases to develop sharper discriminations and improve ability to anticipate the pitfalls of various options. The goal of analytical decision training is to teach procedures that are so abstract and powerful that they will apply to a wide variety of cases. Had this succeeded, it would have been quite efficient. However, we have learned that such rules do not exist. Instead, we need to enhance expertise by presenting trainees with a wide variety of situations and outcomes, and letting them improve their recognitional abilities. At the team level, we can be using after-action reviews to present feedback about the process of decision making and not just the content of the options that should have been selected.
A fourth opportunity is to improve decision support systems. We must insist that the designers of these systems have appropriate respect for the expertise of proficient operators and ensure that their systems and interfaces do not compromise this expertise. (vii) We must find ways to present operators with displays that will make situational assessment easier and more accurate. We will also want displays that will make it easier for operators to assess operations in order to discover potential problems. In other words, we want to build decision support systems that enhance recognitional as well as analytical decision strategies.

Decision making based upon pattern recognition has prompted responses such as the following, (2) which I read not so much as an objection to nonlinearity as the finest testimony to the need for meshing the linear and the nonlinear.

Intuitive decision making is a worthy goal, but there's an irony to it, [because] intuition is based on experience. So we can conclude that as we move down the chain of command to the level of company grade officers and noncommissioned officers (NCOs), the quality of intuition will be correspondingly degraded as the level of experience decreases. Unfortunately, the further down the chain we look, the more likely it is that leaders will find themselves in situations requiring rapid decisions. Historically, commanding generals rarely, if ever, find themselves having to make immediate decisions. At the other end of the spectrum, a sergeant commanding a squad in combat may be forced to make scores of immediate decisions every day. So, the leader with the most highly developed intuition-the general-rarely uses that talent, while the leader whose need for intuition is greatest-the NCO-lacks the requisite experience.
I agree...that intuitive decision making can't be taught-it must be learned. Sadly, though, it is improbable that even a reasonable percentage of Marines are capable of such learning... Cdr Tritten noted that, "If anything, the desired Myers-Briggs Type Indicator pattern at the highest levels of the military is 'NT' (intuitive thinking)." When the Myers-Briggs Type Indicator was administered at the Marine Corps Command and Staff College in the late 1980s, the results indicated that more than 90 percent of Marine Corps officers displayed the "SJ" preference (sensing-judging), the polar opposite of the preference that indicates a capacity to develop and use intuition. While people can learn to use skills that fall outside their own set of preferences (as Cdr Tritten stated), we must remember that to do so can be very challenging, like forcing oneself to breathe. In a demanding situation, such as combat, people will typically resort to their "comfort zone."
...LtGen Bernard E. Trainor wrote: "I learned a lot in those final 72 hours of TBS [The Basic School]. Most of all I learned how easy it is to become mistake prone when cold, wet, sleepless, and fatigued over a prolonged period of time. It was then that the rote repetition of things like the five paragraph combat order, the seven troop leading steps, and immediate action drills suddenly made sense. They allow an officer to engage in automatic when the brain can't handle manual. It was a lesson I appreciated the rest of my career."
The 10 percent who possess the rare characteristics described by T.E. Lawrence as the "flash of the kingfisher"...can decide intuitively under the most demanding circumstances. For the other 90 percent of us, perhaps there is some value in the structure afforded by analytical methods.

Amen to that. It's all about moving from the complexity "shuffle" to the complexity "shuttle." Some of my best students have been SJs on the Myers-Briggs scale, and I've come to the conclusion that the Myers-Briggs type may not be as salient as I had earlier thought. And I am sure that Lieutenant General Trainor can remember days when the five-paragraph combat order didn't make sense, either.

Don't forget that the difference is not a yawning chasm. Our current complexity "shuffle" is a reflection of the 80/20 rule; that is, our ability to handle mild nonlinearity with linear approaches. Our immune system, the finest kind of complex adaptive system, operates around the 90/10 mode. We may not, probably cannot, get to 90, but we can do better than 80, and that difference can be a world better... the difference between a "shuffle" and a "shuttle."

