Keith E. Stanovich
What Intelligence Tests Miss
Some critics of the intelligence concept like to imply that intelligence tests are just parlor games that measure nothing important. Alternatively, other critics allow that there may be something to the intelligence concept but that “we’re all intelligent in our own way”—which amounts to the same thing. All of these critics are wrong. In addition, critics often imply that IQ does not predict behavior in the real world. That claim is also wrong.2 Correspondingly, however, the positions of some of the more vociferous champions of the traditional intelligence concept are not without their flaws. For example, some of these IQ advocates like to imply that IQ tests capture most of what is important in cognition. I will cite in this book dozens of studies that are a refutation of this idea. In short, research is rendering obsolete the arguments of the harshest critics of IQ tests, along with those of their counterparts—the vociferous cheerleaders for a traditional concept of IQ.
Discussions of intelligence often go off the rails at the very beginning by failing to set the concept within a general context of cognitive functioning, thus inviting the default assumption that intelligence is the central feature of the mind. I will try to preclude this natural default by outlining a model of the mind and then placing intelligence within it. Cognitive scientists have made remarkable progress in sketching out the basics of how the mind works in the last twenty years. Indeed, ten years ago, cognitive scientist Steven Pinker titled a very influential book How the Mind Works. Twenty years before his book, the use of this title would have been viewed as laughably overreaching. Now that is no longer true. Nevertheless, the generic models of the mind developed by cognitive scientists often give short shrift to a question that the public is intensely interested in—how and why do people differ from each other in their thinking? In an attempt to answer that question, I am going to present a gross model of the mind that is true to modern cognitive science but that emphasizes individual differences in ways that are somewhat new. My model builds on a current consensus view of cognition termed dual-process theory.
Type 1 and Type 2 Processing
Evidence from cognitive neuroscience and cognitive psychology is converging on the conclusion that the functioning of the brain can be characterized by two different types of cognition having somewhat different functions and different strengths and weaknesses. That there is a wide variety of evidence converging on this conclusion is indicated by the fact that theorists in a diverse set of specialty areas (including cognitive psychology, social psychology, cognitive neuroscience, and decision theory) have proposed that there are both Type 1 and Type 2 processes in the brain.3
The defining feature of Type 1 processing is its autonomy. Type 1 processes are termed autonomous because: 1) their execution is rapid, 2) their execution is mandatory when the triggering stimuli are encountered, 3) they do not put a heavy load on central processing capacity (that is, they do not require conscious attention), 4) they are not dependent on input from high-level control systems, and 5) they can operate in parallel without interfering with each other or with Type 2 processing. Type 1 processing would include behavioral regulation by the emotions; the encapsulated modules for solving specific adaptive problems that have been posited by evolutionary psychologists; processes of implicit learning; and the automatic firing of overlearned associations.4 Type 1 processing, because of its computational ease, is a common processing default. Type 1 processes are sometimes termed the adaptive unconscious in order to emphasize that Type 1 processes accomplish a host of useful things—face recognition, proprioception, language ambiguity resolution, depth perception, etc.—all of which are beyond our awareness. Heuristic processing is a term often used for Type 1 processing—processing that is fast, automatic, and computationally inexpensive, and that does not engage in extensive analysis of all the possibilities.
Type 2 processing contrasts with Type 1 processing on each of the critical properties that define the latter. Type 2 processing is relatively slow and computationally expensive—it is the focus of our awareness. Many Type 1 processes can operate at once in parallel, but only one Type 2 thought or a very few can be executing at once—Type 2 processing is thus serial processing. Type 2 processing is often language based and rule based. It is what psychologists call controlled processing, and it is the type of processing going on when we talk of things like “conscious problem solving.”
One of the most critical functions of Type 2 processing is to override Type 1 processing. This is sometimes necessary because Type 1 processing is “quick and dirty.” This so-called heuristic processing is designed to get you into the right ballpark when solving a problem or making a decision, but it is not designed for the type of fine-grained analysis called for in situations of unusual importance (financial decisions, fairness judgments, employment decisions, legal judgments, etc.). Heuristic processing depends on benign environments. In hostile environments, it can be costly.
All of the different kinds of Type 1 processing (processes of emotional regulation, Darwinian modules, associative and implicit learning processes) can produce responses that are irrational in a particular context if not overridden. In subsequent chapters, we shall discuss how humans act as cognitive misers by engaging in attribute substitution—the substitution of an easy-to-evaluate characteristic for a harder one even if the easier one is less accurate. For example, the cognitive miser will substitute the less effortful attributes of vividness or salience for the more effortful retrieval of relevant facts. But when we are evaluating important risks—such as the risk of certain activities and environments for our children—we do not want to substitute vividness for careful thought about the situation. In such situations, we want to employ Type 2 override processing to block the attribute substitution of the cognitive miser.
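For readers who find a sketch in code helpful, attribute substitution can be caricatured as answering a hard question by computing an easier, correlated one. The sketch below is a toy illustration only; the activities, "vividness" scores, and base rates are invented for the example and are not data from the research discussed here.

```python
# Toy sketch of attribute substitution by the "cognitive miser".
# All names and numbers below are illustrative assumptions.

def effortful_judgment(activity, base_rates):
    """Type 2 route: retrieve the relevant statistic (slow, accurate)."""
    return base_rates[activity]

def miserly_judgment(activity, vividness):
    """Type 1 route: substitute the easy attribute (fast, error-prone)."""
    return vividness[activity]

# Illustrative inputs: shark attacks are vivid but rare; car trips are
# mundane but comparatively risky.
base_rates = {"shark attack": 0.0001, "car trip": 0.01}
vividness = {"shark attack": 0.9, "car trip": 0.2}

for activity in base_rates:
    fast = miserly_judgment(activity, vividness)
    slow = effortful_judgment(activity, base_rates)
    print(f"{activity}: heuristic={fast}, effortful={slow}")
```

The two routes rank the risks in opposite orders, which is exactly the failure mode the override function is needed to correct.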
In order to override Type 1 processing, Type 2 processing must display at least two related capabilities. One is the capability of interrupting Type 1 processing and suppressing its response tendencies. Type 2 processing thus involves inhibitory mechanisms of the type that have been the focus of recent work on executive functioning.5
But the ability to suppress Type 1 processing gets the job only half done. Suppressing one response is not helpful unless there is a better response available to substitute for it. Where do these better responses come from? One answer is that they come from processes of hypothetical reasoning and cognitive simulation that are a unique aspect of Type 2 processing.6 When we reason hypothetically, we create temporary models of the world and test out actions (or alternative causes) in that simulated world.
In order to reason hypothetically we must, however, have one critical cognitive capability—we must be able to prevent our representations of the real world from becoming confused with representations of imaginary situations. For example, when considering an alternative goal state different from the one we currently have, we must be able to represent our current goal and the alternative goal and to keep straight which is which. Likewise, we need to be able to differentiate the representation of an action about to be taken from representations of potential alternative actions we are trying out in cognitive simulations. But the latter must not infect the former while the mental simulation is being carried out. Otherwise, we would confuse the action about to be taken with alternatives that we were just simulating.
Cognitive scientists call the confusion of representational states representational abuse, and it is a major issue for developmental psychologists who are trying to understand the emergence of pretense and pretend play in children (for example, a child saying “this banana is a phone”). Playing with the banana as a phone must take place without actual representations of banana and phone in the mind becoming confused. In a famous article, developmental psychologist Alan Leslie modeled the logic of pretense by proposing a so-called decoupling operation, which is illustrated in Figure 3.1.7 In the figure, a primary representation is one that is used to directly map the world and/or is also directly connected to a response. Leslie modeled pretense by positing a so-called secondary representation that was a copy of the primary representation but that was decoupled from the world so that it could be manipulated—that is, be a mechanism for simulation.
As Leslie notes, the ongoing simulation leaves intact the tracking of the world by the primary representation: “Meanwhile the original primary representation, a copy of which was raised to a second order, continues with its definite and literal reference, truth, and existence relations. It is free to continue exerting whatever influence it would have on ongoing processes” (1987, p. 417). Nonetheless, dealing with secondary representations—keeping them decoupled—is costly in terms of cognitive capacity. Evolution has guaranteed the high cost of decoupling for a very good reason. As we were becoming the first creatures to rely strongly on cognitive simulation, it was especially important that we not become “unhooked” from the world too much of the time. Thus, dealing with primary representations of the world always has a special salience for us. An indication of the difficulty of decoupling is a behavior such as closing one’s eyes while engaged in deep thought (or looking up at the sky or averting one’s gaze). Such behaviors are attempts to prevent changes in our primary representations of the world from disrupting a secondary representation that is undergoing simulation.
Figure 3.1. Cognitive decoupling (adapted from Leslie, 1987)
We have, in Leslie’s conception, a mechanistic account of how pretense, and mental simulation in general, are carried out without destabilizing primary representations. Other investigators have called the mental space where simulations can be carried out without contaminating the relationship between the world and primary representations a “possible world box.” The important issue for our purposes here is that decoupling secondary representations from the world and then maintaining the decoupling while simulation is carried out is a Type 2 processing operation. It is computationally taxing and greatly restricts the ability to do any other Type 2 operation. In fact, decoupling operations might well be a major contributor to a distinctive Type 2 property—its seriality.
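The core of Leslie's decoupling idea can be rendered as a very small sketch in code: the secondary representation is a copy, so simulation can alter the copy while the primary representation keeps tracking the world. This is a toy illustration of the logic only, not Leslie's actual model; the banana/phone contents follow the pretend-play example above.

```python
# Toy sketch of cognitive decoupling: a secondary representation is a
# copy of the primary one, so it can be manipulated in simulation
# without "infecting" the primary representation that tracks the world.
import copy

primary = {"object": "banana", "use": "food"}  # tracks the world
secondary = copy.deepcopy(primary)             # decoupled copy

secondary["use"] = "phone"                     # pretend play / simulation

# The simulation leaves the primary representation intact:
print(primary["use"])    # still "food"
print(secondary["use"])  # "phone" in the simulated world
```

The cost Leslie emphasizes is not the copying itself but keeping the two representations from becoming confused while the simulation runs.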
A Temporary “Dual-Process” Model of the Mind and Individual Differences
Figure 3.2 represents a preliminary model of mind, based on what I have outlined thus far. I have said that by taking offline early representations triggered by Type 1 processing, we can often optimize our actions. Type 2 processing (slow, serial, computationally expensive) is needed to inhibit Type 1 processing and to sustain the cognitive decoupling needed to carry out processes of imagination whereby alternative responses are simulated in temporary models of the world. The figure shows the override function we have been discussing as well as the Type 2 process of simulation. Also rendered in the figure is an arrow indicating that Type 2 processes receive inputs from Type 1 computations. These so-called preattentive processes fix the content of most Type 2 processing.
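The control flow of this preliminary model can be sketched in a few lines of code. The sketch is my own illustrative rendering of the override logic, under assumed names and a made-up scoring rule; it is not a model from the book or the literature.

```python
# Toy rendering of the dual-process override logic: Type 1 supplies a
# fast default response; Type 2, when engaged, tests that default in a
# decoupled simulation and overrides it if it fares badly.
# All associations and scores are illustrative assumptions.

def type1_default(stimulus):
    """Fast, autonomous route: fire an overlearned association."""
    associations = {"vivid threat": "avoid", "familiar food": "approach"}
    return associations.get(stimulus, "approach")

def type2_process(default, simulate):
    """Slow, serial route: test the default in a simulated world and
    override it when the simulation finds it wanting."""
    if simulate(default) < 0:   # default fares badly in the simulation
        return "override: analyze further"
    return default              # default stands; no override needed

# Illustrative run: a vivid but statistically minor threat. The assumed
# simulation scores the Type 1 default negatively, so Type 2 overrides.
default = type1_default("vivid threat")
final = type2_process(default, simulate=lambda a: -1 if a == "avoid" else 1)
print(default, "->", final)
```

Note that the Type 1 output is computed first and unconditionally, mirroring the point that preattentive processes fix the content that Type 2 processing then works over.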
Where does intelligence fit into this model? In order to answer that question, I first need to stress a point of considerable importance. A process can be a critical component of cognition, yet not be a source of individual differences (because people do not tend to vary much in the process). Such is the case with many Type 1 processes. They help us carry out a host of useful information processing operations and adaptive behaviors (depth perception, face recognition, frequency estimation, language comprehension, reading the intentions of others, threat detection, emotive responses, color perception, etc.)—yet there are not large individual differences among people on many of these processes. This accounts for some of the confusion surrounding the use of the term intelligence in cognitive science.
In a magazine article or textbook on cognitive science, the author might describe the marvelous mechanisms we have for recognizing faces and refer to this as “a remarkable aspect of human intelligence.” Likewise, a book on popular science might describe how we have mechanisms for parsing syntax when we process language and also refer to this as “a fascinating product of the evolution of the human intellect.” Finally, a textbook on evolutionary psychology might describe the remarkably intelligent mechanisms of kin recognition that operate in many animals, including humans. Such processes—face recognition, syntactic processing, detection of gaze direction, kin recognition—are all parts of the machinery of the brain. They are also sometimes described as being part of human intelligence. Yet none of these processes are ever tapped on intelligence tests. What is going on here? Is there not a contradiction?
Figure 3.2. A preliminary dual-process model
In fact, there is not a contradiction at all if we understand that intelligence tests assess only those aspects of cognitive functioning on which people tend to show large differences. What this means is that intelligence tests will not routinely assess all aspects of cognitive functioning. There are many kinds of Type 1 processing that are important for us as a species, but on which there tend not to be large differences between people in the efficiency of functioning. Face recognition, syntactic processing, gaze direction detection, and kin recognition provide four examples of such domains.8 This is why such processes are not assessed on intelligence tests. Intelligence tests are a bit like the personal ads in the newspaper—they are about the things that distinguish people, not what makes them similar. That is why the personals contain entries like “enjoy listening to Miles Davis” but not “enjoy drinking when I’m thirsty.”
For this reason, intelligence tests do not focus on the autonomous Type 1 processing of the brain. Intelligence tests, instead, largely tap Type 2 processing. And they tap to a substantial extent the operation I have been emphasizing in this chapter—cognitive decoupling. Like all Type 2 processing, decoupling is a cognitively demanding operation. Decoupling operations enable hypothetical thinking. They must be continually in force during any ongoing mental simulations, and the raw ability to sustain such simulations while keeping the relevant representations decoupled is one key aspect of the brain’s computational power that is being assessed by measures of intelligence. This is becoming clear from converging work on executive function and working memory, which both display correlations with intelligence that are quite high.9 The high degree of overlap in individual differences on working memory/executive functioning tasks and individual differences in intelligence is probably due to the necessity for sustained decoupling operations on all the tasks involved. Neurophysiological studies converge with this conclusion as well.
In saying that an important aspect of intelligence is the ability to sustain cognitive decoupling, I really should be saying instead: an important aspect of fluid intelligence.10 I am referring here to the Cattell/Horn/Carroll theory of intelligence mentioned in the previous chapter. Fluid intelligence (Gf) reflects reasoning abilities operating across a variety of domains—in particular, novel ones. Crystallized intelligence (Gc) reflects declarative knowledge acquired from acculturated learning experiences. Thus, Type 2 processes are associated with Gf. I shall work Gc into the model shortly, but will first turn to an even more critical complication.
Thinking Dispositions versus Cognitive Ability
At this point, we need to back up and think about how we explain behavior in the world. We will begin with an example of a lady walking on a cliff and imagine three incidents, three stories. The three stories are all sad—the lady dies in each. The purpose of this exercise is to get us to think about how we explain the death in each story. In incident A, a woman is walking on a cliffside by the ocean, and a powerful and totally unexpected wind gust blows her off the cliff; she is crushed on the rocks below. In incident B, a woman is walking on a cliffside by the ocean and goes to step on a large rock, but the rock is not a rock at all. Instead, it is actually the side of a crevice, and she falls down the crevice and dies. In incident C, a woman attempts suicide by jumping off an ocean cliff and dies when she is crushed on the rocks below.
In all three cases, at the most basic level, when we ask ourselves for an explanation of why the woman died, the answer is the same. The same laws of physics in operation in incident A (the gravitational laws that describe why the woman will be crushed upon impact) are also operative in incidents B and C. However, we feel that the laws of gravity and force somehow do not provide a complete explanation of what has happened in incidents B and C. This feeling is correct. The examples each call for a different level of explanation if we wish to zero in on the essential cause of death.
In incident A it is clear that nothing more than the laws of physics are needed (the laws of wind force, gravity, and crushing). Scientific explanations at this level—the physical level—are important, but for our purposes here they are relatively uninteresting. In contrast, the difference between incidents B and C is critical to the subsequent arguments in this book.
In analyzing incident B, a psychologist would be prone to say that when processing a stimulus (the crevice that looked somewhat like a rock) the woman’s information processing system malfunctioned—sending the wrong information to response decision mechanisms, which then resulted in a disastrous motor response. Cognitive scientists refer to this level of analysis as the algorithmic level.11 In the realm of machine intelligence, this would be the level of the instructions in the abstract computer language used to program the computer (FORTRAN, COBOL, etc.). The cognitive psychologist works largely at this level by showing that human performance can be explained by positing certain information processing mechanisms in the brain (input coding mechanisms, perceptual registration mechanisms, short- and long-term-memory storage systems, etc.). For example, a simple letter pronunciation task might entail encoding the letter, storing it in short-term memory, comparing it with information stored in long-term memory, making a response decision if a match occurs, and then executing a motor response. In the case of the woman in incident B, the algorithmic level is the right level to explain her unfortunate demise. Her perceptual registration and classification mechanisms malfunctioned by providing incorrect information to response decision mechanisms, causing her to step into the crevice.
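Since the algorithmic level is literally the level of program instructions, the letter-pronunciation example lends itself to a direct sketch. The decomposition below is an illustrative one of my own, following the stages named in the text; the tiny long-term-memory store is an assumed stand-in.

```python
# Illustrative algorithmic-level decomposition of a letter
# pronunciation task: encode, hold in short-term memory, compare with
# long-term memory, decide, respond. The LTM contents are assumptions.

LONG_TERM_MEMORY = {"A": "ay", "B": "bee", "C": "see"}

def encode(stimulus):
    """Input coding: normalize the raw stimulus."""
    return stimulus.strip().upper()

def pronounce_letter(stimulus):
    short_term_memory = encode(stimulus)             # encode, hold in STM
    match = LONG_TERM_MEMORY.get(short_term_memory)  # compare with LTM
    if match is not None:                            # response decision
        return match                                 # "motor" response
    return None                                      # no match, no response

print(pronounce_letter(" b "))
```

A malfunction at any one stage, such as the misclassification in incident B, propagates forward and produces the wrong response, which is why the algorithmic level is the right level for explaining that incident.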