In 1972, the psychologist Irv Biederman asked Amos Tversky to deliver a series of talks at the State University of New York at Buffalo about his work with his colleague Daniel Kahneman. Tversky and Kahneman, both psychologists, were in the process of publishing a series of articles that would challenge classical notions of human rationality and help give birth to what is now known as behavioral economics. [1] For their joint work, Kahneman was awarded the Nobel Prize in Economic Sciences in 2002, a prize he almost certainly would have shared with Tversky had Tversky not died six years earlier. Their basic insight was that under conditions of uncertainty, individuals tend to rely on rules of thumb, which they called heuristics, when making decisions. Unfortunately, these rules of thumb often lead to errors in judgment.
In all, Tversky gave five talks at Buffalo over the course of a week. Each was aimed at a different set of academics, but the one that Biederman kept going back to was the last one, which Tversky called "Historical Interpretation: Judgment Under Uncertainty." [2] Drawing on a yet-to-be-published study by two of Tversky and Kahneman's students, Baruch Fischhoff and Ruth Beyth, [3] Tversky argued that historians were just as susceptible as anyone else to the cognitive biases that he and Kahneman had identified. Fischhoff and Beyth had conducted a survey prior to President Nixon's 1972 visits to China and the Soviet Union. They asked respondents to assign probabilities to 15 possible events (e.g., Would Mao Zedong agree to meet Nixon? Would the United States and the Soviet Union create a joint space program?). After Nixon returned, Fischhoff and Beyth asked the respondents to recall the probabilities they had assigned to the various events. They found that for those events that did occur, the respondents consistently recalled assigning higher probabilities than they actually had, and for those events that did not occur, they consistently recalled assigning lower ones. This tendency became known as hindsight bias, and in his talk Tversky spoke about the occupational hazard of historians: "the tendency to take whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story." [4]
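To see the logic of the Fischhoff-Beyth design in miniature, consider the following sketch. It is not their data or their analysis code; the probabilities below are invented purely for illustration. The point is the basic comparison: for each event, contrast the probability a respondent recalls having assigned with the probability actually assigned beforehand, splitting the events by whether they occurred.

```python
# Hypothetical illustration of the Fischhoff-Beyth design. Each event has a
# probability assigned before Nixon's trips, a probability "recalled" after
# the trips, and a flag for whether the event occurred. Numbers are invented.
events = [
    # (before, recalled_after, occurred)
    (0.60, 0.75, True),   # e.g., Mao agrees to meet Nixon
    (0.30, 0.45, True),
    (0.50, 0.35, False),  # e.g., a joint US-Soviet space program is announced
    (0.20, 0.10, False),
]

def mean_shift(rows):
    """Average (recalled - before); positive means hindsight inflated the memory."""
    return sum(recalled - before for before, recalled, _ in rows) / len(rows)

occurred = [e for e in events if e[2]]
did_not_occur = [e for e in events if not e[2]]

print(f"Mean shift, events that occurred: {mean_shift(occurred):+.2f}")       # expect > 0
print(f"Mean shift, events that did not:  {mean_shift(did_not_occur):+.2f}")  # expect < 0
```

If memory were accurate, both averages would sit near zero; hindsight bias shows up as a positive shift for events that happened and a negative shift for events that did not.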
All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This “ability” to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink invisible? [5]
Pick up a handful of biographies of historical figures -- say, Winston Churchill, Jimmy Carter, and Ronald Reagan -- and, depending on the authors' biases, you will find very different stories told about those figures, some positive, some negative. This, of course, is what makes history so difficult. In fact, Fischhoff later wrote an article about this entitled "For Those Condemned to Study the Past" (see footnote [4] below). So, what are historians to do? I would argue that, when possible, they should quantify. As the sociologist Rodney Stark noted somewhat sarcastically a few years ago (Cities of God, p. 209):
In 1962, Arthur Schlesinger, Jr. -- on leave from the Harvard history department to serve as a White House intellectual for John F. Kennedy -- told an assembled audience of American scholars that "almost all important [historical] questions are important precisely because they are not susceptible to quantitative answers." Such arrogance thrilled many of his listeners, as clever nonsense often does. For others it prompted reflection on how someone so poorly trained had risen so high in the profession of history. In truth, many of the really significant historical questions demand quantitative answers. They do so because they involve statements of proportion: they turn on words such as none, few, some, many, most, all, along with never, rarely, seldom, often, usually, always, and so on.

Of course, some historical events are impossible to quantify, but it is surprising just how many can be. For example, the archeologist Anna Collar has drawn on social network analysis (SNA) to analyze Jewish epigraphs in order to map the diffusion of Rabbinic Judaism across the Roman Empire from 100 to 500 CE (Religious Networks in the Roman Empire). Similarly, Barbara Mills and her colleagues have drawn on SNA and other quantitative techniques as part of their Southwest Social Networks (SWSN) Project, which explores the Late Pre-Hispanic Southwest. [6] Robert Woodberry has empirically demonstrated with numerous statistical models that the presence of Protestant missionaries is a strong predictor of literacy and democracy ("Missionaries and Democracy"). And economic and social historians have empirically explored numerous topics, such as the causes and consequences of the Protestant Reformation and why capitalism first emerged in the Christian West rather than in the Islamic Middle East.
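For readers curious what this kind of analysis looks like in practice, here is a toy sketch in the spirit of the SNA work by Collar and the SWSN project, using the Python library networkx. The sites and connections below are invented for illustration, not drawn from their datasets; the point is only to show how a simple centrality measure can flag plausible hubs of diffusion in a network of sites.

```python
# Toy illustration of archaeological social network analysis: nodes are sites,
# edges link sites whose material evidence suggests a connection, and
# centrality scores flag plausible brokers of diffusion. The sites and edges
# here are invented for illustration, not Collar's or the SWSN project's data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Rome", "Ostia"), ("Rome", "Aquileia"), ("Aquileia", "Salona"),
    ("Salona", "Thessalonica"), ("Thessalonica", "Ephesus"),
    ("Ephesus", "Sardis"), ("Sardis", "Antioch"), ("Ephesus", "Antioch"),
])

# Betweenness centrality: sites that sit on many shortest paths between other
# sites are plausible intermediaries through which ideas could have spread.
for site, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{site:14s} {score:.3f}")
```

Real studies of this kind layer on dating, geography, and far richer evidence, but even this minimal version shows how questions of diffusion can be posed quantitatively rather than impressionistically.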
In short, it can be done. History can be quantified. Not always, but probably more often than many would like to admit. The question is whether those who are "condemned to study the past" are willing to embrace quantitative approaches that can avoid, or at least minimize, errors in historical judgment.
[1] Daniel Kahneman and Amos Tversky, "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology 3, no. 3 (1972); "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47 (1979); Amos Tversky and Daniel Kahneman, "Belief in the Law of Small Numbers," Psychological Bulletin 76, no. 2 (1971); "Availability: A Heuristic for Judging Frequency and Probability," Cognitive Psychology 5, no. 2 (1973); "Judgment under Uncertainty: Heuristics and Biases," Science 185 (1974); "The Framing of Decisions and the Psychology of Choice," Science 211 (1981).
[2] Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds (New York and London: W. W. Norton & Company, 2017), 206.
[3] Baruch Fischhoff and Ruth Beyth, "'I Knew It Would Happen': Remembered Probabilities of Once-Future Things," Organizational Behavior and Human Performance 13 (1975).
[4] Lewis (p. 207) notes that at least "Amos was polite about it. He did not say, as he often said, 'It is amazing how dull history books are, given how much of what's in them must be invented.'" (Note: Baruch Fischhoff attributes this saying to Catherine Morlund -- see Baruch Fischhoff, "For Those Condemned to Study the Past: Reflections on Historical Judgment," in New Directions for Methodology of Social and Behavioral Science, ed. Richard A. Shweder and Donald W. Fiske (San Francisco, CA: Jossey-Bass, 1980), 79.)
[5] Quoted in Lewis, 207-08.