Philosophy of Science, Empathy, Epistemology and Naturalism

'Modern science has certainly triumphed over other methods for acquiring knowledge. But particular features of this view are simple-minded or just inaccurate, and they don’t fit well with any philosophy of science that wants to treat the history of science as evidential for its position. In its crudest form, the particularly stubborn triumphalist story of the Experimental Method (especially among scientists and laypeople) that deserves criticism goes something like this: The systematic application of “the scientific method” exposed the fatuous spiritual and mythical visions of a superstitious and ignorant world that dominated traditional cultures.'

'Some people might see promise in empathy and take heart in any sign that we can improve our empathic skills, and those individuals might try to promote the positive effects of personal empathy and stunt the negative effects. Just because empathic perspective-taking exercises might, say, reduce bullying doesn’t mean that trying to “feel” or imagine hungry children’s pain, or making that feeling a necessary condition of practical action, is the best way to reduce childhood hunger. It is better to make sure that they just get fed. So, empathic policies need to be in place as well as empathic people.'

'My version of Naturalism simply states that our foundational, “philosophical” views are continuous with our best methods and contents of science. There, the argument is that philosophy’s survival depends on assisting scientists with the normative tasks that scientists have always engaged in. By contrast, working on their own and in their typical innocence of the special sciences, philosophers have little chance of making normative recommendations that are both useful and responsible to the evidence. You can spell out Naturalism as a doctrine, and then argue that it violates all sorts of conceptual or logical constraints due to circularity or self-defeat. That is the approach most anti-naturalists take. I treat Naturalism as an empirical hypothesis whose confirmation is about as complete as that of doctrines like physicalism – the doctrine that everything in the universe is entirely physical. Even the people who pose as critics of physicalism are committed to the doctrine through their reliance on scientific practices that depend on it. The same is true of naturalism.'


J. D. Trout joined Lewis College of Science and Letters as the John and Mae Calamos Endowed Chair in Philosophy in January 2018. Before that, Trout was a professor of philosophy and psychology at Loyola University Chicago. His research interests include the philosophy of science, epistemology, and cognitive science. Most of Trout’s work explores the foundations and practical consequences of the unique intellectual significance of science. Here he discusses his revisionary account of the scientific revolution, how we got from feeble explanations to actual science, how different science actually is compared to what went before, empathy and how it often goes off the rails, why psychology is a better ground for dealing with experiences that typically raise the issue of empathy than abstractions of political theory, philosophy or economics, analytic epistemology, the key elements of the epistemological framework that supports the recommendations of what he calls ‘Ameliorative Psychology’, Strategic Reliabilism and why it is superior to Standard Analytic Epistemology, and finally what naturalism is for him.

3:16: What made you become a philosopher?  

J.D. Trout: Like most philosophers, I think there are lots of reasons I came to be a philosopher. As a kid, I always enjoyed daydreaming, regularly exploring the consequences of things that struck me as odd or inconsistent. The oddities could have any source – something in a science book I read, or things said by a devout parishioner or a drunk relative. If I had to pick one feature of philosophy that grabbed me, it was that it permitted me to think hard about issues that interested me, so hard that I would have the pleasant experience of losing track of time, of total absorption. It is, of course, a nice bonus that issues that demand difficult, undistracted reflection are often deep and beautiful. Sometimes, of course, they are just meaningless puzzles, but nonsense can be fun too. 

3:16: One of your most recent books, Wondrous Truths, offers a revisionary account of the rise of science. So first we’d better be clear what it is that you think needs revising. Can you sketch for us what the current received explanation for the rise of science is, based on the adoption of a methodology?   

JDT: Modern science has certainly triumphed over other methods for acquiring knowledge. But particular features of this view are simple-minded or just inaccurate, and they don’t fit well with any philosophy of science that wants to treat the history of science as evidential for its position. In its crudest form, the particularly stubborn triumphalist story of the Experimental Method (especially among scientists and laypeople) that deserves criticism goes something like this: The systematic application of “the scientific method” exposed the fatuous spiritual and mythical visions of a superstitious and ignorant world that dominated traditional cultures. These popular pictures vary in the details. Some pictures credit Galileo’s experiments with the rise of modern science, or Bacon’s canons of experimentation, or Newton’s tests of the three laws of motion, but the appeal to experimentation is common to them all, securing a triumphant trajectory over the forces of ignorance and medieval darkness. Some accounts even credit the rise of experimentation not just with the rise of the Enlightenment, but with the rise of Democracy itself. Experimentation is, after all, a tool of great equity, taming the need for the unlikely genius in explanations for progress, and replacing it with a methodological tool that turns anyone who wields it into an earnest discoverer of knowledge.

The role of experimentation in the theoretical progress of scientific theories still dominates introductory textbooks in the sciences. As satisfying as it is to present human history as a steady, triumphant march to the beat of the Experimental Method, however, much of its popularity is unwarranted. Experimentation can’t work magic, and it is powerless in the hands of ignorant theorists, undisciplined experimentalists, or aimless and unfocused investigators critical of models and theories who just want to get some experimenting done. The experimental method can’t turn crude views into polished theories, any more than alchemy could turn base metals into gold. Don’t get me wrong. Experimentation is an important part of science. But it is not equally important in every science, or throughout the history of science. Even among reflective scientists, opinions range widely about the importance of experimentation as opposed to simply hitting upon accurate theoretical commitments.

This is not especially surprising. Many great scientists, like Einstein, often displayed a comfortable contempt for experimentation, and romanticized the boundless flights of theory. But I think it is fair to say that most philosophers of science in the 20th century had a story to tell about the growth of science that was both triumphalist and incremental. They were forced to greater nuance in the wake of Kuhn’s view that scientific methodology is theory-dependent. In order to gain this nuance, constructivists softened the press of a mind-independent reality that shapes and regulates our beliefs, and focused more on the difficulties distinguishing that reality from the image of it constructed by our theories. But I think you get a more accurate view of the rise of science – and a more evidence-based view – if you combine the power of experimental methodology that everyone appreciates with the power of unplanned contingency or fortuitous falsehood that arrives in a package of theoretical commitments of varying but undeniable inaccuracy. Some views that need revising are held by the lay public, and some are held in the academic world. Received folklore in both worlds is always an extruded product that assumes the shape of our cognitive and social mechanisms as it passes through.

There is a good bit of psychological research on familiarity and fluency in many of its forms as carrying positive hedonic weight. The feeling that the world is familiar is the feeling of easy understanding, a feeling less likely to trigger scrutiny than the feeling of confusion and difficulty. Accurate portrayals of a reality not shaped by this assimilation to the familiar are often too unfamiliar, too nuanced or complicated, for easy passage. Puzzles of quantum mechanics can pose such obstacles, as can the delicate fit of arcane details of the nature and function of viruses. In order to steer toward the familiar, slogans or simple conceptions dominate – like the idea that the growth of science was caused by the rise of the experimental method – and these distortions of simplicity get a footing even in an academic world. After all, everyone is familiar with the rudiments of experimentation. A child easily learns that they are more apt to get certain kinds of parental concessions when dealing with one parent rather than another; they easily learn to manipulate factors and are watchful for outcomes. The idea that methodological success is prior to theoretical success is part of a romantic, idealized but reassuring portrayal of science as rising from the cool and neutral arbitration of rules rather than the hot cognition of theorizing.

This narrative is a salve for those who need to hear that theoretical differences don’t have to be a divisive force. This narrative makes seductive promises. Who doesn’t want to believe that, no matter what your religion, theory, political orientation, or cultural norms, you can be brought to agreement by respect for neutral methodological canons, like experimental design? Doesn’t everyone wish they had a sharp methodological tool that cut through any nonepistemic or truth-distorting forces? After hateful political movements, religious ignorance and contempt for science, who doesn’t want to believe that the experimental or “scientific method” acts as a neutral ideological filter on one’s metaphysics? This priority of experimental method has always been the logical empiricist’s dream, and it is an admirable and charming one. But it doesn’t have much historical evidence in its favor. Only the big leap in the late 1600s to much better chemical and physical theories could dramatize the power of experimental method. 

I tell this story in detail in Wondrous Truths. I am guessing that the (scientific) realist explanation for the success of science is the one most laypeople and academics alike hold by default. It is a familiar story. According to that view, the world is filled with things both observable and unobservable, and we can have knowledge of both. Science has progressed because we have acquired increasingly accurate information about mind-independent objects, in their observable and unobservable aspects. Put less scholastically: We know more and more about stuff that we can and can’t immediately sense. Part of this achievement rests on improved methods, but part of it also rests on having a good enough theory that the methods can get traction. If you are enthusiastic about the enlightenment possibilities of science, you hope that science can contribute not just to human knowledge, but to human well-being. After all, humans are part of the natural order, and contingent facts about our flourishing are discoverable upon earnest scientific inquiry.

But as soon as we consider how we might discover those facts, we are beset with complications. Some people believe that other groups of people, for example, are “genetically inferior”. Where does the science that settles this dispute get its special power to resolve it? Logical positivists – many of whom, as Jews, were of course subject to this racist attack – located that power of metaphysically neutral adjudication in observation: You may forever argue about metaphysical matters, but the hope was that you could always resolve disputes rationally, through controlled observation. This was the modern empiricist outlook.

Constructivists had a less straightforward view, one that didn’t really have Enlightenment values in sight. For constructivists, features of the world were more constructed than discovered, and the idea of progress itself was a fraught notion. These two alternatives to realist sketches of scientific activity, empiricism and constructivism, are probably both more attractive to academics than to laypeople. Both trade on attachment to very specific philosophical scruples. Philosophers of science inclined toward empiricism contend that knowledge extends only to observable phenomena, and then re-prioritize accordingly features like prediction, explanation, and confirmation. Constructivists like Kuhn, and related idealists like Latour, are more suspicious of claims of progress, recognizing the common whiggish belief throughout history (the attribution of whiggism before there were Whigs – prior to the 1700s – is, one supposes, also whiggish, but the term “anachronism” for this backward-looking attribution is fine too) that the current view is, if not the right one, the best yet. Many defenders would point out that figures like Latour were not really idealists, given how many practical concerns they had. We can’t know what is in a constructivist’s heart. And it doesn’t help when authors taunt with irony, hyperbole, and play. All we can know is that their claims are either idealist on their face, or they have idealism as a consequence. Many of the people that Kuhn’s Structures brought into the discussion were not philosophers, but historians, sociologists, and literary theorists.

So it is not surprising that they had not supplied themselves with the kinds of conceptual resources to avoid branding constructivism as ontologically or epistemologically relativist, or so idealist that it is hard to account for the intersubjective nature of consensus in developed sciences. The empiricist view sits uncomfortably with the fact that scientists increasingly unify not just observational but theoretical portions of separate domains, like geology and evolutionary biology. Geophysical evidence of “remagnetized” rock helps to date fossils, but without accurate theoretical commitments, we could not find the connection between the vexing observable strata location of the fossil and the event (say, a lightning strike, or regional heating) that produced the strata’s deformation. Professionals with special, distinctly philosophical scruples, like empiricists, may bristle at the idea of theoretical knowledge. But that is a drab fact about their philosophical training and the force of phenomenology; it often feels like you can’t know about things that aren’t directly perceptible.

Feelings aside, the question isn’t whether theoretical knowledge is possible. The question is whether it is required to explain the apparent theoretical progress of science. From the early Carnap to the later themes of van Fraassen, from the semantics of theoretical vocabulary to formal confirmation theory, much of 20th century philosophy of science was an ingenious but ultimately abortive effort to tell a non-metaphysical (some might say “anti-metaphysical”) story of scientific success, one designed to cleanse science of its “extra-empirical” features, or to de-fang their epistemic function. The aims of this metaphysical frugality were often admirable, to identify domains that could be neutrally adjudicated, and so yield results acceptable to all rational people. But they ultimately couldn’t account for the robust success of scientific vocabulary and theory construction that are only barely connected to observation. 

Laypeople and students have been taught that scientific progress is the effect of the more or less linear accumulation of observations, made possible by The Scientific Method – the kinds of canons Francis Bacon advanced and practitioners employed during the Scientific Revolution. In short: Hold all conditions constant and manipulate the theoretically key variables one by one. Then see if the hypothesized effect takes place. A fan of the New Science, Kant seemed to use the same method, narratively, in the Transcendental Deduction. The application of these methods is best known in science, but the principles of scientific method and experimental design have commonsense correlates in everyday life, and many of the rules we learn at our parents’ knees and only later formalize.

3:16: So that’s what you say doesn’t work. How did we get from feeble, languishing or misguided fields of explanation to actual science? Perhaps the move from alchemy to science is a good area to illustrate your argument. 

JDT: That’s exactly the question that every approach in the philosophy of science should address. As you put it: How did we get from there to here? That’s what Jared Diamond asks in Guns, Germs and Steel, about modern Western culture. That is what I ask about Western Science in Wondrous Truths: The Improbable Triumph of Modern Science and what Michael Strevens asks in his lovely book, The Knowledge Machine: How Irrationality Created Modern Science. There are other questions of course. But none of the existing alternatives hold up in the face of historical and methodological evidence. 

When attempting to explain progress in the history of science, philosophers of science have been hampered by a failure of imagination. Research on judgment and decision making shows that humans are not good at unassisted surveys of a full range of possible explanations for outcomes – many of which are historically contingent – and instead tell convenient stories with a bias toward the familiar that impose an artificial coherence on the grand details of scientific inquiry. With explanations for progress so constrained by cognitive preferences of familiarity and coherence, it is hard for the power of historical contingency to get its due, even when it is the best explanation for scientific progress. My perspective is a bit different. In order to see how the history of science is radically epistemically contingent, one needs to imagine clearly just how many factors, and kinds of factors, could have influenced the outcome but didn’t. When you do that, the actual path of history seems as uncertain as it actually is. While I don’t use this example in Wondrous Truths, I think it is useful to view any descriptively accurate history of science as just one possible outcome in a Monte Carlo simulation of histories. It may be that not every scenario is equipossible, but it is an interesting and potentially fruitful and fun task to figure out what conditions and factors were present – funding and resources, climate, nature and extent of education, travel and subsequent exchange of ideas – and apparently causally engaged when scientific inquiry advanced. Change one of these variables, and you might have gotten quite different outcomes.
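To make the Monte Carlo picture concrete, here is a minimal toy sketch in Python. It is not from Wondrous Truths, and every factor name, weight, probability, and threshold in it is an invented assumption; the only point it illustrates is how rarely a durable advance occurs when it requires an improbable, approximately true theoretical guess to coincide with favorable background conditions.

```python
import random

# Toy Monte Carlo over counterfactual "histories of science".
# All factors, weights, and thresholds below are invented for
# illustration; nothing here comes from Wondrous Truths itself.

FACTORS = ["funding", "stable_climate", "education", "exchange_of_ideas"]

def simulate_history(n_generations=50, rng=random):
    """Return the first generation (if any) in which a durable
    theoretical breakthrough occurs in one simulated history."""
    for generation in range(n_generations):
        # Background conditions vary contingently from era to era.
        conditions = sum(rng.random() for _ in FACTORS) / len(FACTORS)
        # A sufficiently accurate background theory arrives only by
        # a lucky guess, rare and independent of method.
        lucky_theory = rng.random() < 0.02
        # Experimentation is assumed to be widely available, but it
        # yields a durable advance only when paired with an
        # approximately true theory under favorable conditions.
        if lucky_theory and conditions > 0.6:
            return generation
    return None  # this history never gets its scientific revolution

runs = [simulate_history() for _ in range(10_000)]
hits = [r for r in runs if r is not None]
print(f"histories with a breakthrough: {len(hits) / len(runs):.1%}")
print(f"earliest: {min(hits)}, latest: {max(hits)}")
```

With these made-up parameters, most simulated histories never produce a breakthrough at all, and those that do produce it at widely scattered times, which is one way of picturing the claim that the actual history of science was radically contingent.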

And while it is true that only one history actually unfolds – only one outcome of the Monte Carlo simulation, the actual one – the written record and current understanding of those settings don’t come close to identifying the actual causal factors that produced that outcome. There have been modest efforts to formalize the kinds of features I am discussing, but however promising, it is not a popular approach. The start-up costs for entry are high, especially for those with a six-year tenure clock. So the drivers of innovation – young people – are not really incentivized to engage in high-risk theorizing and model-building with the integrity of their own field as the subject matter. Building the database of the history of science is a daunting task, and the proper conditions for inclusion and exclusion of particular details, not to mention the appropriate dimensions of progress, are fraught topics. So we philosophers of science are left with a wealth of descriptive detail, but a merely human capacity for computation.

By highlighting the challenges to quantitative approaches to the history of science, we can begin to appreciate how difficult, and necessary, it is to document the contingent features of the history of science that could explain its progress. Scientists often have their own explanation, usually one framed in terms of the power of experimentation and The Scientific Method. 20th century philosophers of science also found the phenomenon of scientific success interesting and important enough to try to explain. As Glymour pointed out long ago, the instrumentalist can always tell a story, after the fact, about how observation might have been the sole arbiter of scientific advance. All you have to do is eliminate all reference to the theoretical information that provided traction for the advance, and concoct predictive calculi that would do the same job. And social constructionists can always tell a story about how the world is not discovered but constructed, even as scientists seem forced to abandon the very theories they “constructed” – almost as if there is a mind-independent reality of specific causes that forces this revision.

Both of these approaches have commanded attention for many decades, but I find that scientific realism is the only account that offers an answer to that question that is even passably credible. After all, there are no prominent episodes in the history of science, certainly none prominently discussed, in which experimental design is responsible for significant theoretical advances without a sufficiently accurate background theory. For every dogged application of experimentation, for every patient permutation of a vacuum pump, there was an approximately true theory, whether arrived at by a lucky hunch or a series of medium-range fortunate guesses about how two (or more) things are causally related.

I developed these realist themes – unification, diverse testing, measurement, and science’s mercenary reliance on arcane expertise – and applied many of them to the psychological and social sciences, in my first book, Measuring the Intentional World. You can easily find informal precursors of canons of experimentation, like what Mill called “methods of difference”, etc. During the height of Islamic science from 950 AD to 1150 AD, you can find explorations by Ibn Sina and Al-Kindi that manipulate a variable under thoughtfully controlled conditions, with a clear expectation of what to conclude based on different outcomes. For Al-Kindi, it was shining light through a tiny hole in an opaque surface to see if the light exits on the same angle it enters. If so, we can say the light’s travel is rectilinear. This was a clever instrumental demonstration. But it did not do much to limit the kinds of things light could be. It could be a wave that fit the hole. It could be corpuscular. Corpuscularists of the time conceded that the atoms, individually, could be insensible. With a hopelessly errant metaphysics but a method that could tweak outcomes, investigators of the time could use experimental methods to produce useful tools, but that didn’t result in theories that could guide further theoretical advance. Experimental design or the experimental method is a tool, like a shovel. The shovel won’t help you grow crops if you only have access to poor soil.

3:16: Why didn’t earlier theoretical views lead to the sustained advances of science? So in chemistry and physics there were the ideas of Democritus and then much later Paracelsus, Gassendi and Islamic scientists who had theories – why were they dead ends? Is it your claim that modern science came about from a contingent leap from, say, late alchemy – and that there was no progressive move towards enlightenment? 

JDT: That’s too strong a way of putting it. Yes, the contingent leap resulted in a progressive move toward enlightenment – taking us closer to ever-more truths of increasing significance. The progressivism and Whiggism of my realism is undeniable. So we should ask instead about why, in the history of atomism from Democritus to Al-Kindi or Descartes, the corpuscular view of reality never drove intellectual progress. It wasn’t for want of the experimental method, which is evident in much pre-modern craft. 

As I mentioned, you can find canons of experimental design at work in the Islamic world of the 11th century, and in monastic occupations with herbal treatments even earlier. So, when we say that there was a contingent leap, we don’t mean that there was no progress at all. Indeed, as I just mentioned, there was progress in the application of experimentation, and some progress in engineering or practical solutions to stubborn applied problems, like irrigation and architectural engineering. But there was little movement toward truer theories of the underlying reality. It is not as though modern humans 20,000 years ago would have had to conceive of Newtonian theory, without schooling, written language, or concentration of resources, for a discovery or development to be historically contingent. And just because the intellectual and practical stage has some furniture on the set does not undermine claims of contingency. Historically contingent developments or advances occur because historically and culturally specific practices or views combine with problems the culture faced to produce durable solutions that provide the basis for further theoretical growth. In the early years of the 10th century, Muhammad ibn Zakariya al-Razi (Latinized as “Rhazes”) explored the efficacy of blood-letting as a treatment for what was probably meningitis. To do so he administered the blood-letting to one group of symptomatic patients while he, in his words, “intentionally neglected” the other group.

An appreciation of controlled experiment is common in the history of science, long before the Enlightenment and Newtonian Science. But if that fact shows anything, it shows how the scientific method – in the form of the experimental method – is an active partner in the creation of the most grotesque explanations and systematic intellectual visions when integrated with bad theories. Theories can be poor for many reasons of course, because they are made up of many moving parts. They can fail for reasons of scale, as when physical theories of local interaction are applied to large expanses of the universe, or when theories of individual psychology are applied directly to societies. Or they can fail simply because they are freighted with false metaphysical assertions that direct the content of their explanations, predictions, and integration with other theories. I refer to these latter theories crassly as “false” theories, even though they might harbor some commitments that are true. The main point is that experimental method plays a capricious role in the history of science, and the direction and magnitude of its caprice is an offense of theory, not of the scientific method. 

But the progressive move in Western science was certainly more sudden than the traditional, gradualist story suggests. The earlier atomist movements were dead ends. Theories of “minimal parts” from Ancient Greece to the Italian Renaissance varied widely. There were persistent disputes about whether these “atoms” could be further divided, whether they had a definite shape, and if so, what that shape was. With the answers to these questions not just undecided, but vacillating in unprincipled ways, it is easy to see why little could be gained by applying powerful experimental tools to such feeble theories. Put simply, the postulated variables of these theories either didn’t exist, or were assigned terribly inaccurate values. These brusque answers will likely offend historians of science and a certain sort of historically-sensitive philosopher of science. And it is no wonder. There are beautiful and subtle histories of science that describe figures like Newton, and complex moments like the Scientific Revolution. 

Training in the history of science cultivates comfort with a certain ontology-free pose that memorializes the perspective of the scientist or epoch, and marginalizes questions like whether what they believed is true. I am interested in knowing both, and when someone asks whether what Newton believed is true they deserve a clear answer, even a complicated one – one that respects the obvious intent of the fascinating question – rather than insisting that the question makes no sense. That pose is dishonest, and is most often at play when a philosopher wants to impress, to convey that the audience is lucky to have a philosopher on call. That kind of attitude can corrupt, until you end up arguing that evolutionary theory is not “a science”, or that it is somehow not the case that the decades-long increase in global temperatures is substantially human-caused, full stop. I think it is tiresome when grown philosophers use dialectical sophistication to argue for skepticism. But at least this is relatively harmless in a philosophy classroom.

But suppose a philosopher is asked whether evolutionary theory is a science. This is a natural and interesting question, with important intellectual and policy consequences. It would be perverse to act like this question is naïve just because a perfect answer would be nuanced or qualified. It would be even more perverse to treat the question as simple-minded because the identity conditions of science aren’t clear; the identity conditions of many, if not most, non-mathematical objects aren’t clear. That kind of grandstanding is reckless in policy cases, say, disputes about the legitimacy or priority of teaching evolution in public schools. “Oh, but what IS science? Can anybody say they know for sure?” This is the kind of intellectual cuteness that results in the Creation Story receiving equal time in a biology class – and not because we don’t know what science is, or because we can’t give multiple, credible answers to that question. So no, I don’t like to lead with scholastic complications when a good question can be given a nice, clear start.

Begin with the brusque answer, and then signal the nuance. Part of the reason for this strategy is that the epistemic reliability of scientific judgments is typically based on arcane, technical information that is far more complicated than an untutored audience can understand. So why patronize the audience by pretending that they can? There were many false starts in the history of science, good ideas that might have taken off if everything else had lined up properly. But in the early 17th century, the fundamental mechanical commitments of late corpuscular alchemy were most of what Boyle’s law needed. Had corpuscles been pointy, as Locke imagined, you would not have gotten approximately elastic reactions in a container. And if they weren’t hard (maintaining their shape through translation, among other properties of hardness), their trajectories would not have been so predictable. A lot of features of the postulated item had to line up as a necessary condition for advance, but other factors – many of them geographic, political, cultural, economic, etc. – had to obtain in order to cinch the advance. So if you are testifying in court you can’t just say some field isn’t science because its practitioners don’t use “The Scientific Method” or “The Experimental Method”. That declaration is for the neophyte who wants the history of science to yield simple lessons, or the jaded consultant who is conflicted, probably wanting their expert-witness payday or their moment in the news. The truth is much more complicated, and can only be illustrated by showing what people would have to abandon in order to doubt the accuracy of the theory of evolution, or challenge the evidence for human-caused global warming.

There are plenty of ideas that were brilliant, and never took off for contingent historical reasons. The ancient Chinese had a ceremonial toy called the South-pointing chariot that used differential gearing to keep a pointer mounted on the chariot facing in one direction (so that the rider could orient the pointer in a direction, and it would automatically sustain that direction over long distances). This feature also meant that the chariot would not slide out treacherously on a turn, simply because the differential gearing allowed the outside wheel to turn at a different speed than the inside wheel. As useful as this idea was, it remained a curiosity of Chinese toy trivia for over two millennia. “Commercial”, duty-grade chariots did not include differential gearing. Whether because drivers were expendable or craftsmen to hew the gearing were rare, differential gearing never caught on for chariots, or even for Conestoga wagons thousands of years later. Consider another example: Japan developed firearms well before the West, but they never caught on in a country that valued, among other things, the dignity of the Samurai.

3:16: Are modern scientific theories now so different in terms of scale, accuracy and integration that to even compare them with pre-scientific theories is to misunderstand just how different our current explanations are compared with every time before us? 

JDT: Yes, at least in terms of explanatory accuracy and theoretical integration. Humans have always had theories of great scale – and you can see the great philosophers, mathematicians, and religious thinkers of the day paying homage to an entire universe, forbidding and grand. And for those actually constructing models or theories of it, the models were often very elaborate. Cosmological theories of the planets and stars, theories of comets and of motion, humoural theories of health: in biological, chemical, and physical domains of nature, the models were often intellectually impressive. They were detailed, with lots of pleasing inter-relations to explain every perceived contingency. At the same time, they were often a lurid mixture of mechanism and magic, of practical concerns and religious spookery. Some other unhinged theories remain, but mostly in politics, not in science.

3:16:  Switching now to another issue you’ve written about, empathy has recently become a hot topic. Paul Bloom and Owen Flanagan have both argued that it isn’t always positive whilst Eric Schwitzgebel has defended it. So before asking where you stand could you first say what you mean by empathy – and why it’s something philosophers should consider?

JDT: I have a pretty standard conception of empathy. It is the ability to take the perspective of another person. Sometimes this takes the form of vicariously experiencing affective states of another. Is empathy something that philosophers should consider? I don’t think philosophers have any special insight into the nature of empathy – nothing more special to say about empathy than psychologists have already uncovered and expressed. But as in other domains, when philosophers have a psychologist’s knowledge of the theoretical mechanisms of social, cognitive, and perceptual processes, they can often construct unifying and integrative accounts of empathy’s character and function. You would expect Bloom, as a psychologist, to have that knowledge, and what makes Flanagan’s and Schwitzgebel’s work distinctive is that it, too, is well-grounded in empirical work. Of course, you can believe, with Bloom and Flanagan, that the effects of empathy are not always positive, and at the same time believe, with Schwitzgebel, that empathy has some effects that are very desirable. I think both are true.

The problem is, occasional positive outcomes are the story of all defective – sometimes disastrous – cognitive and affective processes: confidence, probabilistic reasoning, appraisal of future states, recollection of past events, and autobiographical memory, to name just a few. Some people might see promise in empathy and take heart in any sign that we can improve our empathic skills, and those individuals might try to promote the positive effects of personal empathy and stunt the negative effects. Just because empathic perspective-taking exercises might, say, reduce bullying doesn’t mean that trying to “feel” or imagine hungry children’s pain, or making that feeling a necessary condition of practical action, is the best way to reduce childhood hunger. It is better to make sure that they just get fed. So, empathic policies need to be in place as well as empathic people. And to that extent, insisting that an empathic process involve people currently taking the perspective of another seems priggish, not to mention potentially self-defeating. In this way, the goal of empathy is like the goal of courage, both focused on action, and both vulnerable to erosion by thought. A moral philosopher may opine that empathic action isn’t genuinely empathic unless a recognition of that feeling is the reason you are taking action. Perhaps that is understandable; fair enough. But while post hoc evaluations can prompt us to wish that we had greater knowledge about the complex suite of mechanisms and processes that realize empathy, we would still have no idea whether that knowledge would produce more empathic action.

Normative assessments serve to document how far our performance falls from a standard, but they can’t impose a psychologically specific implementation on empathic action. Empathic processes only have to contain empathic feelings in their history. Training people in empathy by treating personal empathic feelings as a moderating variable or essential ingredient in prompting or formulating empathic policy may balance the normative ethicist’s ledger, but it is not only unnecessary; it produces damaging delay. Discretionary deliberation and motivated (self-serving) reasoning are the enemies of empathic and courageous action. There is some empirical evidence that cognitive reflection may reduce the impulse to action. It is an empirical question whether acting from an awareness of duty makes action less likely, but it is an additional barrier to surmount. So my advice to moral actors is the same as that for epistemic agents: Reflection has costs. When there is little risk of loss and little risk of gain, you don’t need to assess the moral worth of an action. It is enough that you actually, on balance, alleviate others’ suffering, however unsupervised your motivation. So don’t go poking it with a stick.

Philosophers have focused on the subjective, “feeling” aspect of empathy. As I argued in 2009 in The Empathy Gap, empathy requires that you be able to take someone else’s perspective (basically, a kind of theory of mind requirement) and that you be able to, in a certain sense, feel another’s pain or joy by taking their perspective. In many typical cases, this noble effort devolves into cheap voyeurism or arrogant presumption, but even so, that is not all there is to it. The subjective, first-person focus of empathy tracks the psychologically local, but this feature discounts more distant, though no less important, suffering. “Psychic numbing” research by Paul Slovic and his colleagues shows that people aren’t good at assigning due weight to massive harms – like famine and genocide – typically because their sheer magnitude makes them impossible to accurately imagine, and so impersonal. This local focus of empathy and imagination makes us vulnerable to the “identifiable victim effect” – flooding awareness with sharp images of an individual’s pain. And yet the objective harm – suffering, death, etc. – is undeniable. That is just one kind of limitation of personal empathy. There are many others. Deborah Small has done important work on how our over-reliance on the personal aspects in empathy produces ineffective altruism.

In my view, grappling with the topic of empathy is important not simply because it exercises our moral imagination, allows us to imagine how another person (often quite different from us) might feel, or motivates us to help others. That’s important, but I think developing an accurate view of empathy is most important because it plays a role in what I think of as moral maturity. It is a hallmark of maturity in all forms that you respond to objective needs and do not allow subjective interests to dominate judgment about objective matters – or, if you do, that you do so for reasons you understand and can bring under your control. Think about the millions of older people who are poor and infirm in the United States, unable to easily care for themselves, and often expected to leave a home they called their own because it can’t accommodate the changes of aging. What a sad fact it would be about humans if they needed to experience an ongoing sense of sorrow for people who are ill or starving or infirm in order to try to do something about it. That kind of perspective-taking may be important for children to experience in their moral development, but surely not for normal adults. Adults surely know that poverty inflicts harmful and typically unearned suffering on its victims, and that infirmities impose undeserved burdens on people.

These are just two classes of empathic challenges. But this sad fact, for many people and groups, is nevertheless a fact. If you are a person of influence and you don’t have an interest in, or personal contact with, people who need your help, it is likely they will continue to be ignored by you. After all, everyone has natural limits on their time and attention, and a natural weariness with responsibility for others. But the responsibility remains; it’s not going anywhere. Why should the fate of those in need depend on other people’s time, attention, degree of personal concern, or prospective weariness? The whole idea of moral maturity is that you erect edifices more stable, less capricious, than sympathy, so that people are cared for when you are inattentive or otherwise occupied. So, while I think it is a good thing to be able to take another’s perspective, and occasionally to actually inhabit it, I don’t think it is necessary in order to act responsibly toward them. And practically, I think those sorts of philosophical exercises, and analyses, are just one more way that intellectuals of a certain kind put distance between themselves and the natural responsibilities imposed by a civilized culture. But I would go further, and claim that taking another’s perspective to share in their pain or joy, and so empathize, is a dangerous cognitive detour.

In The Empathy Gap, I illustrated just one of the dangers of perspective-taking when empathy orients us to be concerned with others’ suffering and joy, by discussing the death of Princess Diana and the temporary entrapment of Baby Jessica. We can certainly feel bad for the families, and consider lost potential. But of the events about which to empathize, these events rank low. Although perspective-taking is a psychological strategy used in some schools to do things like reduce bullying – by making bullies and bystanders more empathic – not every unfortunate event befalling people deserves attention, even if it is accessible to perspective-taking. In unworthy cases, it simply squanders attentional resources. Bullying is a problem, and it should be addressed. But there are many ways to address the problem. Empathy-based anti-bullying campaigns may work, but that is an empirical question. This is one reason I argue that empathy is a poor guide to responsible reaction. Its objects are often too temporally, geographically, and culturally local. And its impulses vary with morally irrelevant factors like the empathizer’s attentional limits and cognitive load.

In The Empathy Gap, I cite a lot of psychological research, now more than a decade old, about the limitations of empathy – like Slovic’s work on psychic numbing. (I have also taught a Judgment and Decision Making course for over 20 years that covers this material.) So what do you do if you want to reduce suffering or enhance well-being? Empathy may orient us toward caring, but not reliably enough about the things that matter. One goal of policy is to take care of people when we can’t (or don’t) pay attention. There is something fundamentally narcissistic about helping others because you are prompted by feelings of empathy for them. Helping people who are in a bad way through no real fault of their own is a norm of adulthood, of maturity. That’s the positive message. The negative message is that helping others shouldn’t be treated as an action prompted only by the conclusion of a moral argument. Although, for what it’s worth, it’s fine with me if there are people who work out arguments that show that certain actions are morally appropriate. Some domains of action are morally complicated in a way that reason clarifies; in an environment of excess resources like the one the U.S. enjoys, prohibitions on elective child labor, or support for basic needs programs, are not among the morally complicated cases.

What is morally suspect is that someone might need to be given a reason for them. As we can see in the pragmatics of explanation, even the mere fact that you would give a reason may imply to others that there is perceived doubt that this reason should relieve. And besides being unnecessary, it is strategically unwise to engage in this performative exercise. Because doubt can be used to delay final judgments against actions that undeniably harm others and offer no compensating benefit to those harmed (only greater wealth for the depraved actors – think tobacco industry), earnest and honest people often get suckered into giving reasons for positions that no longer require justification, or never did. And in the meantime, the suffering and injustice is prolonged, at great profit to some. That view may at first seem unduly dramatic, hyperbolic, or just counterintuitive, but my reasons are based on research about attempts to correct cognitive biases.

On problems prone to the effects of overconfidence, hindsight, anchoring, and intertemporal discounting (to name just a few), people make much better, much more accurate decisions when institutions either tell them what to do or package information in a way that limits their choices to better options. They perform worst when left on their own to engage in individual problem-solving exercises to blunt the effects of those tendencies. Why? A number of reasons. First, most people don’t know what to do to reduce the bias, so it does nothing to tell people to think hard about how to reduce their bias. Second, without the resources to quickly and reliably identify and correct systematic errors in reasoning, we become non-consenting dupes of self-interested actors and destructive institutions. Third, even if they do know how to reduce the effects of their own biases, most people lack the cognitive and motivational resources to initiate those exercises and see them through. Fourth, there are too many ongoing cognitive processes bent by bias to correct them by individual attention.

It is not entirely clear how this capacity for perspective-taking evolved. It is easy to see how it might be useful. We could even wonder why empathy seems to have such a short, parochial reach. But I don’t think there are any good theories yet that would explain that parochial reach. The most common ones seem to be mostly cheesy evolutionary psychology. Still, taking another’s perspective is very much a problem-solving or cognitive task, and once you know what matters to people, you could erect very effective social and institutional structures that promote well-being and reduce or eliminate suffering, whether or not you could sense their joy or suffering. But the second requirement, the shared feeling of joy or pain, is what motivates people to act on behalf of others.

3:16: You say that empathy often ‘goes off the rails’ and that ‘we shouldn’t expect too much intelligence’ from it. What do you mean? 

JDT: First, how it goes off the rails. Our empathy often reacts crudely to situations. Some of us are old enough to remember how Baby Jessica, a young child who slipped down a well and got stuck while playing in her back yard, held the nation spellbound before their TVs. It is understandable that a dramatic rescue of a baby stuck in a well would summon empathy for the baby, the baby’s parents, and their community. And we were all rightly relieved when she was saved. But we should notice what doesn’t get our attention. In the two days America sat immobilized before this vigil, 137 people died due to lack of health insurance, 8 women were killed by their domestic partners, 7 children were killed by handguns, and 38 children died in car accidents. The point isn’t that these tragedies too deserve a rapt public – though more sustained attention would be nice – but that these tragedies can be anticipated and controlled by resources and persistent awareness. And they aren’t. The same could be said of the days spent crying over, or, less dramatically, attending to, Lady Diana’s death. Again, a regrettable event. But however regrettable, these tears of empathy were not spilled for a wasted life spent in squalor, without choice or opportunity. 

For potential happiness unrealized, or well-being denied, there are millions of more appropriate cases that would better capture our attention, in places like south Chicago or, for sheer destitution, Somalia and Syria. Isn’t this kind of empathic focus – in the Baby Jessica and Lady Diana cases – a distortion of values? It is easy to blame a media that is hungry for the sensational, but that’s the point, isn’t it? There is a weakness in empathy that is easy to exploit. It is in this sense that “we shouldn’t expect too much intelligence” from empathy. The fact that the exposure-induced empathic impulse is powerless to resist this rubbernecking demonstrates the margin by which it can go off the rails.

3:16: Can you sketch some of the salient factors from philosophy and the science of judgment and decision making that inform your approach to empathy? Why do you argue that human psychology is a better ground for dealing with experiences that typically raise the issue of empathy than abstractions of political theory or economics? 

JDT: Economics has its idealizations, which are certainly abstractions. We may rely on economics for broad policy issues – large-scale questions of economic growth, labor markets, etc. Economists developed useful fictions to satisfy the imagination when explaining why such large-scale models used to construct policies might be realized by individual actors. In order to do so, they needed to construct idealizations that are accurate, and whose causal components have characteristics that don’t deviate too radically from the values assigned by casual observation and from disciplined psychological research. The economist’s penchant for mathematical formalism might be effective in silencing opponents trained in a less formal field. The economist’s “useful” fiction of the “rational economic actor”, however, bears no resemblance to actual human actors when faced with decisions. In fact, these fictions have remained central to economic training, despite the fact that the economists and psychologists who were influential in the creation of behavioral economics and won Nobel prizes in economics for it (Daniel Kahneman and Richard Thaler) describe the ham-fisted economists’ use of those fictions with a kind of affectionate mockery (Thaler calls the economists’ atomistic and idealized rational actors “econs”).

But most economists know nothing of psychology, and there is no institutional impetus within orthodox economic training, or any evidence of urgency, to learn anything about it. Unfortunately, psychological knowledge is necessary to accurately predict and explain moral hazards, and other policy-relevant factors that economists routinely pronounce upon. Such pronouncements often have implicit normative force. The same is true of Philosophy. Psychology is in a much better position than Philosophy to make normative recommendations that are at once empirically well-informed and responsible. Psychology has begun to determine the natural contours of actual human capacities, like memory and problem-solving, and so is better able to make intellectually responsible normative recommendations about how people ought to behave if they want to improve human welfare. In The Empathy Gap, I proposed and described a number of “outside strategies”, as I called them, which don’t require sustained attention and commitment to achieve empathic goals. After all, we have a limited supply of focus and stamina to sustain these plans.

So instead, we can impose policies that do so automatically, ensuring people will complete their empathic trajectories while we all quietly sleep. Automatic investment programs are one such example. Rather than purposely moving money into investments every month and having to decide not to spend on tempting discretionary morsels, these firm policies, executed beyond our immediate watch, reverse our tendency to discount the future. This benefits our future self. These policies come in many forms. We might pre-commit to supporting anti-poverty measures in countries that have cultural practices we dislike and so otherwise ignore, but they have children to care for all the same. An outside strategy would evade the friction in sending that support. It is not that it is impossible for humans to empathize at a distance, but doing so would require extraordinary self-control and stamina. After all, we have our natural cognitive and emotional imperfections. These outside strategies streamline targeted well-being projects, and avoid wasteful and sometimes tragic defection from them. When you can find simple and uniform characteristics to track, it is easier to formulate effective policies. For example, Ebonya Washington’s work shows that, if you are a male legislator, how you vote (Liberal vs. Conservative) on women’s issues is a function of how many daughters you have. It is not surprising that a father with daughters becomes familiar with issues of how, say, employment and health care affect his daughters’ lives.

But what is more important here is how incapable legislators seem to be of empathizing without that personal connection. The power of personal connection might also explain why a fiscal hawk like the former Wyoming Senator Alan Simpson, whose family was touched by mental health challenges, would advocate for government funding of mental health research and treatment. Of course, you don’t have to be a whale to write Moby Dick, as the adage goes, but a little personal familiarity with the conditions under which the object of empathy exists would be nice. In this case, you might hope that older male legislators would concede that they have little idea of what it’s like to be told by strangers that you will have to carry a fetus to term, or that you will have to arrange for child care if you want to work. Flights of empathy just aren’t up to this task, even if a policy plan to address this issue could be. If we know (as we do) that the attitudes that drive voting are moderated forcefully by situations, legislators should be hungry before they vote on SNAP funding. They should be in a “hot” state so the “impedance mismatch” in emotional states is reduced between the potential empathizers and their objects. Let the legislators contemplate denying children food when their own bellies are gnawing at them.

It is a strange attitude toward citizenship to think that the psychological theory that structured, say, legislative voting during the Constitutional Convention is still the best one we have. We know it isn’t. We know from research on agenda effects that the mere position of an item on a voting agenda will have a potent effect on its level of funding. So alternative voting structures based on empirical research should be taken very seriously. If we know that “merely” procedural features affect the valence of the vote, or that our judgments are captive to specific visceral drives like hunger, thirst, sexual arousal, anger, or tiredness in the moment, then why do we treat lunchtime (or other conditional states of the deliberator) as irrelevant to responsible voting? This “drive-coordination” of legislator and constituency when they have common needs may not guarantee proper prioritizing, but perhaps it will discourage legislators from discounting the severity or importance of hunger and other suffering from lack of basic needs. (Ultimately, this is an empirical question, of course. It might turn out that hunger makes legislators more impatient, casting their lot irresponsibly just to get it over with; these are empirical hypotheses that require testing.)

It isn’t that these are empirical tidbits philosophers should merely know about – some of them already do, as a result of their political interests and activities. But the underlying empirical insights seldom inform their philosophical treatment of the issues. Of course, such revisions merely nibble at the periphery of an electoral, legislative, and judicial process in the U.S. that increasingly oscillates between ineptitude and corruption, but they might be worth considering.

3:16: Can empathy be preventative given that much of the time it is reactive? 

JDT: Empathy can be preventive, but so can agoraphobia; prevention is at best an instrumental virtue, and we should make sure it doesn’t have, on balance, undesirable effects. Just as caution is not its own reward, prevention is not a virtue if it carries offsetting opportunity costs. If empathy always had the effect of addressing a problem in a stable way, like stimulating people to forge a hunger policy, then its hunger-preventive effect would be good. But the problem is that we really don’t have a way of assessing the balance. Sometimes painful empathizing will inspire you to create moral hazards, spending personal or private funds on immediate or short-lived correctives that paper over the real problem. So empathy can leave individual citizens to choose whether to spend their time addressing the immediate and desperate needs of a local unhoused population, or organizing a voting bloc to influence representatives who will endorse civilized basic needs policies for unhoused people in the longer run.

With limited psychological time to address the basic needs of others, you might not be able to do both. Choices can have perverse or unintended consequences. We might abandon the hungry homeless tonight due to constraints on time, or continue to hand out blankets and sandwiches to needy folks in Humboldt Park, and in so doing, reduce the pressure that legislatures feel to address an obvious need. One response is to do both, and I would invite people inclined to that suggestion to try. But it might produce a moral hazard to do so, and we should recognize that charity is no substitute for a binding policy to support human well-being. 

3:16: So how do you respond to the criticisms levelled at empathy by the likes of Bloom and Flanagan? 

JDT: As you can see from the examples I used above, The Empathy Gap was unabashedly critical of placing the details of the empathic process at the center of philosophical disputes about theories of mind, and of treating empathy as providing specific guideposts for moral behavior. My own view is that most of the challenges presented by the short psychological and geographic reach of empathy must be solved with policy. I imagine that there are special purposes for which discretionary empathy training can be useful – perhaps in addressing bullying in children, for example. But no normal person has problems appreciating the pain of losing a loved one to handguns or a child to avoidable diseases of poverty. The distress that attends a life of hunger, of sexual abuse, harassment or violence, is plain to everyone capable of undistorted experience. Every normal person has the empathic ability to appreciate these sources of pain.

At the same time, however, the many corrupting psychological levers of human grievance are pretty well understood by this point in human history and, however dreadful, it is equally plain that if you use those levers reliably enough, it is child’s play to get people to act against their own interests. But our attention and memory are limited, and many of us are cultivated to be impatient with suffering we don’t immediately understand. We are always sequestered from some demographic or other. Often wealthy mental health advocates are separated from the impoverished mentally ill. Public education advocates are blind to the troubles of, say, temporary workers. And so on. It is not that these individuals are necessarily experiencing an empathic failure. Far from it. They are merely in a situation that limits their cognitive and emotional resources. Sometimes, of course, the isolation is deliberate. More important, most of these problems concern the common good, and this is the connection to policy. If our lives and our society aren’t organized to promote empathic action, then policies must be adopted and, where they aren’t, imposed.

3:16: Normative issues raised by the psychological literature arise out of this, but they are a broader interest for you. You have attempted to provide resources for analytic epistemology to help with these issues. Can you say something about the key elements of your epistemological framework that support the recommendations of what you’ve called ‘Ameliorative Psychology’?

JDT: Sure. This is work I did together with Michael Bishop, the philosopher of science and epistemologist at Florida State. Ameliorative Psychology is the name we gave to the psychological research devoted to improving reasoning. Dawes, Meehl, Nisbett, Tversky, Kahneman, Hastie and Gigerenzer are just a few of the figures who have contributed to this movement. We begin by noting that critical thinking courses taught in philosophy departments are advertised as vehicles to improve reasoning. But typically they are free of any reference to the psychological literature, or when they do refer to the empirical literature, they don’t engage with it in any significant way. The empirical literature, unlike the philosophical publications on critical thinking, has actually gone to the trouble of testing strategies that improve reasoning. Philosophers routinely claim that their job is to say what we ought to believe and how we ought to act. Scientists, psychologists included, merely describe how beliefs are formed, and are apparently powerless to make the normative claims philosophers make. We tried to show that philosophers don’t have the market cornered on normative work, and that they often engage in extremely speculative, elaborate, and inaccurate schemes that purport to describe the way people reason.

3:16: What is Strategic Reliabilism? How does it help to adjudicate disputes in psychology that are normative epistemological disputes about the nature of good reasoning? 

JDT: Strategic Reliabilism is a doctrine that states that excellent reasoning consists in the efficient allocation of cognitive resources to robustly reliable reasoning strategies, applied to significant problems. Philosophers often say that epistemology is about what or how we ought to believe, and that their critical thinking courses are about how people ought to reason. But if you believe that ‘ought’ implies ‘can’, it is not responsible to tell people they ought to do stuff their processing mechanisms can’t. And if you don’t have a psychologist’s expert knowledge of that processing, you aren’t even presenting a responsible ideal to shoot for. We already know that we should try to believe what’s true, that we should form beliefs in light of the total evidence, etc. Telling people what to do when the evidence says they can’t isn’t philosophy; it’s taunting. And we know they can’t because empirical research on perception and cognition, not intuition, tells us so. And when you know little about the natural requirements of human well-being, you aren’t in a position to tell people what they ought to believe, even when your goal is truth.

Recommendations like that must be made in light of empirical knowledge of human motivation and well-being, not just of cognition. Even the brightest and most alert person will not, say, automatically pick the best comprehensive strategies for pursuing the truth. That is a tireless and never-ending task that must navigate limits on attention, the need for rest and recreation, nutrition, social adjustment for common pursuits, and so on. The structure of pedagogy in philosophy, and perhaps philosophy itself, will have to change a lot in order for philosophy courses to make good on claims that philosophy will improve reasoning. Strategic Reliabilism was Bishop’s and my name for that new pedagogy. In it, we tried to outline many of the components that have to be aligned in order to improve reasoning. Many of these standards, like robustness and efficiency, are widely regarded as non-epistemic, even extra-philosophical. Some philosophers responded that we weren’t doing epistemology. If so, all the better to leave behind the epistemology familiar to them. The inclusion of Strategic Reliabilism’s many factors, all in the service of intellectual and practical improvement, makes philosophy a much richer, more useful, and exciting enterprise.

3:16: Why is it superior to Standard Analytic Epistemology? 

JDT: A bunch of reasons. One reason that SAE is inferior is that it is based on a variety of descriptive theories of how our minds work. Those descriptive theories are crafted by philosophers, and they are substantially false. This should not be surprising. The mind and the world are complicated things. It would be shocking if you could presuppose a descriptive theory of the mind without the counsel of science – not to mention without its methods – and expect to have an intellectually serious epistemic enterprise. But that is what most epistemologists have done. What is remarkable is how little effort has been devoted to justifying this institutional neglect of science. But we have grown used to it. In a world in which philosophers take their intuitions, considered or not, as the touchstone of theory-construction, if not truth, it seems that the burden is on others to explain what is wrong with this assumption. This burden is a key feature of what Mike Bishop and I have described more broadly in recent work as “the epistemology of the con”. Just to be good sports, and to stay in the game, fully naturalistic philosophers too often carry this burden, even after all the lessons have been learned from the historical and social studies of science.

So I believe the impatience of naturalistic philosophers is understandable: they abandon issues blessed by reigning epistemologists and take up theory construction in psychology, at the foundations of the cognitive science of judgment. But to oblige: Basically, the descriptive theory that underlies SAE is, while not a monolith, generally too simplistic, when it isn’t just false. Those in the grip of SAE too often give priority to the way things “seem” to them, or what we “clearly want to say” about certain kinds of scenarios. Adherence to our most jealously protected intuitions is no doubt comforting, and it fosters the feeling that, with enough work and attention, we can properly calibrate our beliefs and overcome our cognitive imperfections, including the overconfidence bias that leads us to errantly trust our intuitions. It seems to them that if you can find a single sunk cost that is redeemed, you can conclude that redemption is not impossible, and so that the sunk-cost fallacy isn’t a genuine imperfection. Thus our intuitions have been saved, no matter how cognitively costly the much more frequent, actual, failed redemptions may be. We find that epistemologists in the English-speaking world don’t show much initiative in making the methods and findings of cognitive science constitutive of epistemic norms in the way that, say, philosophers of biology have made the practices and findings of biology constitutive of norms in philosophy of biology (say, when appraising proper function or taxonomy).

The hard-won scientific findings about the natural contours of cognition provide only faint constraints on the distinctly philosophical pronouncements of SAE. Philosophical accounts might hedge their claims, but their ceteris paribus clauses are self-protective, not evidentially probative. Without documenting mechanisms, the theory asserts that those mechanisms are capable of processing whatever information is necessary to meet the normative theory’s routine demands, whether that is withholding belief until you are certain X is true, until you consider the total evidence, until you consider all of the available evidence, or until you are justified, to name just a few common normative themes. We could not even begin to satisfy these normative aims unless we had perceptual and cognitive mechanisms far different – much more powerful, expansive, and controllable – than the ones we in fact possess.

Of course, the last 40 years in epistemology have been a story of responsible naturalistic philosophers and psychologists pointing out the falsehoods in the implicit descriptive theory of Standard Analytic Epistemologists, and of the SAEers insisting they didn’t need that part of a theory, offering an equally unestablished alternative. SAE is not only burdened by false descriptive theories; its methodologies just aren’t well-suited to responsible or even courteous intellectual inquiry. It just isn’t collegial to run your research program by saying “Here is something I thought of, now prove me wrong.” There is too much that your opponent has to know in order to refute you, and nothing that you need to. But given the intuition-pumping method that has dominated SAE, a philosophical advocate claims success when their imaginable scenarios aren’t demonstrably false, or apparently inconsistent with intuitions, or with some stuff we might believe. At some point, you just have to say plainly that this method is a recipe for keeping a corner of Philosophy busy, but it isn’t an intellectually reputable way to conduct yourself.

Which brings me to the way we make the argument. As Michael Bishop and I tell the story in Epistemology and the Psychology of Human Judgment, Standard Analytic Epistemology (SAE) is our name for the dominant approach to the theory of knowledge in the English-speaking world. It is typified by a bundle of techniques, like the method of (often wildly counterfactual) counterexample, and reliance on cultivated “intuitions” as an innocent and defensible basis for theory-building. We can trace the persistence of SAE in contemporary epistemology to earlier approaches like versions of foundationalism (Chisholm 1981, Pollock 1974), coherentism (BonJour 1985, Lehrer 1974), reliabilism (Dretske 1981, Goldman 1986) and contextualism (DeRose 1995, Lewis 1996). Bishop and I tried to expose the many drawbacks of this approach – how conservative “intuitive plausibility” is, and how ignorant of and insensitive to the facts of human psychology its abstractions can be, facts that are ignored or denounced for deviation from favored philosophical standards. Standard Analytic Epistemology still dominates methodological procedures in that field, and the people committed to it perform work that exemplifies a certain brand of cognitive effort: identifying a problem, characterizing it, extracting a core feature of the view, providing counterexamples, defending their fidelity as damaging counterexamples, and proposing and defending the triumphant new twist on justification, etc. This format has the formal features of hard cognitive work.

But people often confuse cognitive work with the prospect of improved outcomes. Unfortunately, solving problems isn’t like lifting weights, in which any work will bring improvement. Here, the framework of cognitive work is reflective equilibrium, and SAE is too sanguine about its power to yield progress, however good it is at modelling a certain kind of cognitive effort and discernment. In fact, the observation that contemporary epistemic methodology depends on the practice of reflective equilibrium is one of the most damaging criticisms of SAE. Indeed, reflective equilibrium is only reliably capable of delivering improved outcomes when your background theory or principles (or the intuitions that the cases are supposed to pump) are already good enough to drive the improvement, rather than to simply vacillate among weak or unconstraining hypotheses. People specializing in quantitative methods like statistical modelling, meta-analysis, and program evaluation understand this point. When your theories are bad, it is really difficult to improve them with methodological techniques that are theory-dependent, like reflective equilibrium.

That’s where our critique of epistemology in Epistemology and the Psychology of Human Judgment converges with my explanation for mature scientific success traced in Wondrous Truths. On the classic view, Nelson Goodman described reflective equilibrium as a process that involves aligning our judgments about particular instances with our judgments about general principles. As he put it, “The process of justification is the delicate one of making mutual adjustments between rules and accepted inferences; and in the agreement achieved lies the only justification needed for either” (1965, 64). There are “narrow” and “wide” variants to be found in Goodman and Rawls scholarship. Narrow reflective equilibrium is the process of bringing our normative judgments about particular cases into line with our general normative prescriptions and vice versa. Wide reflective equilibrium differs from narrow reflective equilibrium by including our best theories in the mix. So wide reflective equilibrium is the process of bringing into alignment our best theories as well as our normative judgments about particular cases and our general normative prescriptions (Rawls 1971, Daniels 1979). Some philosophers have discussed reflective equilibrium as a way of refining the accuracy of beliefs, by engaging in a process of mutual evaluation. On this version of reflective equilibrium, the individual commitments that comprise a system of belief must be reasonable in light of one another, and the system as a whole must be at least as reasonable as any available alternative in light of relevant antecedent commitments.

You can see how unhelpful this exercise is if you hold a theory that is not even radically, but still substantially, mistaken. Say that, as a good medieval doctor, you hold a humoral account of health, and you are earnestly trying to improve your treatments. You would satisfy the dictates of reflective equilibrium by examining each individual commitment in light of other commitments. And indeed medieval doctors did just that. They expected that diarrhea, due to an excess of humidity, would be attended by other imbalances of the humors, and this belief moderated other beliefs, such as the belief that a blood-letting would be beneficial. Another condition produced by an excess of humidity? Stroke. But it is unlikely that you can improve or rehabilitate a theory this poor by engaging in this kind of weak and impressionistic balancing of falsehoods. Something like the internal checking of reflective equilibrium might be in some way useful if your theories were already true, but it is no explanation of how poor theories turn into good ones. In fact, we don’t have a good story of how that transformation ever happens, though we have good explanations of how we came upon a good enough theory to make it better. What Standard Analytic Epistemologists offered instead is strategically interesting. Rather than empirically defend the elevated status of their intuitions, they elevated their intuitions, primitive and cultivated, warts and all, to a kind of “stasis requirement”, allegiance to which was expected for participation in the epistemological franchise.

According to the stasis requirement, if an epistemic theory forced us to radically alter our considered epistemic judgments (e.g., our epistemic judgments in reflective equilibrium), so much the worse for the theory: “[I]t is expected to turn out that according to the criteria of justified belief we come to accept, we know, or are justified in believing, pretty much what we reflectively think we know or are entitled to believe.” (Kim 1988, 382) The stasis requirement elevates a documented cognitive frailty to an unduly optimistic principle, allowing nonnaturalistic philosophers to advocate for the easiest lesson to learn: Trust your biases. It is easy to see how the familiar and self-congratulatory nature of reflective equilibrium had an appeal, but it is hard to understand why, in light of all we know now, it has endured. Philosophers, of course, can sometimes recognize that they are prone to error, but if you don’t read empirical work you might not know that admission is not absolution: It is hard to reverse or de-fang biases even when you try to. We don’t have the cognitive bandwidth to track how often we commit them (in part because we wildly underestimate the number of judgments we make per day), we don’t employ disciplined debiasing responses, and the outcomes of any such attempts are unsupervised and unrecorded.

The trust in reflective equilibrium that philosophers display, via the stasis requirement, is the result of poorly designed tests that are sloppily implemented by motivated reasoners. If we treat a statistical test with low power as undiscerning, we don’t make the theory better by adding further undiscriminating tests to the mix. That’s, in effect, why you don’t use t-tests as a search tool. As I said, reflective equilibrium might help to improve a theory if it is already good, but so will other methods, for example, adopting the conservative heuristic that a hypothesis is more likely to be true the more it is like those of the existing theory, or, dare I say, inferring the approximate truth of other hypotheses the theory generates. These methods, or patterns of inference, could result from bias or laziness, but they would likely be reliable once a theory is good enough, and horribly unreliable if your theory is bad. Perhaps reflective equilibrium is intended as an idealized method of improvement, trustworthy no matter the contingencies of circumstance. Einstein may have used thought-experiments too, but merely using thought-experiments doesn’t make a philosopher naturalistic any more than using a hammer makes you a carpenter.

For most of the history of science, science has failed because of, rather than in spite of, the use of reflective equilibrium. But testing its reliability requires sophisticated tools – prospective designs that have a memory – not casual observation, not a feeling that it should work or that it makes sense to us. The main application of reflective equilibrium has been in social and political philosophy, as a model of how to deliberate about and select an appropriate moral, political, or social action, or a hopeful description of how deliberation goes when done well. If we had better theories of social action, perhaps reflective equilibrium could work its magic. But that’s not where things stand now, in philosophy and many other disciplines. Until norms in philosophical culture dislodge this practice, reflective equilibrium – despite its severe limitations as a methodological tool for improvement – will continue to be a preferred path for performing deliberation. This is not to say that SAE is altogether useless. I think the cognitive engagement and discernment SAE requires might stave off cognitive decline, for example. But exploiting it for that reason should be a personal choice, not a professional requirement, and it should be made in the context of preference rankings that include other cognitively invigorating activities like Wordle, crossword puzzles, regular social conversation and the engagements of occupation and leisure.

3:16: Your overall approach here and throughout your work is to construct naturalistic philosophical theories – in epistemology and in other areas. So can you end by saying what naturalism means to you in this context and what other philosophical traditions philosophical naturalism eradicates?

JDT: In fact, I have a book that came out a few years back based on the Romanell Lectures that takes up this issue – All Talked Out: Naturalism and the Future of Philosophy. My version of Naturalism simply states that our foundational, “philosophical” views are continuous with our best methods and contents of science. There, the argument is that philosophy’s survival depends on assisting scientists with the normative tasks that scientists have always engaged. By contrast, working on their own and in their typical innocence of the special sciences, philosophers have little chance of making normative recommendations that are both useful and responsible to the evidence. You can spell out Naturalism as a doctrine, and then argue that it violates all sorts of conceptual or logical constraints due to circularity or self-defeat. That is the approach most anti-naturalists take. I treat Naturalism as an empirical hypothesis whose confirmation is about as complete as doctrines like physicalism – that everything in the universe is entirely physical. Even the people who pose as physicalist critics are committed to the doctrine through their reliance on scientific practices that depend on it. The same is true of naturalism. 

Now, in an effort to get clearer about the nature and status of naturalism, some philosophers might challenge it. On what authority is Naturalism grounded? Not on Naturalism itself, lest it be circular. Not on some other doctrine, lest the same question be raised about the grounding of THAT doctrine, and you face a regress. The people asking for the definition are typically looking to explore or test the definition’s deductive consequences, or the argument’s underlying structure. Is the argument for naturalism circular? Is it abductive? Are all abductive arguments circular? If naturalism is openly empirical, isn’t it inconsistent with naturalism to hold a doctrine that rules out discovering it is false? The truth is, taking these arguments at face value is a sucker’s bet. I know that the rules of philosophical engagement tell us to play nicely with unconvinced members, focusing on argument rather than motivation. But those rules presuppose that those interlocutors are practically persuadable – that they are not ideologues, not in the grips of a truth-distorting view, not strategically pursuing professional advancement, or dishonest. Few philosophers thought it was time well spent to argue with the internal “findings” of a tobacco industry that was “in the business of doubt” (as one industry executive put it); why, then, believe that philosophers are practically persuadable about the prospects of Naturalistic approaches that might burn down their philosophical franchise?

Again, I think it is a strategic mistake to take the dare. The naturalist’s concern is to use reliable methods, appropriate to the task, to find out what’s true. The chances are great that your opponent, in cases like this, is more focused on philosophically conservative projects that defend and expand the nonempirical branding of traditional philosophical analysis. I used to think that the different waves of conceptual analysis in metaphysics and philosophy of mind – perception, intention, grounding – were the result of the individual psychologies of the participants. Whatever the cause, the questions posed and methods used are not laser-focused on the goal of enlightenment, but on dispatching an adversary so that they can sustain their illusion. Now, as an obliging naturalist, you can dutifully respond to your adversary’s charges, in the best of philosophical spirit. But that has already been done, in detail, many dozens of times in professional venues. If you love to spend your time learning new things and connecting with other fields, that kind of careful, patient, and endless public service produces opportunity costs that are punishingly high. For the same naturalistic reasons I reject reflective equilibrium as a general model for moving analyses of knowledge forward, I also reject it in high analytic approaches to moral theorizing.

Contemporary ethics has languished in perfunctorily professional treatments of moral issues, administratively freshened at regular intervals with a sprinkling of new terminology and examples. This procedure is in the service of proposals that are initially underqualified and, ultimately, lazily tested. Instead, ethics should be understood as a broad foundational study of the natural requirements of human well-being, not a finger-wagging exercise delivered by a sector of the educated elite. And for creatures that are intensely social, that foundational study will include good theories about the function of welfare-enhancing institutions and social norms. Much of ethics, as currently practiced, already tries to address these issues. But it does so in the traditional way, by lurching and bumbling through policies and evaluative constraints favored by our intuitions rather than extracted by durable science. For example, though widely discussed elsewhere, there is much important scientific work on Subjective Well-Being that is either under-utilized or peremptorily dismissed by philosophers. So without the benefit of this scientific perspective, familiar normative constraints in Philosophy continue to impose psychologically and socially unrealistic demands on humans, ignoring scientific evidence about the limits on our attention, ongoing deliberation, and self-control. If we follow the idea that ‘ought’ implies ‘can’, then theory-building in ethics requires a deep knowledge of, and respect for, scientific psychology.

The version of naturalism that I defend is nothing special or fancy. It will be familiar to anyone who has followed the disputes since Quine’s pronouncements in its favor. Naturalism is not the official doctrine of science in the way that Snickers is the official candy bar of the Olympics. Rather, Naturalism is a working hypothesis, an expectation vindicated by experience, a thesis extracted from contingent facts about the success of science. It is not a separate doctrine that commits its holder to a belief that goes beyond the methods and practices of science. It is a contingent truth, like the truth that the sun will rise tomorrow. What is distinctive about my view is that it rejects the common philosophical dogma that psychology is somehow exiled from the world of normativity.

3:16: And finally, are there five books you could recommend to the readers here at 3:AM that would take us further into your philosophical world? 

JDT: There are so many fine books, both in and out of philosophy, by authors like Stephen Stich, Jared Diamond, Philip Kitcher, Michael Bishop, and Peter Godfrey-Smith, to name a few. Let me just mention these:

Value in Ethics and Economics, by Elizabeth Anderson (but read anything by Anderson, really) 

Well-Being, by Michael Bishop 

House of Cards, by Robyn Dawes 

The Knowledge Machine, by Michael Strevens 

The Behavioral Foundations of Public Policy, ed. by Eldar Shafir