Interview by Richard Marshall
'I think that learning the psychological origins of our moral judgments can sometimes be rightly disturbing to our moral self-conceptions.'
'As philosophers since Hume have incessantly pointed out, you can’t squeeze a normative premise out of an empirical description. The normative stuff comes from somewhere else, not from within science. However, if we already hold some normative standards, then science can help us identify how to make use of them in our personal ethical deliberations.'
'I follow philosophers like Simone de Beauvoir and Stephen Darwall in emphasizing that ethical inquiry is essentially second personal. Ethical claims are calls that moral agents make upon other moral agents to limit their free choices in particular ways. When you make universal moral claims, you are effectively issuing demands upon the free agency of everyone.'
'Suppose you train to hold your breath for a long time. Then, one day, somebody gets trapped in an underwater cave and your skills are desperately needed – but instead of actually performing the rescue, you just stand there, showing off how long you can hold your breath. I think that’s what philosophers do sometimes.'
'Some people claim that microaggression victims should just ‘grow a thick skin’ and ignore the insult, but what this misses is that microaggressions aren’t just random insults. They are systemic; they happen to the same people (people of color, women, LGBT people, people with disabilities) over and over again. Ignoring repetitive and systemic insults means that they’ll never stop. So if we want to make the world more fair and decent for everyone, then we can’t demand that marginalized people just accept being insulted over and over again; we need to pay attention to microaggressions, because they are clues about how to break oppressive patterns.'
Regina Rini is a philosopher interested in moral agency, moral disagreement, the psychology of moral judgment, partisanship in political epistemology, and the moral status of artificial intelligence. Here she discusses why moral psychology is disturbing, debunking debunkers, science and morals, the second person perspective, whether philosophers are experts, philosophical training and intuitions, abortion, microaggression, fake news, and the importance of understanding ourselves as having a relationship to others in ethical reasoning.
3:16: What made you become a philosopher?
Regina Rini: I came to philosophy through politics and then psychology. When I moved to Washington DC for college, I was planning a career in government. I wanted to be a legislative staffer. Not a politician, but the person behind the scenes who actually negotiates laws. I had a somewhat naïve, ‘West Wing’-ish view that policy-making was about finding compromise among people of good faith. But when I started seriously studying politics, I realized so much of it is just manipulation. Campaigns are about targeted messaging at demographic slivers and strategically misrepresenting the opponent’s point of view. Worse, a lot of day-to-day governance is only positioning to win the next election. Once I understood that, I became very disillusioned about politics. What I’d naively thought of as an urgently important task – reasoning together about how to live! – turned out to be just a series of grubby power plays. And, in the way one does in wounded youth, I overgeneralized my disillusion: I became suspicious of all normative topics. Not just politics, but morality as well. I suppose that by the end of college I was one of those bitter logical positivist types.
Along the way, I wound up in psychology. Once I’d given up on taking normative reasoning seriously, I focused on exactly the causal forces that politicians manipulate. I was obsessed with the idea of phenomenal consciousness: what is it, scientifically speaking? How does our experience of thinking correspond to all the stuff going on below conscious awareness? I worked in a neuroscience lab after college, but I was more drawn to the theoretical side. So I started a Philosophy PhD specializing in the philosophy of cognitive science. It took years, really all through grad school, before I came back to taking moral theory seriously again. I ended up writing a dissertation on the cognitive science of moral judgment. And now, a decade after finishing my PhD, I’m returning to politics. I’ve just started writing a book about the importance of sincere public disagreement in democratic culture (and how that’s endangered by social media). It’s been a weird, long loop through various ways of thinking, but I guess I’m now back to worrying about the same things that got me started.
3:16: You’re interested in ethical dilemmas and the philosophical issues that arise from them. When it comes to morals, some philosophers argue that psychology debunks morals. Perhaps Nietzsche is a key figure in this move. You disagree with their arguments, but first can you sketch for us what they are claiming - how does psychology debunk morality? Why is moral psychology disturbing?
RR: Some philosophers claim that knowing the causal origin of a moral judgment – e.g. what brain area makes it happen, or what past experience influenced it – can undermine the judgment’s credibility. A very simplified example: suppose we find out that some moral judgment is correlated with high activity in emotion-linked areas of the brain, and suppose we assume that emotion tends to distort moral thinking. Then we might conclude that this moral judgment is distorted and exclude it from our reasoning.
I don’t think that is a good argument (among other things, I don’t agree that emotion necessarily distorts moral reasoning). But I do think that arguments of this type are interesting, and we can’t just rule them out categorically, as some philosophers do. In particular, I think that learning the psychological origins of our moral judgments can sometimes be rightly disturbing to our moral self-conceptions. Suppose that I value a certain way of thinking or being in the world – I’m an egalitarian, for instance. But now the psychological evidence shows that, in reality, I don’t usually live up to these ideals, either in my behavior or in my spontaneous judgments. Maybe I’m biased in certain ways that I can’t detect through first-person introspection. Learning this fact can, and probably should, be disturbing to me, because it matters whether my self-conception coheres with the reality of what I’m like. So: it’s not that psychology produces general arguments regarding abstract moral propositions, but instead that it can make a rational difference in individual people’s personal ethical deliberations.
3:16: So how does the threat of regress debunk the debunkers?
RR: Take that toy argument from a moment ago, the one trying to debunk a judgment for being correlated with brain activity in ‘emotional’ areas. Suppose someone wants to press an argument along these lines. They will need to defend the normative premise ‘emotional processing produces distorted moral judgments’. And how will they defend that premise? They’ll need to present some other moral judgments to calibrate this one, along the lines of ‘the right thing to do in this situation is X, but emotion leads people to a different conclusion’. Now it’s very fair to ask about the psychological origins of these calibrating moral judgments. Which means we need to decide whether these psychological origins are distorting, which means relying on still further moral judgments, which have still other psychological origins, and those…. You see where this is going. Once we start down this way, we end up in a spiral of normative assertion and psychologizing that never reaches a dialectically satisfying endpoint. (There are a bunch of fiddly steps to this argument that I can’t reproduce here; interested readers should take a look at my ‘Debunking debunking’ paper.) The result is that putative debunking arguments are irresolvable.
So that’s a regress argument: debunking regresses back on itself endlessly. Interestingly, the Victorian ethicist Henry Sidgwick claimed that psychological arguments for general moral skepticism are vulnerable to this sort of regress, but he held out the possibility that we can use psychology to debunk specific judgments – perhaps to settle a disagreement. The upshot of my paper is that selective debunking is just as vulnerable to regress, so it’s not a good tool for handling disagreement.
3:16: You don’t use this to debunk the role of science in moral thinking though, do you? So can science identify good moral reasoning?
RR: It depends. Science alone certainly can’t do that (and I think most scientists would agree). As philosophers since Hume have incessantly pointed out, you can’t squeeze a normative premise out of an empirical description. The normative stuff comes from somewhere else, not from within science. However, if we already hold some normative standards, then science can help us identify how to make use of them in our personal ethical deliberations.
Go back to the same simplistic example. Suppose you are someone who, as a personal ethical ideal, is deeply committed to avoiding the influence of emotion in your deliberations; you very much want to be a Mr. Spock type. (I think that’s a horridly sterile form of life but, hey, we’re talking about you right now.) Science isn’t the source of your holding this ethical ideal, but it might help you apply it. Suppose neuroscientists can show that a particular line of deliberation which had seemed to you all calm and collected is actually correlated with extreme activity in emotion-related brain areas. Given your prior commitment to avoiding emotion, you now have a new science-derived reason to avoid following that line of deliberation. On my view, that’s a reasonable way in which science can make a difference in ethical deliberation.
Notice, though, that this doesn’t work as a form of abstract argument. It’s all well and good for you to use science to optimize your own Spock lifestyle, but if you come over and tell me that I am wrong to follow such-and-such line of ethical deliberation because it is correlated to emotion, then I am just going to shrug. There’s no reason I should care at all about your argument. I never agreed with you that emotion is such a bad thing, and the brain science hasn’t done anything to change that. (In fact, the brain science can’t do anything to change that, per my regress argument.)
A sloganish way of putting it: science can identify instances of good moral reasoning if we already agree on the standards for what counts as ‘good’ reasoning. But if we disagree about that (as is often true in the real world) then the science is pretty much irrelevant.
3:16: How then do we make empirical psychology itself normatively significant given that there seem to be pretty evenly weighted arguments both for and against giving psychology a role in ethical thinking?
RR: The key here is to appreciate the idea of the second personal perspective. Thinking about ethics from a strictly first personal perspective – here are my judgments and my reasons for them, who cares how I get them – tends to lead to dogmatic ignorance of science. But thinking about ethics from a strictly third personal perspective – here are the causal facts about how people come to have such-and-such judgments – is normatively inert.
I follow philosophers like Simone de Beauvoir and Stephen Darwall in emphasizing that ethical inquiry is essentially second personal. Ethical claims are calls that moral agents make upon other moral agents to limit their free choices in particular ways. When you make universal moral claims, you are effectively issuing demands upon the free agency of everyone.
Once we take that perspective, then we have a grip on one role that psychology can play. Psychology might show you that some moral judgment of yours is rooted in an idiosyncratic fact about your cognitive background; you form that judgment because of a psychological disposition that is not shared with the other agents to whom you are issuing second personal demands. Now that’s not, by itself, reason to conclude your second personal demands are illegitimate – doing that would require a different sort of argument. But it might rationally motivate you to revisit how hard you want to push your idiosyncratic perspective on others.
3:16: Given that as a result of your own ‘middle way’ approach we seem to have now three positions held by equally strong peers in this domain, wouldn’t the right thing to do be to dial down confidence in your own position and become agnostic – or is there something about your argument that crushes the other two and once they see it you win!?
RR: I’m not a big fan of thinking about arguments crushing anything. Actually, I’m fairly dubious of the extent to which philosophers fetishize arguments. Don’t get me wrong: I think it’s very valuable to learn how to think in argumentative terms, both formally and informally. I’m glad philosophers get that training; I think democratic societies would be better off if more people did. But it’s possible for a form of training to be very valuable, yet also dangerous when allowed to come unmoored from its purpose. Suppose you train to hold your breath for a long time. Then, one day, somebody gets trapped in an underwater cave and your skills are desperately needed – but instead of actually performing the rescue, you just stand there, showing off how long you can hold your breath. I think that’s what philosophers do sometimes.
That’s a bit of a digression. But it’s by way of saying that I’m ultimately not in the pounding-the-baddies’-arguments-into-the-ground game. I see my project as trying to articulate something illuminating about one way of approaching human experience. Sometimes arguments are a perspicuous way of doing that, but not always. I don’t expect I’m going to convince anyone who didn’t start out with at least some sympathy for my views. Maybe, in a few cases, I’ll help people recognize a sympathy they didn’t know they had.
3:16: You’re an expert in philosophy of ethics – but what is that? How should we test for philosophical expertise, and how shouldn’t we?
RR: Philosophers tend to be experts about names and texts in their philosophical traditions; I can say a lot more about Kant’s ideas than most folks can. But the interesting question is whether that sort of book learning leads to philosophers being better at reasoning within their domains. For instance, are philosophers better than most people in thinking about ethics?
It turns out there are many ways to think about that question. One is to ask whether philosophers’ training provides them immunity to certain vulnerabilities in reasoning. Psychologists have shown that many people’s judgments about ethical dilemmas are unstable. For instance, their moral verdicts flip around depending on the order in which options are presented. But maybe philosophers’ training shores up their judgments, making them less likely to display this sort of instability? About ten years ago, people started testing that idea by running actual philosophers through the same psychological tests as non-philosophers. Unfortunately, I don’t think this is a very good way of testing. My argument is a bit fiddly, hard to summarize here. But the key point is this: philosophers are spoiled as research subjects, because they tend to already have established opinions on typical moral dilemmas (e.g. the infamous trolley problem). Non-philosophers have to think through novel dilemmas, so the test really gets at their reasoning. But philosophers can just recite their Party Line as soon as they recognize familiar thought experiments. So it’s not so helpful to compare the results of the two groups, since they aren’t really doing the same cognitive task.
For what it’s worth, I’m not very invested in how these psychological tests turn out. Though I’ve published two papers on this ‘expertise defense’ of philosophical intuitions, I’m not sure I believe there is such a thing as expertise in ethics. Or, if there is, I’m not so sure it comes about through the kind of training philosophers get. If I need ethical advice, I don’t typically ask philosophers – I’m more likely to ask people who’ve had long and varied life experiences. I think philosophers are very good at highlighting inconsistencies between ethical judgments and at articulating principles that link judgments together. But neither of those is the same as making the right ethical judgments.
3:16: Does training in philosophy enhance the reliability of your intuitions about moral judgment? What’s the role of analogy in this?
RR: Philosophers disagree about how to characterize even the subject matter of morality. Is moral philosophy a conceptual, a priori discipline like mathematics or logic, something you can figure out from your armchair? Kant thought something like this. Or is it more like medicine, where you have to go and try stuff out to find your answer? Aristotle had a view like that, and John Dewey more recently.
There is (or maybe: was, until recently) a pretty sharp disciplinary divide on this question, with entire departments falling to one side or another. For instance, when I was doing my dissertation at NYU in the ‘00s, it was taken for granted that ethics is a strictly conceptual inquiry; one member of my grad cohort once started a presentation by declaring, to no voiced objection, that “everyone agrees that ethics is an a priori discipline”. Now I didn’t agree, but I thought I was just a lone heretic. It took me a couple years even to realize that ethicists at nearby departments like Rutgers and Columbia weren’t on board with the a priorist ‘consensus’. But a lot of my early work was – unwittingly – about trying to convince my grad school profs that they were wrong about the thing I’d been taught was simply a given (I think a lot of people’s early work fits that rubric!).
So I have a paper that’s nominally about an epicycle in the expertise debate, but really about trying to leverage that debate to strike a blow against the a priorists. The short version: Jesper Ryberg wrote a sharp little paper showing that expertise in moral philosophy doesn’t seem to work in the same way as expertise in mathematics. The upshot is supposed to be doubt about moral expertise. But I pointed out that Ryberg’s argument appears to get the wrong answer when applied to expertise in domains like medicine – his argument seems to imply there’s no such thing as medical expertise either. Obviously that can’t be true, so instead it looks like we need to have different criteria for expertise in domains like mathematics and domains like medicine. And – here’s the upshot – if Ryberg is right that moral expertise doesn’t look like mathematical expertise, then it had better look like medical expertise instead! (Or, of course, you can abandon the idea of moral expertise, but my targets here usually don’t want to do that.) So the argument is a bit of a trap for the a priorists: I’m offering an out from a particular kind of skepticism about the discipline, but the cost is giving up hardline a priorism.
I wrote that paper nearly a decade ago (it took a while longer to get published) and I’m no longer so proud of it. I think it has a bit too much of the overly clever dialectical jujitsu that analytic philosophers find engaging. When I wrote it I was still trying to prove that I belonged, I suppose, so I tried to sneak my challenge to a priorist orthodoxy in the back door of another debate. These days I’m much more willing to say it directly: I think strict a priorism in ethics is myopic and unreal. But I’m also less interested in fighting about it, and I don’t need to pretend to be defending expertise in order to make the point.
3:16: You’ve looked at particular issues so let’s turn to them now: so how do you think we should approach the morality of abortion, a very controversial topic especially in the USA. You argue that women seeking an abortion have an obligation to view the ultrasound before making their decision even though you don’t think women should be compelled to do so. What’s your reasoning here – and if they ought to, why not compel them to?
RR: It's a bit more complicated than that. I don’t think there is a free-standing obligation to view ultrasound images. Rather, I think there is a general obligation, one we all have about all issues, to make ourselves available to sincere persuasion from people who morally disagree with us. When I say it generally like that, the point sounds cheaply pleasant; yes, of course, we should all listen to each other more. Peace and love, etc. But then when you apply it to specific highly contentious debates, the idea turns out to be pretty demanding. So I wrote about abortion as a kind of stress test for the obligation to be open to persuasion; if you can still accept that idea even after thinking through such a hard case, then you’re being honest about what openness to persuasion means.
So, a quick version of that: many Americans think abortion is wrong, and some of them think that looking at ultrasound images can help others see this (alleged) moral truth. So viewing ultrasound images is one way of living out the obligation to be open to persuasion. (Symmetrically, of course, anti-abortion folks also ought to engage with images or arguments that pro-choice advocates put forward as persuasive.) Notice that this obligation isn’t to the fetus, as a typical pro-life argument would have it, but rather to the person trying to offer moral persuasion. This means that in a society where no one is trying to persuade anyone about abortion, there’d be no obligation to view ultrasound images. And the obligation to be open to persuasion is what philosophers call an imperfect duty. It’s like charity; you don’t have to do it all the time, with every single case of disagreement, but you go wrong if you never do it at all. This allows that there will be plenty of real-world cases where people are right to avoid a particular type of persuasion, as in some stressful choices around abortion.
As to why not to compel women to view ultrasound images, as several US states have tried to do: that’s because compulsion is completely incompatible with the kind of interpersonal respect that makes moral persuasion valuable. We care about moral persuasion because that’s how we show that we value one another as rational moral thinkers. But forcing a person to look at an image (or read an argument, etc) amounts to denying respect for their ability to decide how to engage with moral reasons. It’s overbearing and cruel, and it betrays an instrumentalizing attitude toward other people. A society that genuinely values independent moral thought would never have laws like this.
3:16: Another issue is taking offense and responding to microaggression. Why don’t you think that we should just grow a thicker skin and not be so touchy, nor that we should get angry? What should we do and why – and can this be generalized to all kinds of situations where we might take offense or is it only in cases of microaggression?
RR: A microaggression is a small act of insult or indignity, relating to a person’s membership in a socially oppressed group, which seems minor on its own but plays a part in significant systemic harm. Some people claim that microaggression victims should just ‘grow a thick skin’ and ignore the insult, but what this misses is that microaggressions aren’t just random insults. They are systemic; they happen to the same people (people of color, women, LGBT people, people with disabilities) over and over again. Ignoring repetitive and systemic insults means that they’ll never stop. So if we want to make the world more fair and decent for everyone, then we can’t demand that marginalized people just accept being insulted over and over again; we need to pay attention to microaggressions, because they are clues about how to break oppressive patterns.
As for anger: I don’t think it’s wrong to be angry about microaggressions. After all, anger is a reasonable response to insult, especially incessant patterns of insult. But I do think that effective advocacy for social change sometimes requires being tactical about displaying emotions. It certainly isn’t fair, but sometimes people with every right to be angry will nevertheless be more effective if they avoid emotional expression (something I suspect every marginalized person has experienced). Of course, there are some circumstances where well-channeled anger can be extremely effective in provoking reflection and change. And sometimes marginalized people just need to vent, whatever the consequences for effective advocacy. So it’s all very complicated. In fact, I’ve written a whole book about it, The Ethics of Microaggression, which will be coming out in the autumn.
3:16: You don’t think we can deal with the notion of ‘fake news’ by changing individual epistemic practices and that there needs to be changes to institutions such as social media platforms and the like. So what do you see as the problem with fake news – is it always bad – and why isn’t it down to individuals wising up to the way contemporary information channels work? After all, given the way they work, there’s really not much chance of changing the institutions if fake news is as powerful as its opponents say it is – they’ll just spread more fake news to destroy countervailing forces won’t they?
RR: Most people use social media to keep up with loved ones and find a bit of entertainment. They aren’t there to do epistemic labor. And even if they tried, it probably wouldn’t work. In ordinary life, we judge information from others partly on the basis of the speaker’s epistemic track record (are they a liar? A fool?). Social media confronts us with so many ‘friends’ and friends-of-friends that it’s unrealistic to keep track of how reliable everyone is. By the time you find out that a piece of news was untruthful, you’ll likely have forgotten who posted it. So it’s not very helpful to suggest that people should just be more epistemically responsible. Sure, it would be nice if they were, but given the purpose of social media, and the way its crowded information channels overwhelm our traditional accountability practices, we shouldn’t expect that advice to have any effect. People are making individually reasonable judgments about the degree of effort they exert to interrogate online sources, and they are using individually reasonable heuristics for judging their friends’ reliability.
The problem is that individually reasonable choices can add up to a huge collective problem. Partisanship, for instance, is one of those individually reasonable heuristics for judging reliability – but after we iterate partisan sorting a few times, we end up with polarized, belligerent echo chambers that are primed for unaccountable rumor mongering. Fake news is designed to exploit those pathways. If we want to deal with it, we need to redesign social media platforms to make individual good behavior as easy as possible. That means offloading the costs of epistemic virtue from users – what I call ‘deliberation infrastructure’. For instance, I’ve argued that we can restore some of the epistemic value of reputation by having social networks track and display each user’s propensity to spread fake news. Rather than make people try to remember which of their friends are epistemically irresponsible, we let the algorithm keep tabs for them. There are lots of wrinkles to this proposal, but I think it points toward the sort of strategies we need to be designing.
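To make the mechanism concrete, here is a minimal sketch in Python of what such platform-side reputation tracking might look like. It is purely illustrative: the class name, the flagging mechanism, and the scoring rule are all assumptions introduced for this example, not details of Rini's actual proposal.

```python
# A minimal, illustrative sketch of 'deliberation infrastructure':
# the platform (not the user) keeps a tally of how often each account
# shares stories later flagged as fake, and surfaces that record as a
# reputation signal next to the account's posts. The names and the
# scoring rule here are assumptions, not Rini's specification.
from collections import defaultdict


class ReputationTracker:
    def __init__(self):
        self.shares = defaultdict(int)   # total stories shared, per user
        self.flagged = defaultdict(int)  # shares later flagged as fake, per user

    def record_share(self, user: str, flagged_as_fake: bool) -> None:
        """Log one shared story, noting whether it was later flagged."""
        self.shares[user] += 1
        if flagged_as_fake:
            self.flagged[user] += 1

    def reliability(self, user: str) -> float:
        """Fraction of the user's shares NOT flagged (1.0 = clean record)."""
        total = self.shares[user]
        return 1.0 if total == 0 else 1.0 - self.flagged[user] / total


tracker = ReputationTracker()
tracker.record_share("alice", flagged_as_fake=False)
tracker.record_share("alice", flagged_as_fake=True)
print(f"alice: {tracker.reliability('alice'):.2f}")  # 0.50, displayed beside her posts
```

The design point the sketch captures is the offloading: the reader never has to remember anyone's track record, because the platform computes and displays it for them.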
3:16: Many philosophers argue that we think of others as moral agents rather than as causal or statistical objects. You ask why we do that, and you seem to be arguing that we try to forget, or be inattentive to, the facts about causality and the contingent origins of our moral judgments. Can you flesh out your view and then say why you think this doesn’t actually concede the point that we aren’t really moral agents after all – just as someone like Nietzsche argues? Wouldn’t it be a better approach to do as someone like Thomas Pink does when he engages in the debate between Suarez and Hobbes and tries to find an alternative power to causal power that allows agency?
RR: I’m glad you asked about this topic, because I think it’s probably the hardest question in philosophy, and I don’t really know the answer – which is why almost all my papers are, obliquely at least, different attempts to address it. My starting point is: if we’re being totally honest, humans are just causal/statistical objects. We’re not much more than complicated string-and-pulley mechanisms. But we certainly don’t think about ourselves that way. In fact, it seems almost impossible to take seriously many of our most fundamental points of view – as responsible moral agents, as people who have reasons for our beliefs – while simultaneously thinking of us as just more stuff bouncing around the universe. So it seems like we need some perspective from which we treat ourselves as not just causal/statistical, but something more.
So far, what I’ve said is only what many Kantian moral philosophers say. But I think most Kantians stop short of the really tricky part. I think that our ability to see ourselves as more than just causal/statistical stuff relies essentially on our relationship to other people (remember what I said about ethics being essentially second personal?). We’re all engaged in an elaborate, life-long game of play-pretend where we think and talk as if we’re more than just the universe’s wind-up toys, and the game works because everyone else plays it along with us. But sometimes we’re tempted to selectively stop playing the game with some people – especially people we disagree with. Sometimes we try to get an advantage over them by suddenly treating them like causal/statistical mechanisms after all, such as when we try to debunk their moral judgments. That’s a form of disloyalty or defection. It amounts to selectively refusing to make the next move in the hey-none-of-us-here-are-the-wind-up-toys-of-the-universe game. That’s a bad way to treat people.
Notice – and I’m glad you’re giving me the chance to point out how these parts of my work fit together, since there’s never space in any one paper – that this produces a complex asymmetry in how we make use of psychology in moral practice. On my overall picture, it’s a bad thing to try to use psychology to debunk other people’s moral judgments, both because of the problem with debunking we discussed earlier and now also because it’s a defection from the not-a-wind-up-toy game. But that’s compatible with people choosing to make use of psychological information about themselves, along the lines of the self-conception point I made at the start. Psychological information can be a valuable therapeutic tool in reflecting on your own life, but it’s a nasty way to try to intervene in other people’s deliberations. Put another way: it’s rational when critically reflecting on how you conduct your own second personal interactions, but it almost never belongs within those second personal interactions.
3:16: And finally, for the readers here at 3:16, are there five books you can recommend that will take us further into your philosophical world?
RR:
The Ethics of Ambiguity by Simone de Beauvoir
The Sources of Normativity by Christine Korsgaard
Moral Aims by Cheshire Calhoun
The Age of Wonder: The Romantic Generation and the Discovery of the Beauty and Terror of Science by Richard Holmes
Middlemarch by George Eliot
Regina Rini has also done a great deal of original public philosophy which can be read here.
ABOUT THE INTERVIEWER
Richard Marshall is biding his time.
Buy his second book here or his first book here to keep him biding!
End Times Series: the index of interviewees
End Times Series: the themes