Frances Egan interviewed by Richard Marshall.
Frances Egan is a mind-bombing philosopher who wonders about the explanatory frameworks of science, the fits and starts of mind evolution, the links between neuroscience and meaning, the redness of tomatoes, the difference between horizon and zenith moons, fMRI interfaces with philosophy, mind/computer uploading and the consciousness of the USA. All in all, she is a deep groove hipster of the philo-mindster jive. Awesome!
3:AM: What made you a philosopher and has it been rewarding so far?
Frances Egan: I read some political philosophy on my own in high school, but I wasn’t exposed to philosophy systematically until college. I took a philosophy course in my first semester because I was looking for something different. After a brief introduction to logic we discussed the problem of evil: how could an omnipotent, benevolent god allow so much pain and suffering? I was raised Catholic but that was the end of religion for me. Nothing quite that dramatic has happened since, but thinking about fundamental questions and trying to go where the arguments lead continues to be rewarding and fun.
3:AM: You’re a philosopher of mind. Why should we listen to philosophers now that scientists are finding out about minds?
FE: Philosophy is as important now as ever, even given the rapid pace of empirical discoveries about the mind. Science does not always present its findings in a way that makes obvious the implications for our entrenched ways of thinking about things, nor is science always interested in such implications. For example, special relativity undercut the idea that two events can be simultaneous absolutely, rather than merely relative to a frame of reference. One of the jobs of philosophers is to draw out these implications.
In addition, I see philosophy of mind, and philosophy of science more generally (I take philosophy of mind to be a sub-field of philosophy of science), as concerned with foundational issues in the sciences, questions about how the explanatory frameworks of the sciences are to be interpreted. Of course, many scientists are interested in foundational issues in their disciplines (theoretical physicists, for example, and many cognitive scientists), and these issues often bear on perennial philosophical questions. So I see the work of philosophers of science as continuous with that of scientists working on foundational issues in the specific disciplines.
3:AM: One of the big theories of how our minds work is the one developed by Jerry Fodor. This is a theory that is realist about intentionality: that is, it explains the behaviour of people by appeal to their mental states, and these mental states can represent the world and provide the reasons for the behaviour. Is that right? Before assessing this approach, can you sketch the general picture that Fodor develops?
FE: Yes, that is roughly right. Fodor, like the overwhelming majority of philosophers of mind, is an intentional realist: he holds that overt behavior is caused by mental states that represent the world as being a certain way, and so can be true or false, accurate or inaccurate, depending on whether the world really is that way. My belief that it is raining causes me to grab an umbrella as I rush out the door. If the wet road that prompted my belief was actually caused by an early morning street-cleaning, then my belief is false and the action that it caused is inappropriate. Moreover, these mental states are roughly described by a loose system of platitudes – called ‘folk’ or ‘commonsense’ psychology – that we all know implicitly and use to predict and explain the behavior of other people.
Fodor goes much further. He thinks that the intentional mental states (beliefs, desires, hopes, fears, and so on) that cause behavior are essentially linguistic in nature. To believe that it is raining is to have a structure in one’s head – literally, a sentence – that means that it is raining. According to this language of thought picture, thinking is literally inner speech.
3:AM: So how do you assess this approach? I think you like the way it keeps commonsense psychology, but there are difficulties too for you in terms of how the whole programme fits together. Is that right?
FE: I do want to preserve commonsense psychology, but we need to be clear what we are preserving. In general, commonsense fixes on phenomena that impact our lives; it sets the preliminary agenda for science. But often it merely gestures at the phenomena and shouldn’t be taken too seriously as explanatory theory. This is true of commonsense psychology. It’s a loose collection of platitudes like ‘people tend to act out of their beliefs’ and ‘people tend to avoid what they don’t like’; hardly much of a theory, and certainly not ‘protoscience’.
The explanations of behaviour it provides are pretty shallow. It holds that our behaviour is caused by our beliefs and desires, which it identifies by their contents. My belief that it is raining and my belief that flights to Paris are really cheap right now cause different behaviours. But commonsense doesn’t tell us much more about the kind of thing beliefs really are. They may be sentences in the brain, though I doubt it. They may be more global properties of the subject, dependent on widely scattered areas of the nervous system. Commonsense just doesn’t care how beliefs and desires have their causal powers.
Fodor’s language of thought thesis – the idea that thinking is inner speech – goes well beyond commonsense psychology. It aims to describe the internal processes that produce behavior. There isn’t much empirical support for this view. It’s a very elegant picture – it’s the way that god, unencumbered by any real-world constraints, might have designed minds – but it’s not likely to be the way that minds developed naturally, as a product of evolution, in fits and starts.
3:AM: The idea of the mind as a computer raises the question of how we can get meaning from a system whose operations are defined only over formal, non-semantic properties. If this is a genuine problem then doesn’t folk psychology get eliminated? Is it a genuine issue?
FE: It is a genuine issue. We can’t expect to find meaning by looking at the mechanical processes that constitute the operations of the computer. Analogously, we can’t find meaning by looking at the electrical and chemical activity in our brains. But then how does meaning arise? How could a physical system (whether a computer or a human), located, as I am right now, in New Jersey, really have thoughts about Paris?
The answer must depend, in part, on how the system is situated in the world, on its relational properties, though specifying what these relations are, and how they confer meaning, is a hard problem. Folk psychology presumes that our mental states do have meaning – it presumes that I can genuinely have thoughts about Paris – but it has nothing to say about how this is possible, or whether computers could really have thoughts about Paris. So whatever the answer, it won’t be grounds for eliminating folk psychology.
3:AM: So is it your belief that folk psychology can’t impose significant constraints on any account of cognitive architecture? Wouldn’t that mean that no account of the mind need worry about folk psychology then? Threats of eliminativism are illusory?
FE: Yes, exactly. Of course, we might find out that many of our commonsense views about the mind are wrong. For example, research suggests that some of our actions are under less conscious control than we think. But no empirical research is going to show that mental states never cause behavior or that they can’t be evaluated as true or false. Folk psychology has nothing to say about what semantic evaluability amounts to.
The language of thought thesis claims that beliefs and desires are relations to sentences in the head, and sentences can be straightforwardly true or false, but folk psychology doesn’t itself require anything this neat. As I suggested earlier, it may turn out that the states that realise beliefs and desires are highly distributed in the brain, perhaps more accurately described as dispositions (like, for example, fragility), in other words, they may turn out not to be the kinds of things that can literally be true or false. That’s fine; whatever beliefs are, they will still be describable by content sentences which can be true or false, and that is enough to preserve folk psychology.
So my view is that folk psychology won’t be threatened by anything we are likely to discover about the mind. Of course, if it turned out that our behaviour is actually caused remotely by scientists on Mars, then folk psychology would be in trouble. I’m betting that won’t happen.
3:AM: One of the significant disputes that naturalistic semantic theories like Fodor’s address is the one about whether meaning is use. Even those outside the academy will probably associate that formula with the work of the late Wittgenstein. It’s a huge element of much contemporary philosophy of language, and has been influential in various forms in a wide range of hermeneutical projects. But Fodor’s representational theory of mind and other such theories argue that the meaning-is-use theory can’t accommodate representational error. Is that right? Can you say what the argument is? You like the argument, don’t you?
FE: I think the idea is something like this: if the meaning of a thought is determined by how it functions in cognition then how could the thought ever be false, how could it misrepresent, since the uses that we intuitively think are false will be part of the meaning-determining base? The solution is to privilege a subset of the full range of possible uses – just those corresponding to true uses – though characterising this subset without trivialising the whole project is notoriously difficult. We can’t just stipulate that they are the true uses or we’ve argued in a circle.
A similar problem arises for the so-called ‘denotational’ theories that Fodor favours, where meaning is determined by a ‘naturalistic’ relation – say, a causal relation – holding between the mental state and the object or property it represents. So, simplifying somewhat, my thoughts about cats are caused by the presence of cats; that’s what makes them thoughts about cats. And here it seems we need to relativise to normal conditions, which privileges just those cases actually caused by cats – and not the sort of case (in bad light, etc.) where my thought ‘a cat!’ is actually caused by a large rat – to get the possibility of representational error. But, again, it is notoriously difficult to specify, in a non-question-begging way, just what these conditions are. So I don’t think the argument from the possibility of representational error favours denotationalist semantic theories over ‘meaning as use’ theories.
3:AM: You argue that the role of content in the computational-representational theory of mind is inflated, and that the use of content in computer models of mind is really quite modest. You say we should understand content as a gloss, nothing more. Is that right? So why does this matter?
FE: I said earlier that philosophers of science are concerned with how the explanatory frameworks of the sciences are to be interpreted. My work in this area is an example. Most philosophers who are interested in the mind presume that computer models of mental capacities explain how a physical system represents the external world by positing inner states that bear some ‘naturalistic’ relation (for example, a causal relation) to what they are about. The models are often cited as support for the idea that the meaning relation just is this naturalistic relation.
I argue that the role of content in computer models of mind is quite different. The models make use of content descriptions in providing explanations of processes that are not fundamentally regarded, in the models, as intentional. The real explanatory work is done by the mathematical characterisation of a cognitive process. For example, theories of vision explain how the visual system enables the organism to see the three-dimensional structure of the scene from two-dimensional retinal images by positing that the system computes a particular mathematical function defined on abstract properties of the image.
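To give a feel for the kind of function involved, here is a deliberately simplified illustration (a textbook sketch of my own choosing, not drawn from any particular model of human vision): in binocular stereo, the depth Z of a scene point can be computed from purely image-level quantities – the disparity d between the point’s positions in the two retinal images – together with two fixed parameters, the focal length f and the interocular baseline b:

Z = (f · b) / d

Everything on the right-hand side is an optical or image-level quantity; nothing in the computation itself is ‘about’ depth.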
The content description is a gloss on the mathematical characterisation; it serves various heuristic purposes (for example, it connects the mathematically described process with our commonsense way of regarding mental processes as intentional) but these content descriptions could be eliminated without loss of explanatory power.
This matters because computer models of mind are trying to give a naturalistic explanation of cognition. Yet these models typically ascribe contents that go beyond anything that is determined by a purely naturalistic relation. So if they are trying to explain meaning by appeal to a naturalistic relation, as most philosophers of mind seem to assume, then they are doing a really bad job of it. And if they are presuming meaning in their models then they are not making much progress on the problem that philosophers of mind are interested in. They are trafficking in more of the same mysterious stuff.
On my view (where meaning ascriptions are just a helpful gloss) we can understand the project as making progress on explaining our cognitive capacities, including our representational capacities, naturalistically, in terms that do not make essential use of the problematic phenomenon of meaning.
3:AM: So how would this approach help us understand, say, the content of my seeing the redness of a tomato? Does this approach have implications about how we should think of things like ‘red’, for instance, whether the red is in us or in the tomato, that kind of question?
FE: Perceptual experience presents us not just with a red round patch – it presents an object, a red tomato; in other circumstances it presents a book, a tree, and so on. I suspect that the content of experience is determined by many factors, certainly by the factors identified by theories of visual processing (including, of course, properties of the light impinging on the retina), but also by the way we think of ordinary objects, something that we don’t currently have the theoretical resources to explain.
As in the computational case, content plays a complex role; it situates phenomenal experience in the broader context of everyday life. My best guess is that, when all the processes responsible for perceptual experience have been identified, content ascriptions will turn out to have an ineliminable pragmatic element; they will play something like the role of a gloss that connects the scientific account with our wider concerns. Perceptual experiences have accuracy conditions – if there is nothing there, or the light is bad and the tomato is really green, then the experience misrepresents. Here’s a concrete example of what I mean. Suppose you are in fact looking not at a tomato but at a ‘tomato façade’ – an empty shell cleverly designed to look, from the front, like a real tomato.
But what should we say about the experience of the tomato façade? Does it misrepresent? We’re inclined to think that in some sense it does, because we expect experiences to integrate smoothly with our conceptions of how things normally are: objects have backs congruent with their facing surfaces. Considered in this context the experience is certainly misleading. One of the jobs of content is to facilitate this integration, and this may be partly a pragmatic matter that goes beyond what is strictly entailed by our theories.
My view of content has no direct implications for whether red is in the world or in us. Whether color is an objective property of the world or a subjective property of experiences is not a question that science will answer. This is not to say that optics and visual science are irrelevant to settling the question – not at all; just that it isn’t a scientific question. We know that there are no simple, objective, physical properties that correlate with the colors (red, green, blue, etc.) that we experience. But I don’t think we should conclude, as some philosophers do, that colors are in the head, that it is the experience that is really colored and not the tomato. That view does too much violence to a concept that is central to our ordinary lives.
This leaves a number of possibilities: colour is objective, but it’s a very complex physical property, perhaps some mathematical function defined on reflectance properties of surfaces, a property that has no salient description, beyond affecting our sense organs in characteristic ways. This view belongs to a family of views that David Hilbert calls ‘anthropocentric realism’. Or colour may be a disposition of surfaces to cause certain kinds of experiences in us. This view, proposed originally by John Locke, makes colour partially subjective, but it locates it out in the world where it should be.
Colour theorists have worked out various proposals in detail. All of them have some counter-intuitive consequences. Ultimately, in my opinion, it will be something of a decision as to which proposal preserves most of what we want colour concepts to do.
3:AM: Although your work is very cool and cutting edge, the issues are old and have been tormenting us for ages. One of the old questions that you tackle is the moon illusion, which Berkeley tried to explain. Can you say what this is and why you say this illusion survives modern attempts to explain it?
FE: The moon looks bigger when it is near the horizon than when it is higher in the sky. Various physical explanations of the illusion were proposed in antiquity. The refraction hypothesis – the idea that the earth’s atmosphere refracts the light from the horizon moon, magnifying its image – was the most popular, but we know now that horizon and zenith moons cast the same-sized image on the retina. So something weird is going on inside the head.
It is widely thought that the perceptual psychologists Lloyd Kaufman and Irving Rock explained the illusion in the 1960s. Their solution is essentially the same as Descartes’ proposal in the 17th century: the horizon moon looks farther away because it is usually viewed behind buildings and trees which convey a sense of distance; the visual system uses this distance information and innate knowledge of geometry to unconsciously calculate a larger size for the horizon moon.
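The calculation being attributed to the visual system here is easy to state (this is the standard size–distance invariance idea, in a simplified form of my own): an object subtending visual angle θ at registered distance D is assigned a linear size of roughly

S = 2 · D · tan(θ/2)

Horizon and zenith moons subtend the same angle – about half a degree – so if the registered distance D is greater at the horizon, the computed size S comes out proportionately larger.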
One problem with this proposal is that according to most observers the horizon moon looks both larger and closer, giving rise to the so-called ‘size-distance paradox.’ Alternative explanations have proliferated since the 60s, but none account for all the data. Bishop Berkeley’s empiricist proposal, which appeals to learned association, was thought to be conclusively refuted by Kaufman and Rock, but it is still in the running.
A number of factors account for the stubbornness of the puzzle. There may not be a single psychological mechanism underlying the illusion. Illusions typically arise from complex interactions involving both fixed or structural features of the visual system and “higher-level” or “cognitive” processes that may be influenced by learning. Explaining an illusion requires disentangling and independently specifying each contributing factor; confirming the account requires designing experiments that pull these factors apart. But the moon illusion raises some special problems.
The moon is a strange sort of perceptible object. (So are the other astronomical bodies – the sun, the constellations.) Unlike the tomato we talked about earlier, it isn’t obvious that the moon appears to be three dimensional, rather than a luminescent disc. And it isn’t obvious when we report that the horizon moon looks bigger that we are saying that it appears to be a bigger object, rather than reporting that it takes up a bigger expanse of our visual field.
This distinction matters. When I move my hand in toward my nose, my hand doesn’t appear to get bigger, though it certainly takes up more of my visual field. We don’t know whether the processes that account for the shape and size constancies of ordinary objects as they move around us (or we move around them) are at work in our perception of the moon. And distance judgments are equally suspect. Subjects report that the zenith moon looks farther away, but typically can’t say anything intelligible when asked “how far away does it look?” (Nothing can look a quarter million miles away.) So the puzzle persists, in part, because we don’t have a precise specification of what needs explaining.
3:AM: Cognitive neuroscience is thriving. But you argue that biological creatures like us have to be explained in a certain way and that both ‘bottom up’ and ‘top down’ approaches won’t work. First, can you sketch these two approaches that you say are no good? And who is associated with them?
FE: The bottom-up approach assumes that the way to understand cognition is by first investigating small-scale cellular processes in the brain and then extrapolating (‘scaling up’) from these processes to complex brain processes, eventually reaching something that is identifiably cognitive. Proponents include Hubel and Wiesel, and Horace Barlow. This approach has been caricatured as the search for the ‘grandmother neuron’. The problem is that a complex system can rarely be understood as a simple extrapolation from the properties of its basic constituents.
According to the top-down approach the theorist of cognition should begin with a well-developed psychological theory of a cognitive capacity, typically an intentional theory, and then look for implementing mechanisms. On this view, psychology is autonomous and imposes strong a priori constraints on accounts of biological mechanisms. What is left for neuroscience is just to provide the details of how the psychological theory is implemented in us. Proponents of the top-down approach typically claim that the brain is an information processor with a ‘rules and representations’ architecture.
This architecture posits structures that explicitly represent just the information that the psychological theory attributes to the organism in the exercise of the capacity. In positing these ‘intentional internals’ the top-down approach can be seen as a scientific development of the commonsense picture. Proponents include Jerry Fodor and Zenon Pylyshyn, Randy Gallistel, and to some extent David Marr.
3:AM: So your alternative is what you call a ‘neural dynamic systems’ approach. Can you tell us how it approaches the problem?
FE: The approach that Robert Matthews and I call ‘neural dynamic systems’ understands cognitive neuroscience as a largely autonomous explanatory project, aiming to characterise the processes responsible for brain-based cognition in its own terms and not hostage to constraints imposed by intentional psychology. Fundamental to the approach is the idea that complex systems such as the brain can only be understood by finding a mathematical characterisation of how their behaviour emerges from the interaction of their parts over time.
A promising example of this approach is the work of Karl Friston and other ‘dynamic causal modelling’ (DCM) theorists. DCM tries to build a realistic model of neural function by modelling the causal interactions among anatomically defined cortical regions, based mainly on fMRI data. These cortical structures undergo time-dependent changes in activation and connectivity in response to both sensory stimuli and specific contextual inputs such as attention. DCM describes the dynamics of these changes by means of dynamical equations.
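At the core of DCM is a single bilinear state equation (this is the standard form due to Friston and colleagues, with the haemodynamic details stripped away):

dz/dt = (A + Σ_j u_j · B^(j)) · z + C · u

Here z is a vector of activity levels in the modelled cortical regions, u a vector of experimental inputs, A the fixed connectivity among the regions, each B^(j) the change in connectivity induced by input j (attention, say), and C the direct influence of the inputs on the regions. The fitted parameters A, B and C just are the model’s claims about structure and function; a separate forward model then maps z onto the measured fMRI signal.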
The success of dynamical approaches in other sciences – physics, biology, ecology, economics, and so on – suggests that a similar approach in neuroscience might pay off. If theorists were to develop a set of dynamical equations that describe the behaviour of the brain in the course of various cognitive tasks, then what many take to be the ‘Holy Grail’ of cognitive neuroscience – the specification of the relationship between neural structure and function – might be within reach.
3:AM: So how does this overcome the problems the other two approaches don’t?
FE: For a number of reasons neuroscientists have had trouble specifying structure-function relationships for identifiable cortical areas. Functional responses in the cortex are highly context-sensitive. Cortical areas don’t operate in isolation; they are connected to many other areas by anatomical long-range ‘association fibres’. The upshot is that the nice mapping between function and anatomical structures that proponents of bottom-up and top-down approaches have hoped for hasn’t been forthcoming.
The behaviour of a particular area cannot be predicted from local micro-structure alone, as bottom-up theorists hoped. And the prospect of finding neural structures that implement functional or intentional states characterised independently at a higher level of theory, as top-down theorists propose, looks bleak, given the apparently dynamical character of neural processes and the fundamentally non-dynamical or static character of the structures and processes characterised at the cognitive level. Adopting a dynamic systems approach to neural processes, not tightly constrained by cognitive-level theory, looks more promising for uncovering how cognitive function depends causally on structure.
The rap against the neural dynamic systems approach is that it won’t provide genuine explanations of cognition, in particular, that it won’t explain why the outputs of cognitive processes are typically rational. Only by positing intentional internals – representations and computations over their contents – top-down cognitivists claim, can a theory explain activity that looks anything like cognition. But in the absence of independent evidence that the brain has such intentional internals – independent, that is, of our commonsense way of characterising mental states, in terms of their content – positing intentional internals isn’t really explanatory.
There is a certain heuristic pay-off in positing internal structures that explicitly represent just the knowledge that psychological theories attribute to organisms in their exercise of a cognitive capacity. The picture that results is breathtakingly simple; it meshes nicely not only with intentional psychology but also with commonsense. Unfortunately, there is no more reason to think that explanatory transparency is a guide to truth in the cognitive sciences than in any other natural domain.
3:AM: Fodor criticises those people like Pinker, who think a computerised representational theory of mind can explain everything – but he also thinks it’s the best model we have at the moment. He kind of thinks we’re stuck. He thinks much of the neuroscience he reads about isn’t helping. What’s your view – is he right to doubt that the approach will explain everything even if nuanced in the way you suggest? Are you pessimistic or optimistic?
FE: Fodor’s pessimism is due in part to his support of the top-down approach to the mind. He thinks that intentional psychology is autonomous and that neuroscience will explain, at best, only how representations and the rule-governed processes defined over them are implemented in neural matter. And he is right that much of cognitive neuroscience isn’t advancing the project as he conceives it. There is very little understanding of the neural or computational underpinnings of so-called ‘central processes’ – high-level cognitive processes such as deductive reasoning, planning, problem solving – precisely the areas where intentional (content-based) psychology is most highly developed. The capacities that are better understood computationally and neurally, for example, early vision and motor control, are the domains in which intentional psychology has had little to say.
I am optimistic that cognitive neuroscience will eventually explain the computational and neural basis of relatively low-level sensory and motor processes. It is already doing pretty well here. Central processes, those characterised by commonsense in belief-desire terms, will be much less tractable, in part for the reason that Fodor notes: their functional roles are much more complex (including, often, accessibility to consciousness). But here, as elsewhere, we should resist the temptation to take our intentional characterisations of these capacities – characterisations rooted in commonsense – as anything more than a pretheoretic gesturing at a phenomenon to be explained, and certainly not a serious proposal for understanding the mind.
3:AM: Pete Mandik looks forward to the day when his mind is uploaded into a computer and he lives forever there. Eric Olson says that’s impossible because we are animals and our identities are fixed by our bodies. What do you think?
FE: Could our minds be uploaded onto a computer? No. This idea presumes that the mind is just information. If that’s right it should be possible to store the information on a different machine. But some of the information concerns how to extract information from the world with the particular sensory systems that humans have. And some of it concerns how to engage with the world with the particular motor systems that we have. And some of it concerns how the various cognitive systems share the computational resources fixed by our neural architecture. And so on.
After the information that is essentially tied to our particular form of embodiment is stripped away, it isn’t clear that what remains would deserve to be called a mind. It certainly wouldn’t bear much resemblance to our minds. The upshot is that the mind can’t be divorced from the body. In fact, the idea that the mind is just information is Cartesian dualism in contemporary clothes.
3:AM: Eric Schwitzgebel has been looking at some zany cool ideas – for instance, that a representational theory of mind suggests that the United States is conscious. Does it?
FE: Schwitzgebel argues that materialist accounts of consciousness suggest that the United States is conscious. The representational theory of mind is a materialist theory, but it is vulnerable to Schwitzgebel’s argument only if it purports to explain consciousness. Fodor thinks that phenomenal consciousness is outside its explanatory scope, and I am inclined to agree.
With that qualification, then, let’s look at Schwitzgebel’s argument. Here’s the gist: if materialism is true, then the reason that we are phenomenally conscious and tables are not is that our matter is organised in the right way and the table’s isn’t. The ‘right way’ is the sort of organization that gives rise to the ability to monitor one’s own states and adjust one’s subsequent behavior in light of this feedback.
A materialist should be willing to grant that relatively stupid animals such as rabbits have this ability, and that aliens who don't have neurons at all could have it. So any reasonable materialist should grant that rabbits are phenomenally conscious and that aliens could be. And any reasonable person who accepts that should be willing to accept that group entities could have it. There doesn’t seem to be anything in the nature of collectives, per se, that would rule out their having the organization necessary (and sufficient) for phenomenal consciousness. So, Schwitzgebel concludes, the United States, considered as the aggregate of its 300 million citizens, is probably conscious.
So what do I think of the argument? I agree with Schwitzgebel that, for all we know, group entities could be conscious. We don’t know what it is about ourselves that produces consciousness, so it would be foolish to flat out deny that aggregates of conscious beings could be conscious. And, of course, if the US is conscious, we – the conscious beings that make it up – wouldn’t think its thoughts or feel its pains. (Otherwise there would be a lot of neurons in pain.)
Nonetheless I don’t think that materialism suggests that the US is in fact conscious. In the first place, I think Schwitzgebel credits the US with much too sophisticated an ability for self-monitoring and behavioral control. So I doubt that it in fact has the organization required for consciousness. Of course, maybe I just can’t see the forest for the trees. (After all, if Schwitzgebel is right my perspective isn’t much better than the lowly neuron’s.) Maybe from the vantage point of outer space its behavior looks strikingly coherent and intelligent. But the more general point is this: no remotely plausible materialist explanation of consciousness is in the offing.
The idea that consciousness depends on a particular kind of functional organization is not much more than a dream at this point. And in the absence of a concrete proposal, claims that this or that collective entity is conscious are just idle speculation. When we have a proposal on the table I am prepared to be surprised.
3:AM: And when you’re not mind bombing the mind, are there any books, films, art, music that have been enlightening? I wondered if sci-fi would be of interest given your work, or is that just me imposing a stereotype?
FE: This may sound like a bit of a cliché but I love the movie Blade Runner. Ridley Scott did a fantastic job of distilling what is philosophically interesting in Philip K. Dick’s book Do Androids Dream of Electric Sheep? – technical and ethical issues concerning the possibility of building an artificial person – while leaving out the problematic, mystical stuff. And the movie is visually so beautiful. But I am not generally a big fan of sci-fi. I guess it’s a matter of temperament. I find the inconsistencies in a lot of time travel stuff, for example, distracting and irritating.
To be honest, I don’t see much connection between my philosophical work and the arts and literature. I wish I had more time to read fiction; except when I’m on vacation I restrict myself to short stories (Alice Munro, Haruki Murakami, and Jhumpa Lahiri are some of my favourites). I have eclectic tastes in music: I love jazz, chamber music, and quite a bit of hard rock. But I don’t find any of this enlightening for my work; it’s just relaxation and recreation. In that vein, I also enjoy racing dinghy sailboats year-round.
3:AM: And finally, for the fellow mind bombers here at 3:AM, could you recommend five books that’ll get them further into your world?
FE: Robert Matthews’ The Measure of Mind criticises the representationalist view of beliefs and desires and develops a measurement-theoretic alternative. Fred Dretske’s Explaining Behavior: Reasons in a World of Causes is a classic and very readable. Michael Strevens’ Depth is a detailed account of when and how scientific explanations lead to genuine understanding. Margolis, Samuels, and Stich’s edited collection The Oxford Handbook of Philosophy of Cognitive Science has ‘state of the art’ papers on the most important current issues. Still relevant is Wilfrid Sellars’ Empiricism and the Philosophy of Mind. Written in the 1950s, it argues that perceptual experience is thoroughly conceptual, develops an early version of the language of thought thesis, and introduces the idea of commonsense psychology as an explanatory theory.
ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.