Interview by Richard Marshall.
Emma Borg's interests lie mainly in Philosophy of Language (particularly the semantics/pragmatics divide) and Philosophy of Mind and Cognitive Science (particularly issues of concepts and social cognition). Here she discusses why we need a theory of meaning, Davidson, why this isn't linguistics, whether philosophy of language is an applied discipline, whether there are grounds for selecting a correct theory of meaning, intentions and meaning, mirror neurons, minimalism, the importance of Grice, local pragmatic effects, saying, implying, asserting, and internalism and externalism.
3:AM: What made you become a philosopher?
Emma Borg: I was a child who asked a lot of ‘why?’ questions, so there was probably always a fair chance I’d end up liking philosophy, but I also remember two events which, in retrospect, maybe sparked an interest in philosophical issues. First, I remember I used to have a pretty much constant internal monologue giving a running commentary on life and I guess I just assumed that that was what it was like for everyone, but then one time chatting with school friends I discovered that they didn’t have the same kind of constant internal narration (or at least, they said they didn’t) and that came as a real shock to me. So I guess that might be where my fascination with the mystery of other people’s minds started. Second, I recall that we were once given (as a detention I think) a pretty philosophical essay question: ‘If society gets the criminals it deserves, why aren’t all prisons palaces?’. It’s not a great question (since the antecedent seems fairly dubious) but I remember coming back to think about it long after detention finished.
However philosophy wasn’t my first choice after school. I initially got into King’s College London to read English but I didn’t like it at all – there just didn’t seem to be any wrong answers; it all just seemed like a matter of opinion. So then I thought about trying Art History, but was told I’d need to study German and Italian in the first year and I didn’t fancy that. The only other A-level I had taken was Ethics and Religious Studies, so finally I went to the KCL Philosophy Department and asked if I could try Philosophy. They said yes, and as soon as I went to lectures I fell in love with the subject. It was just the right combination of the creativity and writing of an arts subject with the rigour and objectivity of a science, where you got to try to ask and answer the most fundamental questions possible. And somehow I’ve been allowed to carry on with the subject for way longer than I originally expected.
3:AM: Why do we need a theory of meaning? What does it do for us? And why is this philosophy not linguistics?
EB: Asking why we need a theory of meaning seems like an obvious and good place to start, but unfortunately deciding what work a theory like this should do for us is far from uncontroversial. One possible answer is that we might want such a theory to tell us what meanings are – where maybe we think of meanings as kinds of things (so that, if we were to list things, it might go ‘dogs, cats, people, meanings…’). This is sort of what you get with Fregean senses – abstract, third-realm things that sentences refer to and which are responsible for making those sentences mean what they do. Quine talked about this kind of idea as the ‘myth of the museum’, as if meanings could be exhibits in a museum that could be pointed at and catalogued.
As both Quine and Davidson pointed out, thinking about meaning in this kind of way doesn’t ultimately seem very helpful if what we want to explain is how ordinary speakers actually manage to use and understand natural language. The problem emerges when we realise that language has these amazing properties – it’s what we call ‘productive’ and ‘systematic’. To get a sense of this, here’s a sentence (let’s call it ‘S’) that I’m going to guess you’ve never actually come across before: (S) “Every evening my pet chinchilla likes to eat seven blueberries”. Even though you’ve probably never come across this sentence before, I can predict that you won’t have much trouble understanding it.
On the museum model of meaning, the explanation of your understanding would be that you have, stored somewhere in your mind, a massive list which contains all the possible sentences of English paired with the right meaning entity. When you understand S you run through this list till you find S and recover its meaning. However, this picture doesn’t seem very compelling – for a start, utterance understanding seems pretty much instantaneous, whereas running through a massive list surely takes time. More worryingly, it doesn’t seem that the storage capacity of our brains could be big enough to hold a discrete entry for every possible natural language sentence because it turns out there are an indefinitely huge number of them. For instance, we can create novel sentences at will by iterating expressions like ‘the father of’ (so we move from ‘the father of Aristotle was Greek’ to ‘the father of the father of Aristotle was Greek’), or conjoining independent sentences together using words like ‘and’, or doing a bunch of other repetitive things (this is to highlight the productivity of language). So we’d need indefinite storage space in the mind for a list of all possible natural language sentences. Furthermore, this list-model misses the intuitive point that how we understand novel sentences is by understanding something about how they are made up – namely, understanding the meanings of the parts of the sentence (roughly, the meanings of the words it contains) and the way they are put together. So, if someone understands the sentence ‘Marge loves Homer’ we can predict that they will also understand the sentence ‘Homer loves Marge’ (that’s to highlight the systematicity of language understanding).
So Davidson’s big insights were, first, that a theory of meaning should be an account of the information which would enable a speaker to understand and use a language (rather than an account of the kind of thing meanings are) and, second, that (given the productivity and systematicity of language) a theory of meaning for natural language must be ‘compositional’ – it should give an account of the meaning of a whole sentence in terms of the meanings of the parts and the way they are put together. I belong to this Davidsonian tradition, so my take on the work a theory should do for you is similar: a theory of meaning for a language should capture the information that we have in our minds that enables us to understand language.
But I’ve also pushed for what I call ‘minimal semantics’ and part of why I call my approach ‘minimal’ is that I think we should adopt a minimal job description for a theory of meaning: a theory of meaning should explain how the meanings of complex expressions are determined given the meanings of their component expressions, together with those expressions’ mode of composition, but it shouldn’t have to explain other things people have sometimes wanted from a theory of meaning (like telling us about what things there are in the world – i.e. a role for a theory of meaning in settling metaphysical questions, something Strawson advocated, or how we know about certain bits of the world – i.e. taking a theory of meaning to have epistemic repercussions in the way Russell or Evans expected). Perhaps more controversially, I’ve also suggested that a theory of meaning should not be required to explain our incredible communicative capacities. That is to say, a theory of meaning – a semantic theory – should capture the literal content of expressions and explain productivity and systematicity, but there will still be lots of contents we communicate when we speak which go beyond what a semantic theory can explain. I’m guessing we’ll return to this below, as it’s a pretty core element of my research.
Finally, why is this philosophy and not linguistics? Well, a first point to note is that I think of myself as an interdisciplinary researcher – I read (and learn a lot from) stuff by linguists and psychologists and others, and I think philosophy benefits from having a wide evidence base and often should be informed by developments in these other areas. Secondly, there are a lot of linguists who do very much the same kind of thing I do (like Deirdre Wilson and Robyn Carston, to name but two), so I also think that asking about which discipline something belongs to in these borderline areas is a bit invidious – if the work is good, I guess I don’t really think it matters which department someone sits in. Thirdly, though, having said all that, I still do think of what I do as philosophy and not as linguistics and it does matter to me to be amongst philosophers. There is a general way of thinking about things – of asking questions and looking for answers – that is, I think, special to philosophy and it’s what I love. When I started at the University of Reading the department had a strong ethics orientation, but all the ethicists were willing and very able to interrogate my work because the way of going about things (at least in analytic philosophy) is broadly the same, whatever the precise topic. Also the kinds of questions I’m interested in come before many of the questions in linguistics really get going. So, for instance, the above question of what we need a theory of meaning for isn’t something you’ll find much discussed in linguistics.
3:AM: Is philosophy of language an applied discipline? Are there grounds for selecting a correct theory of meaning?
EB: I’ve just written something on this in The Blackwell Companion to Applied Philosophy where I argue that philosophy of language (at least as I construe it) is an applied discipline. This is for two reasons: first, and most obviously, some philosophy of language has a particularly applied flavour as it deals with some of the most charged or contentious forms of language. Work on slurs and racial terms, the understanding of pornography in terms of its silencing effects, understanding political propaganda, the difference between lying and merely misleading – all these kinds of topics have the sort of social repercussions that are the hallmark of an applied discipline, but they are also ones where the contribution of philosophy of language is clearly central.
Second, and maybe less obviously, I also think of philosophy of language as applied in a more general sense because of the kind of evidence base it takes (as mentioned above). So my kind of philosophy of language is open to evidence from neuroscience, psychology and linguistics in a way which, I take it, makes it an applied discipline. So I think that one of the grounds for selecting a correct theory of meaning is ‘is this the theory that captures what we actually have in our mind/brains that allows us to understand language?’.
As a caveat, I should say that I do think we need to be extremely cautious in interpreting data from these other fields, thinking through, first, how robust the data are, and, second, exactly what they tell us (so, for instance, I’ve spent a reasonable amount of time recently arguing against what I take to be overblown claims in philosophy of mind made on the back of the discovery in neuroscience of so-called ‘mirror neurons’), but still I like the idea that what we as philosophers of language claim should, in principle, be open to empirical assessment. One of the things I said in Minimal Semantics was that my kind of approach to semantic theorising looks kind of appealing because it fits with a modular view of our linguistic cognitive capacities. Now I don’t know whether modularity will turn out to be the right way to think about the mind or not, and I take it that that is a question which other, even more applied disciplines will help us to answer. So ultimately I think the minimal approach will be something that can be assessed in light of empirical evidence, though until we have a better account of just how to move between descriptions of neural activity and descriptions of intensional contents, showing how brain studies relate to philosophical theories is always going to be tricky.
3:AM: Are intentions and conventions necessary requirements for any semantics? Would that mean that a non-intentional computing system generating what looked and sounded like English sentences on a forever inaccessible and lifeless planet would not actually be generating language? Why is that better than saying that it was producing language that no one was able to access?
EB: I do think intentions are necessary for semantics: if there weren’t intentional beings like us with a practice of using the visual pattern ‘d’’o’’g’ to stand for or refer to dogs then that visual form just wouldn’t mean dog. If the wind on this lifeless and inaccessible planet stirred the sand to leave an impression that looked like the English word ‘dog’, it wouldn’t really be an instance of the word (unless we came along and interpreted it in that way), it would just be a shape. Your idea of a computer generating sentence-like things seems a bit more systematic than my wind case, but still I don’t think it would manage to get us meaning (the worry here is kind of reminiscent of Searle’s Chinese Room thought-experiment, where what we might get is the syntax of language but not the semantics). So say every time a certain stimulus was present (say, a dog) your machine was led to produce a certain output (say, the written form ‘dog’); then we’d have evidence for a causal relation between dogs and tokenings of the form ‘dog’. But if the system producing the forms was non-intentional, I think this still wouldn’t give us meaning (compare the relationship between the level of petrol/gas in a tank and the display on a gauge) until someone came along and took the outputted sign to be a representation of the distal cause. It’s that practice – that use of signs to stand for things – that brings meaning into the picture.
On the other hand, I should say that the precise role accorded to intentions and conventions is a moot point. Stephen Neale organised a great workshop on this topic last year – he’s someone who thinks intentions have an absolutely crucial role to play. Whereas I’m less committed: I think intentions and conventions matter for getting the ball rolling – no language without intentional practices – but still I think sentences have meanings which are independent from the intentional states of current users. So if I say ‘Current political relations between the US and North Korea are strained’, this sentence has a literal content which is recoverable without accessing the thoughts I had which led to my utterance of this sentence. The way I put this in Pursuing Meaning is that, according to minimal semantics, understanding a language is a matter of word-reading and not mind-reading. So although past practices matter and current practices can, over time, lead to semantic change, what counts (according to me) for the semantic content of a sentence at a particular point in time is not the intentional states of the speaker.
3:AM: Do mirror neurons help us to understand intentionality, in particular, the access to the goals of others and mind reading and in so doing help us understand our ability to understand what others mean?
EB: Mirror neurons are neurons which fire both when an agent performs an act (say, picking up a cup) and when an agent sees that action performed by another. The existence of mirror neurons is clearly fascinating and the question of why the brain has neurons which behave in this way is compelling, but still I think the debate around mirror neurons is a classic example of how not to move from discoveries in neuroscience to claims in philosophy. So if you look at the early literature surrounding mirror neurons you’ll find the claim that mirror neurons provide the neural basis of our access to the mental states of others and that their discovery will do for mindreading what the discovery of DNA did for biology.
I’ve been very sceptical about these claims, because ultimately mirror neurons respond to actions and actions underdetermine intentions (crudely, I might pick up the cup because I want to drink from it or because I want to look at it, but my actions might be identical). Advocates of mirror neurons think this objection isn’t right because at least some mirror neurons have a less direct relationship to action (so some mirror neurons seem sensitive to the goal of an action rather than the precise way an act is performed), but I don’t think their response works and so I’ve argued mirror neurons can’t help us to understand intentionality.
Instead (as I’ve argued recently) I think what they get us is a kind of statistical behaviour-tracking. My current thinking is that mirror neurons may be a way the brain stores information about which actions are most likely to follow a perceived action in a given context (so they ‘tell us’ to expect a movement of a grasped cup towards the face in a context which makes drinking likely). But this isn’t mindreading, it’s the kind of statistical behaviour-tracking which a non-intentional system could engage in. However it may still be something which is useful in allowing us to create and enjoy the kind of complex social world that we do (I talk about this more here). Despite the usefulness of this kind of behaviour-tracking, though, I don’t think it can help in understanding what others mean, because that (at least where we want to know about what someone is communicating rather than literally expressing) will require proper, full-blown access to intentional states, not merely an encoded knowledge of statistically likely next actions.
3:AM: The approach you’ve favoured for some time is a minimalist approach to a theory of meaning. Could you sketch what this kind of minimalism claims? Why don’t you think that a semantic theory should answer epistemic or metaphysical questions or explain communicative skills?
EB: Minimalism claims that every well-formed declarative sentence yields a truth-evaluable content (a proposition) which is fully determined by that sentence’s syntactic structure and lexical content: the meaning of a sentence is exhausted by the meaning of its parts and the way they’re put together. It’s only this propositional content which, according to minimalism, a semantic theory should concern itself with. Minimalism lies in opposition to what is usually called ‘contextualism’, which claims that semantic content includes more than just lexico-syntactic content since it also includes rich information drawn from the context of utterance. To give an example: a minimalist would claim that the sentence ‘Jill took out her key and opened the door’ expresses a complete minimal proposition – Jill took out her key and opened the door – which gives the semantic content. The sentence is thus made true by Jill opening the door in any way at all (e.g. kicking it down rather than using the key). A contextualist, by contrast, would claim that the semantic content (in the right context) is some more contextually enriched content, such as Jill took out her key and opened the door with her key. This might seem like a tiny local debate, but some big things turn on it (like how we should model the cognitive capacities underpinning language understanding, and what we should take speakers to be literally committed to by their utterances).
Minimalism faces two main problems. First, some well-formed sentences don’t seem to yield full propositions; that is to say, they don’t seem to make claims that can be assessed as true or false (consider ‘Jill is ready’ – ready for what? – or ‘Paracetamol is better’ – better than what?). Secondly, even if we ignore the first problem and maintain that (at least in general) well-formed sentences could yield complete propositions just on the basis of their words and structure, still there doesn’t seem to be any explanatory role for such minimal propositions to play, because what we really care about is what speakers communicate. Who cares that the sentence ‘There is nothing to eat’ could express the proposition that there is nothing to eat (full-stop, as it were, i.e. in some universal domain) when what speakers who use this sentence communicate is always some much more contextually informative or appropriate content such as there is nothing I want to eat in the house? A large part of my adult life has been spent trying to show how these two objections to minimalism aren’t really any good.
Assuming my responses work and there are such things as minimal contents and there is robust explanatory work for these contents to do, then this gives us a quite particular model of what semantics is: it’s about capturing the literal meaning of sentences – the minimal propositions sentences express – and thus what someone is strictly committed to by the sentences she utters. Minimalism respects the need for a theory of meaning to be compositional, in Davidson’s sense, and thus it would stand a chance of showing how we are able to understand an indefinite number of novel sentences. I think that would be a significant explanatory achievement. But of course it rather pales in comparison with the other things that people sometimes think a semantic theory ought to do for us, like capture epistemic or metaphysical facts. However I think a degree of modesty on the part of semanticists here is appropriate – language matters but it is quite unclear to me why we should think that semantics alone should be capable of explaining these further facts. I just don’t think telling us about the contents of the world or how we know about them is something that a semantic theory could be capable of (at least while it respects the need to capture putatively more boring facts like productivity, etc.).
3:AM: Can you sketch why you think a minimalist account of Grice’s distinction between what is said and what is implicated is better than rival accounts?
EB: Minimalism claims that semantic content is context-independent, save where there is something in the lexico-syntactic elements of the sentence which explicitly requires a contextual input. So, for instance, the referent of an utterance of ‘I’ makes it into the semantic proposition because ‘I’ has a lexical entry which specifies the need for this element. (This clearly raises a further question, which theorists in this area have spent a lot of time discussing – namely, exactly which expressions are lexically context-sensitive? But I won’t go into that minefield here.) This allows minimalism to follow Grice’s account of what is said by a sentence very closely: what is said by a sentence is the content fixed by lexico-syntactic elements, together with a set of relatively well-behaved contextual processes such as reference determination and disambiguation. Everything else, then, is pragmatically conveyed content according to the minimalist: when you hear an utterance of ‘Fred got married and had children’ as conveying Fred got married and then had children, the minimalist claims that you are picking up on pragmatically conveyed content, not purely semantic information. I think that, unlike some other accounts, this gives us a robust account of what qualifies as semantic and what counts as pragmatic. So, following on from the previous question, I’ve tried to argue that we just can’t expect a semantic theory to capture communicated content because communicated content isn’t well-behaved, or tractable, enough to yield to the kind of systematic treatment a semantic theory requires.
Say I utter the sentence ‘John’s cart is red’. I might succeed in communicating any of a vast number of propositions – from the minimal one right through to ones which depend massively on context, e.g. The cart John is pushing is the same colour as a Ferrari. I argue that there is no legitimate way to select where to draw the line in this huge range of contextually more fitting propositions, treating some as semantically conveyed and others as merely pragmatically conveyed. So everyone agrees (I think) that a content mentioning a Ferrari could only ever be an implicature of an utterance of ‘John’s cart is red’, but what about a content which specifies the relationship between John and the cart? Why should spelling out ‘John’s cart’ as the cart John is pushing count as semantically rather than pragmatically conveyed? And if it is treated as part of the semantics, what should we say about the cart John is currently pushing outside the mall, or the cart John is pushing really hard outside the mall on the sidewalk near the Potomac river? Are these semantically or pragmatically conveyed contents? I’ve argued that there is no good answer to these kinds of questions – once you let a little bit of syntactically unmarked pragmatic content into your semantics, the flood-gates are opened and there is no way to stop it all coursing in. And the result is a non-tractable kind of semantics. So, I’d claim that the minimalist model of what is said versus what is implied is preferable to the contextualist version because it is genuinely capable of keeping control of semantic content.
3:AM: How should we understand the notion of ‘what is said’ when trying to work out whether pragmatic effects are global or local?
EB: A local pragmatic effect is one which operates at the level of a word or phrase, before a full sentence meaning is constructed. On the other hand, an effect is global if it operates on a full sentence meaning. So, for instance, hearing ‘some’ in ‘Some of the class are here’ as meaning ‘some but not all’ seems like a local effect (it occurs reliably whenever a typical English speaker hears ‘some’ regardless of the sentence in which the term is embedded), whereas hearing ‘Some of the class are here’ as meaning that we can head to the coach now is a global effect that occurs only once the meaning of the whole sentence has been calculated. But I think there is a risk of muddying the waters when thinking about things like local vs. global effects, because these are processing notions: they are asking about the point in speaker comprehension where an appeal to the context of utterance, i.e. to wider pragmatic understanding, is made.
However I don’t think that minimal semantics should be understood as a claim about on-line processing: I’m quite happy with the idea that the output of semantic theorising might be made available to the wider mind (as it were) of the interpreter in a piecemeal fashion. When I’m engaged in a live communicative act I think it may well be the case that I start pragmatic interpretation before the speaker has got to the end of her sentence (so that there can be both local and global pragmatic effects), e.g. hearing ‘and’ as and then even before the speaker has got to the end of her sentence. The point about modularity mentioned above – whereby I think that semantic interpretation is the result of a modular, computational process – doesn’t, for me, entail that this kind of sub-sentential role for pragmatics is ruled out. Rather I think what modularity demands is that there is no input from pragmatic processing back to semantic understanding. I claim that, left to its own devices, the semantics module is guaranteed to arrive at a full propositional content for every well-formed sentence input to it, yet I’m happy to allow that sometimes an agent doesn’t consciously access that full proposition (e.g. because attentional resources are directed to pragmatic processing before semantic understanding is complete). So, to get back to the question: for me, the notion of ‘what is said by a sentence’ is one that should be understood in terms of the kind of processing resources involved (i.e. discrete, deductive computational resources involving lexical and syntactic understanding alone vs. holistic, inference-to-the-best-explanation pragmatic processing), while the local vs. global pragmatic effects debate is a matter of the point at which pragmatic processes are appealed to during the on-line processing of a communicative act. So for me the two notions somewhat cross-cut each other.
3:AM: A standard objection to minimal semantics is that minimal contents are explanatorily redundant. Why do you think this is mistaken?
EB: So I used to think that the most important objection to minimalism concerned the fact that some well-formed sentences seem to fail to express complete propositions (sentences like ‘Jill is ready’, say), but I’ve come to think recently that actually it’s this objection about explanatory redundancy that really puts people off minimalism. Why on earth, non-minimalists object, should we accept that someone who says ‘Fred got married and had children’ literally expresses the proposition that Fred got married and had children when everyone hears the speaker as saying that Fred got married and then had children? And as for this example, so for a vast range of other cases. It is obvious that what we really care about is what speakers are trying to communicate to us, so what’s the point of a semantic theory that deals with uncommunicated minimal contents? What possible explanatory work could such contents do? I’ve recently had an extended go at answering this worry here, and I’ve appealed to what seems to me a pretty intuitive idea of what someone is strictly committed to by their utterance and what they are merely conversationally committed to, and this links to important notions around deniability and the difference between lying and merely misleading.
I think the issues around this topic are fascinating, and I’ve been thinking more recently about the connections between an approach like minimalism and things like statute analysis in the law, but I guess I should say here that the reason it’s taken me a while to get round to really thinking about this objection is that it’s just never seemed very compelling to me at all. Look at what kids do with language – they are adept at exploiting the difference between what someone literally committed themselves to and what they merely pragmatically conveyed, even where the pragmatic content arrives almost unnoticed for adult listeners. So when my kids were little and I said something like ‘You should give some of that cake to your brother’ they would routinely offer up a crumb rather than the contextually salient ‘reasonable amount of’, or, told to switch off the TV but still watching it an hour later, they’d note “You just said to turn the TV off, you didn’t say when to do it”. Jokes, white-lies, hyperbole, irony, loose talk – all sorts of things we do so easily with language rely on a really firm distinction between what the sentence means and what we communicate in context when using it. So the idea that minimal contents might lack an explanatory role has always just seemed really alien to me.
3:AM: How does this help us understand better the Gricean distinction between saying and implicating and how should we think about the philosophical notion of assertion?
EB: Hopefully what I’ve said above helps to answer the first part of this question a bit – I think we should understand the distinction between what a sentence says (not a mode of expression I really like, by the way – it comes from Grice but I think ‘saying’ has too many pragmatic associations to be really helpful here, so I usually prefer to talk about ‘sentence meaning’ as the topic of semantics, rather than Grice’s ‘what is said’) and what is implicated in terms of kinds of cognitive processing. If a content is recoverable on the basis of lexico-syntactic content alone, without rich appeal to wider contextual knowledge, then it’s semantic; if it requires pragmatic input (which is not syntactically marked) then it’s pragmatic. Within the massive category of ‘pragmatically induced communicated contents’ some count as full-blown Gricean implicatures and some are the kind of only slightly pragmatically enhanced propositions Sperber and Wilson termed ‘explicatures’. But I think any distinction between explicatures and implicatures is a fundamentally blurry distinction amongst contents of the very same kind, not a distinction between different kinds of contents (i.e. semantic versus pragmatic).
Then I think it is a really interesting question how these notions tie up with the question of assertion. It came as a bit of a revelation to me, when I started really looking at the literature on assertion, that much of it just assumes it’s obvious what is asserted by some utterance. So all the action recently has been around what kind of norm assertion involves (e.g. to assert p must one know p, have a justified belief that p, etc), but it has just been assumed that we know what is involved in asserting p (as opposed to say p*) in the first place. But the debate minimalists and contextualists have been having shows that it’s really not clear what gets asserted by a given utterance: when I said “Fred got married and had children” what exactly did I assert? Just that he has these two properties – of being married and having children, or that there was a temporal order to the events? In recent work, I’ve tried to argue that assertion is best understood as a social rather than a purely linguistic notion, which rests on the degree of culpability we take a speaker to have for a given content. Understood in this way, I think we should allow that speakers can assert either minimal contents or explicature contents, depending on the kind of context they are in. So in a court of law we might take a speaker to assert only minimal contents, but in more relaxed environments we might count someone as genuinely asserting some kind of slightly pragmatically enhanced content (so that I really could assert that Fred got married and then had children, even though this is, I claim, a pragmatically enhanced content). Implicatures, then (the propositions which lie at the far end of the spectrum of pragmatically enhanced propositions) are never asserted, they can only be implied. But I think it is worth noting that again I don’t think this claim should be understood in terms of on-line processing. 
I think there may well be ‘direct access implicatures’ – contents which qualify as implicatures but which are nevertheless immediately available to interlocutors, rather than arrived at via a prolonged process of Gricean derivation. So ‘what is asserted’ doesn’t, on this model, equate to ‘heard first’ content.
3:AM:You say a semantic minimalist doesn’t have to be a semantic internalist. What is an internalist in this context? Why do people suppose minimalists are internalists, and why do you disagree?
EB:Internalism and externalism are pretty vexed terms in philosophy, so we should all be a bit wary of them, but the difference I’m interested in is between contents which make contact with the world (and which thus would be amenable to truth-evaluation) and contents which don’t. So some really smart people have argued that, to the extent that we can talk about semantics for natural language at all, the kind of properties you need to appeal to are all to be found within the agent’s own mind (see Chomsky or, following him, Paul Pietroski, for this kind of line of thought). On this type of approach, what we need is an account that captures the internal behavioural properties of the language, e.g. showing how expressions relate to one another, capturing properties like the kind of complement clauses or arguments expressions can take. It is then only language in use that makes the connection between language and the world, so only at this point that truth-evaluability arises (this idea obviously has roots in Strawson’s work).
Now it looks prima facie as if this kind of approach might be one which is attractive to a minimalist, since she too wants to claim that semantic analysis is underpinned by an internal cognitive module, where that module is sensitive to syntactic, structural properties of the language and not to properties of use. Furthermore, Chomsky has argued that externalism about lexical contents – thinking that lexical contents pick out features of the world – is doomed to failure, since lexical items don’t respect metaphysical boundaries. So the word ‘London’ can be used in sentences where its reference must be a concrete object (‘London covers an area of over 500 km²’) and where its reference must be its inhabitants (‘London is made up of many different ethnic communities’), etc. However, as noted above, I belong to a fairly old-fashioned tradition in philosophy of language which thinks that truth is a key notion in understanding how language works. So although I usually put things in terms of propositions, I think of these as having truth-conditions, as being truth-evaluable, and my view of semantics is that it deals with contents which make claims about the world, which are truth-evaluable independent of the way in which a particular speaker is using the sentence on any given occasion. So I want to resist any moves towards internalism in semantics. And part of the problem I have with the internalist move is that I think the constraints that Chomsky puts on the things we can find in the world (i.e., that the only really respectable objects are the ones alluded to in physics or chemistry, say) are simply too strong. Lots of perfectly respectable objects are interest-relative, but that doesn’t make them unreal in the way Chomsky seems to suggest.
3:AM: Are there five books that you can recommend which will take us further into your philosophical world?
EB:
I think everyone should read Paul Grice’s Studies in the Way of Words. It’s one of the few books I actually bought as an undergraduate and I loved reading it. Even if some of the examples are a bit outdated now, the distinctions he draws and the arguments he presents are just beautiful. What I love about philosophy is its ability to take a phenomenon that just seems totally unproblematic and basic at first glance, and expose the complexity and the difficulties inherent to it, and that is what Grice does here for language use. For instance, his classic reference-writing example is such a clear case of saying one thing and meaning another, but without Grice highlighting it for us, I think we’d probably just pass by without registering what’s going on or noticing how hard it is to explain or model.
Jerry Fodor was a huge influence on me (as on so many other people, as the testimonies that poured out in response to his recent passing demonstrate) and, unlike so many other philosophers, he’s just a joy to read. He’s rarely fair to his opponents, but you can’t read his work and not share his overwhelming sense of excitement about the kinds of questions he’s addressing. So I’d definitely recommend The Language of Thought, The Mind Doesn’t Work That Way, or just about anything by him.
There’s probably a conflict of interest in recommending books by friends, but both Robyn Carston’s Thoughts and Utterances and Francois Recanati’s Literal Meaning are brilliant books that provide wonderful introductions to the debate about the semantic/pragmatic divide. Both of them somehow manage to arrive at non-minimalist views, but ignoring that obvious error, these are both books that served to define the landscape and had a huge role in shaping my thinking about these issues.
Finally, and just for something a bit different, I’d have to recommend some Plato. I was taught Greek philosophy by Richard Sorabji at KCL and his enthusiasm for it was totally infectious, and there is no doubt that if you want someone who showed why we should be worried about phenomena that seem initially totally unproblematic, Plato is your man. Also, when I first arrived at Reading (straight from grad school) the very first class I was asked to teach (to about a hundred first-year students) was on poetry in Plato’s Republic – something I knew almost nothing about. Somehow I survived that (though I’m not so sure about the students), but stranded on a desert island I’d definitely like to go back to Plato and see if I couldn’t find something more interesting to say than I probably had back then.