Interview by Richard Marshall
'Roughly speaking, the Great Divide in metaphysical debates about laws of nature is between Humeans, who think that laws are merely descriptions, and non-Humeans, who think that laws govern. (This oversimplifies the situation because there are non-Humeans who maintain that metaphysical powers reside not in laws but in properties/dispositions.)'
'The Past Hypothesis says (roughly) that the universe started in a low-entropy state. It's proposed as a solution to the problem of the arrow of time, one of the hardest problems in the foundations of physics. If the dynamical laws do not distinguish between the past and the future, what explains the irreversible behavior of many objects in our environment, such as the melting of ice cubes, the decaying of flowers, and the mixing of cream in coffee? No one has ever seen, for example, the spontaneous separation of cream and coffee in a cup.'
'David Albert and Barry Loewer call the package that includes the dynamical laws, the Past Hypothesis, and the Statistical Postulate the Mentaculus. (It is meant to be "the probability map of the universe" and is named after a notebook that appears in the Coen brothers' movie A Serious Man.) My proposal, which I call the Wentaculus, is inspired by the Mentaculus theory. The Past Hypothesis is a constraint that limits the possibilities of the initial state to a small but infinite set. I propose an alternative, which I call the 'Initial Projection Hypothesis,' that further narrows down the choice to a unique one.'
'As far as I know, the Everettian Wentaculus is the first realistic physical theory that achieves strong determinism. If strong determinism is shown to be possible and can be achieved with sufficient simplicity, we have to reconsider several issues in philosophy of science. The first issue concerns naturalness in metaphysics. An influential argument due to David Lewis, to the effect that our definition of natural laws requires the notion of perfect naturalness, would be unsound if strong determinism is possible.'
Eddy Keming Chen's primary research interests are philosophy of physics, philosophy of science, and metaphysics. He also has interests in philosophy of mind, decision theory, formal epistemology, philosophy of mathematics, philosophy of religion, and Chinese philosophy. Here he discusses Humean and non-Humean laws, minimal primitivism, 'time's arrow', 'the past hypothesis', 'the Wentaculus', 'the Everettian Wentaculus', vagueness and the past hypothesis, the past hypothesis and self-locating probabilities, Bell's Theorem, non-locality, the wave function, surreal numbers and decisions, comparativism and the mental, and the compatibility between science and Chinese philosophy.
3:16: What made you become a philosopher?
Eddy Keming Chen: When I was a kid, I asked too many questions. One day my parents brought home One Hundred Thousand Whys, a popular science book for kids. It explains, for example, why planets move and oceans form. It's an awesome book. But it did not really answer my why-questions. I still did not understand some of the basic issues, such as why two plus three equals five, whether it could be false on other planets, whether aliens would be subject to the same mathematical rules, and how we could be sure our math was right. Of course, it's not the fault of the authors that they did not address such questions. I should have read a different book, maybe something like Philosophy for Kids. For a long time, I struggled in mathematics classes because I got so worked up about those basic issues that I could not bring myself to learn the multiplication tables. Later I had similar questions in physics classes, about what the electric fields really are and what on earth quantum mechanics really means. I had good teachers, but they didn't know what to say about such questions. It was only in high school that I started reading outside the textbooks.
That’s when I picked up some philosophy books by Bertrand Russell. I realized that my questions, albeit formulated in a naive and unsophisticated way, fell under the domain of philosophy. In college, I became a philosophy major and found my intellectual home.
3:16: You're a philosophy of physics expert, and one of the big issues you’ve examined is the question about laws of nature. Humeans and non-Humeans are key positions in the metaphysical debate regarding how we understand these laws. So what do these two opposing views claim, and what metaphysical commitments get tangled up in each?
EKC: I am currently writing a book on the topic. However, laws of nature is one topic that I tried to stay away from in graduate school, because it seemed so intractable. But every time I worked on something else, I got drawn back to it. I realized that many central questions in philosophy of physics, including the meaning of quantum mechanics and the problem of time's arrow, were intimately related to the issues about laws of nature. That's when I started thinking more seriously about laws. It's remarkable that we live in a universe governed by laws. We have devoted significant resources to the discovery of the true fundamental laws—the basic principles that govern the world. The resources were well spent, although the discovery of laws is far from complete. Still, over the last few centuries, we have made significant progress in our understanding of the universe, including the atomic structure, the stability of matter, planetary motion, galaxy formation, and the nature of spacetime, thanks to our better understanding of the laws that govern those domains.
On a day-to-day basis, we benefit from the search for laws: examples include the invention of GPS, smart phones, and many technological breakthroughs. Laws also play important roles in backing scientific explanations, supporting counterfactuals, and enabling predictions. But what kind of things are fundamental laws? Most people believe that laws are different from the material objects they govern. But what is this governing relation? What makes the material objects respect such laws? The questions cannot be settled by experiments or observations, and their answers are controversial. They fall in the domain of metaphysics.
Roughly speaking, the Great Divide in metaphysical debates about laws of nature is between Humeans, who think that laws are merely descriptions, and non-Humeans, who think that laws govern. (This oversimplifies the situation because there are non-Humeans who maintain that metaphysical powers reside not in laws but in properties/dispositions.) Humeans believe that laws merely describe how matter (such as particles and fields) is distributed in the universe, across space and time. In David Lewis's version, the best-system account, fundamental laws are axioms of the best summary of the distribution of matter in the universe, also known as the Humean mosaic. The summary is best when it optimally balances considerations of simplicity and informativeness, among other things.
Ultimately, there is nothing that explains the patterns in the Humean mosaic. It is an austere metaphysics: events happen, and that’s it. A common theme in non-Humean views is that laws govern the distribution of matter; laws are not mere summaries of the events but exist over and above them. By appealing to the governing laws, the patterns are explained. Metaphysical accounts of laws can restrict what physical laws should look like. It is sometimes assumed that the governing view requires a fundamental direction of time: to govern, laws must be dynamical laws that produce later states of the world from earlier ones, in accord with the direction of time that makes a fundamental distinction between past and future. Laws are like the engines of the world, making the universe moment by moment. Let’s call this conception of governing 'dynamic production.’ This influential picture was significantly clarified by Tim Maudlin in his book The Metaphysics Within Physics. Judging from anecdotal evidence, although I don’t know the percentage, many philosophers are attracted to this picture. After all, the passing of time is deeply ingrained in our pre-theoretical conception of the world, and dynamical laws are the most familiar examples of laws. Even some non-Humeans who do not think that laws govern accept the notion of dynamic production. For example, Heather Demarest suggests that the way fundamental dispositions explain the pattern in the universe is by dynamic production in time: later distributions of dispositions in the universe are metaphysically dependent on earlier ones, and ultimately on the initial distribution (at or near the time of the Big Bang).
However, there are many possible candidates for fundamental laws that do not fit in the picture of dynamic production. For example, Gauss's law in classical electrodynamics—one of Maxwell's equations—governs the distribution of electric charges and the electric field in space. It governs what things are like at a time; it is not about how earlier events produce later ones. The Einstein equation, which is generally regarded as the fundamental law in general relativity, is a constraint on the relation between the geometry of spacetime and the distribution of matter. In its usual presentation, it is not a dynamical law; it governs the entire spacetime and its contents. Moreover, it allows spacetimes with closed timelike curves (CTCs), which must be precluded if one regards governing as dynamic production, since they may lead to an event that dynamically produces itself, which is impossible. The Past Hypothesis of a low-entropy condition of the early universe is another example. All of these examples, according to the conception of dynamic production, become unsuitable candidates for fundamental laws.
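In standard textbook notation, Gauss's law reads

∇·E = ρ/ε₀,

relating the divergence of the electric field E to the charge density ρ at each point and each moment: a constraint on what the field is like at a time, not a rule producing later states from earlier ones. The Einstein equation reads

G_μν = (8πG/c^4) T_μν,

relating the geometry of the whole spacetime, encoded in the Einstein tensor G_μν on the left, to the distribution of matter and energy, encoded in the stress-energy tensor T_μν on the right.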
It seems to me that this picture of dynamic production and the emphasis on dynamical laws are too restrictive. When we reflect upon the variety of kinds of laws that physicists present as fundamental, we find many that do not fit in the form of dynamical laws. Moreover, even when physicists postulate dynamical laws, dynamic production does not seem essential to how these laws govern the world or explain the observed phenomena. There are more kinds of laws than the dynamical laws that admit a 'dynamic production' interpretation. Laws are not (necessarily) engines.
3:16: So what’s ‘minimal primitivism’ and does it resolve certain metaphysical aspects of the Humean/non-Humean debate regarding laws of nature?
EKC: Shelly Goldstein and I propose a minimal primitivist view (MinP) about laws of nature that disentangles the governing conception from dynamic production. On our view, fundamental laws govern by constraining the physical possibilities of the entire spacetime and its contents. They need not exclusively be dynamical laws, and their governance does not presuppose a fundamental direction of time. For example, they can take the form of global constraints or boundary-condition constraints for spacetime as a whole; they can govern even in an atemporal world; they may permit the existence of temporal loops. Our minimal view captures the essence of the governing view without taking on extraneous commitments about the direction of time. Moreover, as a version of primitivism, our view requires no reduction or analysis of laws into universals, powers, or dispositions. Because of the minimalism and the primitivism, our view accommodates several candidate fundamental laws, such as the principle of least action, the Past Hypothesis, the Einstein equation, and various symmetry principles and conservation laws. We believe that the flexibility of MinP is a virtue. It should be an empirical matter what forms the fundamental laws take on. One's metaphysical theory of laws should allow for the diverse kinds of laws entertained by physicists. It may turn out that nature employs laws beyond those that are expressible in the form of dynamical equations or that can be given a dynamic productive interpretation.
The metaphysics of laws should not stand in the way of scientific investigations. MinP encourages openness. Let's think about some examples. Suppose Newton's second law of motion F=ma and Newton's law of universal gravitation F=Gm1m2/r^2 express the fundamental laws of the universe. They govern the world by limiting the physical possibilities to exactly those that are compatible with the two equations. The actual world has to be compatible with them. Now, a constraint does not require a fundamental distinction between past and future, or one between earlier states and later states. In fact, in this example, the dynamical equations are time-reversible. For every solution to the two equations, its image under time reversal (the sequence of physical events in reverse order) is also a solution to the same equations. The fundamental laws are blind to the distinction between past and future. Since the concept of governing in MinP does not presuppose a fundamental direction of time, two solutions that are time-reversals of each other can be identified as the same physical possibility. In some cases (such as the above), the constraint imposed by a law can be expressed by differential equations that may be interpreted as determining future states from past ones.
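To spell out the time-reversibility claim in standard notation: the two laws combine, for gravitating particles, into

m_i d²x_i/dt² = Σ_{j≠i} (G m_i m_j / r_ij²) r̂_ij,

where r_ij is the distance between particles i and j and r̂_ij is the unit vector pointing from i towards j. If x_i(t) is a solution, define y_i(t) = x_i(−t). The two sign flips from differentiating twice with respect to t cancel, so d²y_i/dt² equals d²x_i/dt² evaluated at −t, while the right-hand side depends only on the instantaneous positions. Hence y_i(t) satisfies the same equation: every solution run backwards is itself a solution.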
But not all constraints need be like that. Take the Einstein equation of general relativity for example. There are ways of converting the equation into another set of equations suitable for a dynamic productive interpretation. (A famous example is the ADM formalism.) However, they often discard certain solutions. For non-Humeans who take dynamic production as constitutive of governing or explanation, those reformulations will be necessary. For them, the true laws of spacetime geometry should presumably describe the evolution of a spatial geometry in time. In contrast, on MinP there is no need to do so, as there is no metaphysical problem with taking the original Einstein equation as a fundamental law. After all, it is a simple and elegant equation and is generally regarded as the fundamental law in general relativity. We prefer not to discard or modify it on metaphysical grounds.
For another example, consider the Past Hypothesis (PH). It is a special boundary condition that is postulated to explain the direction of time in our universe, such as the one codified in the Second Law of Thermodynamics. We may state PH as this: at one temporal boundary, the universe is in a low-entropy state. We may specify the level of low entropy using various physical properties. Despite being a boundary condition, PH satisfies many of the criteria for being a candidate fundamental law. We think that metaphysical accounts of laws should make room for a boundary condition to be a fundamental law. On MinP, that is no problem at all. Together, PH and the dynamical laws can govern the actual world by constraining it to be one among the histories compatible with all of them. They deem the actual world to be a member of the intersection of two sets: the set of histories compatible with PH and the set compatible with the dynamical laws.
One might still prefer theories with dynamical laws to those without them. MinP allows this preference. Even though MinP does not restrict laws to dynamical ones, it comes with a principle of "epistemic guides" that encourages us to look for simple and informative laws. Dynamical laws may be especially simple and informative. The preference for dynamical laws may be explained by a preference for laws that strike a good balance between simplicity and informativeness. The principle of epistemic guides is a link between epistemically limited agents such as ourselves and mind-independent facts about laws of nature. The status of this principle is open to debate. I am inclined to accept theoretical virtues such as simplicity as fundamental epistemic guides to lawhood. However, it is a delicate issue.
3:16: Much of your thinking regarding the nature of these laws involves your thinking around 'time's arrow', 'the past hypothesis', and 'the Wentaculus', doesn't it? Can you first sketch for us what these three elements of your thinking are and how they are related? So firstly, what's the past hypothesis and how does this link with time's arrow?
EKC: The Past Hypothesis says (roughly) that the universe started in a low-entropy state. It's proposed as a solution to the problem of the arrow of time, one of the hardest problems in the foundations of physics. If the dynamical laws do not distinguish between the past and the future, what explains the irreversible behavior of many objects in our environment, such as the melting of ice cubes, the decaying of flowers, and the mixing of cream in coffee? No one has ever seen, for example, the spontaneous separation of cream and coffee in a cup. These phenomena are captured by the Second Law of Thermodynamics: the thermodynamic entropy of a closed system tends to increase towards the future, until it reaches the entropy maximum. Following Shelly Goldstein's terminology, we may distinguish two parts of the problem of time's arrow: (1) the easy part: if a system is not at maximum entropy, why should its entropy tend to be higher at a later time? (2) the hard part: why should there be an arrow of time in our universe that is governed by fundamental reversible dynamical laws?
The easy part was studied by the physicist Ludwig Boltzmann. Crucial to his answer is the insight that states of higher entropy occupy a much larger volume in the system's phase space than states of lower entropy. However, to solve the hard part of the problem of time's arrow, we need something more. The Past Hypothesis is offered as an answer. (The name "Past Hypothesis" was coined by David Albert, but the idea has a long history and was discussed at length in Richard Feynman's The Character of Physical Law, ch. 5.) The reason there is an arrow of time in our universe is that there is a low-entropy constraint on one temporal boundary of the spacetime and not the other. Without the Past Hypothesis, we would predict that the entropy is higher towards both temporal directions, which would be incorrect. However, the Past Hypothesis by itself is not enough. It is compatible with the universe starting in a "bad" state, one that according to the dynamical equations leads to a history that decreases in entropy. We need an additional assumption of "randomness," according to which the actual state of the universe is not likely to be bad.
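Boltzmann's insight is standardly codified in his entropy formula,

S_B(X) = k_B log |Γ_M(X)|,

where X is the system's exact microstate, M(X) is the macrostate it belongs to, |Γ_M(X)| is the phase-space volume occupied by that macrostate, and k_B is Boltzmann's constant. Since higher-entropy macrostates occupy overwhelmingly larger volumes, typical microstates wander into them, which answers the easy part. The hard part remains because the same reasoning, applied towards the past, would predict higher entropy in that direction too.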
A version of the randomness assumption is called the Statistical Postulate. It posits a uniform probability distribution over all the states compatible with the Past Hypothesis. There are questions about how we should understand this assumption of randomness, and how it may be related to the randomness one finds in quantum theory. David Albert and Barry Loewer call the package that includes the dynamical laws, the Past Hypothesis, and the Statistical Postulate the Mentaculus. (It is meant to be "the probability map of the universe" and is named after a notebook that appears in the Coen brothers' movie A Serious Man.) My proposal, which I call the Wentaculus, is inspired by the Mentaculus theory. The Past Hypothesis is a constraint that limits the possibilities of the initial state to a small but infinite set. I propose an alternative, which I call the 'Initial Projection Hypothesis,' that further narrows down the choice to a unique one.
The key is to exploit the structure of quantum mechanics and use an object called the density matrix, which is sometimes denoted by W. There is a mathematically rigorous way of defining a density matrix, but let me use a metaphor for illustration. Think of each possible wave function as a pixel on a screen, and think of the wave function of the actual universe as a particular pixel marked in red. If we have a powerful microscope, we see every dot on the screen, including the red one. Specifying the location of the red dot requires a lot of information. Now, if we adjust the magnification level and zoom out a bit, we stop seeing individual pixels. At the right level of magnification, we see some pattern emerging. The pattern, being more coarse-grained, can be easier and simpler to describe than the exact locations of individual pixels. I suggest that the coarse-grained pattern suffices to describe the motion of ordinary objects. And the less detailed description is given by the density matrix, as postulated by the Initial Projection Hypothesis. Of course, the metaphor is not entirely right. On my proposal, the density matrix of the universe is not metaphysically derivative of some underlying wave function, where the coarse-grained pattern is emergent from the fine-grained facts. On my view, the universe can’t be described by a wave function but must be described by a density matrix, and the density matrix is the fundamental object that does not emerge from anything else. Nevertheless, the metaphor can be a helpful first step towards understanding the new picture of reality offered by the Wentaculus.
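For those who want the formalism behind the metaphor: in the Wentaculus papers the Initial Projection Hypothesis takes, roughly, the form

W(t₀) = I_PH / dim(H_PH),

where H_PH is the subspace of quantum states compatible with the Past Hypothesis, I_PH is the projection onto that subspace, and dividing by the dimension normalizes the trace to one. (The exact statement is in the published papers; take this rendering as a paraphrase.) Once the subspace is fixed, this density matrix is fixed uniquely, which is how the choice of initial state narrows down to exactly one.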
One may wonder why we should consider the Wentaculus. My motivation was to use it to solve two problems in philosophy of physics. First, since there is a unique (nomologically) possible state, we no longer need a randomness assumption about the initial quantum state of the universe. The only source of probability and randomness comes from quantum mechanics. This approach eliminates the need for two independent sources of objective probability in nature: quantum mechanics and statistical mechanics. In a sense, statistical mechanical probability is reduced to quantum mechanical probability. Second, this approach simplifies the initial quantum state to a level that meets the bar for lawhood. This is based on arguments that the Past Hypothesis is simple enough to be a law. On the Initial Projection Hypothesis, there is a one-to-one correspondence between the Past Hypothesis constraint and the initial quantum state. The latter is no more complicated than the former. This result makes it much easier to be a realist about quantum mechanics than on the wave-function picture. Besides offering solutions to these two problems, the Wentaculus has some unexpected features. For example, it provides a principled way to resolve "fundamental nomic vagueness" and a new example of "strong determinism."
3:16: And what’s the Everettian Wentaculus? Is it a strongly deterministic theory of physics and even if it isn’t should we still assume that strong determinism, even if false, is close enough to reality to be important for some topics in philosophy and physics?
EKC: The Everettian Wentaculus is a version of the Wentaculus. It combines the Wentaculus with the Everett (many-worlds) interpretation of quantum mechanics. Personally, I am more sympathetic with single-world interpretations such as Bohmian mechanics and collapse theories than with Everett. However, it can be very useful to consider the Everettian Wentaculus, because it has some remarkable properties. For example, it achieves strong determinism in a simple way. The concept of strong determinism was introduced, I believe, by Roger Penrose in the 1980s. On my definition, strong determinism means that the fundamental laws of nature allow only one physically (nomologically) possible world. Strong determinism entails but is not entailed by determinism. On strong determinism, the actual world is the only physical possibility, and is therefore physically necessary. This means the fundamental laws, by themselves, can completely explain why the world is as it is. (Strong determinism is different from super-determinism, which requires violation of statistical independence assumptions.)
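To make the entailment claim precise: let Ω be the set of worlds nomologically possible given the fundamental laws. Determinism says that any two worlds in Ω that agree at one time agree at all times. Strong determinism says that Ω = {@}, the singleton containing only the actual world. A one-membered set trivially satisfies the determinism condition, so strong determinism entails determinism; the converse fails whenever Ω contains deterministic histories starting from different initial conditions.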
Strong determinism may appear impossible to achieve in a world like ours; one might have thought that it would make the laws very complicated, because they have to completely specify the world down to its microscopic detail. However, the Everettian Wentaculus shows that strong determinism can be achieved in a simple way. If Everettian quantum mechanics is empirically adequate, we can combine the theory with the Initial Projection Hypothesis, by replacing the universal wave function with the universal density matrix introduced earlier, and interpreting the branches of the density matrix as emergent worlds. Since the universal quantum state is the only fundamental object in such a universe, and since there is only one possible way for it to begin, and only one possible evolution in time, there is only one possible history of the entire universe (multiverse). Hence, on the Everettian Wentaculus, the history of the Everettian multiverse could not have been different on pain of violating the physical laws. This is to be contrasted with the standard picture of Everett, where there is a multiverse but it could have started differently, with many physically possible histories of the entire multiverse.
As far as I know, the Everettian Wentaculus is the first realistic physical theory that achieves strong determinism. If strong determinism is shown to be possible and can be achieved with sufficient simplicity, we have to reconsider several issues in philosophy of science. The first issue concerns naturalness in metaphysics. An influential argument due to David Lewis, to the effect that our definition of natural laws requires the notion of perfect naturalness, would be unsound if strong determinism is possible. Lewis argues (in "New Work for a Theory of Universals") that without naturalness we would have the consequence that every regularity is a law, because some gruesome property F can appear in the laws to summarize everything in the actual world in a super simple way (for all x, Fx). He argues that since that is obviously wrong, we have to posit naturalness. But if strong determinism is true, we should expect that every regularity is a law. When we accept the possibility of strong determinism, we realize the argument should be revised as follows: if we don't appeal to naturalness, strong determinism would become metaphysically necessary. But strong determinism should be metaphysically contingent, so we need naturalness.
Another example concerns counterfactual dependence accounts of causation. If the space of physical possibilities is a single-member set (a set of one world), then on the standard accounts, we have the triviality result that every event causes every other event, because every event counterfactually depends on every other event. How to resolve this is an open question. In any case, I think philosophers of science and metaphysicians ought to pay more attention to strong determinism. We see from the example of Everettian Wentaculus that strong determinism is not a remote possibility that can safely be ignored, but an important one that may well describe the world we live in. Even if strong determinism fails to be true, it is much closer to the actual world than we have presumed, with implications for some of the central topics in philosophy and foundations of physics.
3:16: Another part of the equation is ‘vagueness’ and how it’s part of the specification of the past hypothesis. So how do you approach its ‘fundamental nomic vagueness’ and why might someone think that the vagueness means that the past hypothesis can’t be a fundamental law of nature?
EKC: We associate vagueness with mundane properties such as baldness and tallness. Consider Tom, who is borderline bald. Tom has only 5000 hairs on his head. It is not determinate that Tom is bald, and not determinate that Tom is not bald. Moreover, because of the phenomenon of higher-order vagueness, there can't be a sharp boundary between baldness and borderline baldness. Now, we might expect that vagueness disappears at the level of fundamental physics. In particular, we might expect that the fundamental laws of physics are exact, giving us a completely objective and precise description of nature. But we must first ask: what would it even mean for fundamental laws to be vague? I characterize fundamental nomic vagueness, or vagueness in the fundamental laws, as the existence of worlds that are borderline physically possible. There is no well-defined set of physical possibilities.
We can run sorites paradoxes on vague laws. Start from a world that is determinately physically possible, proceed to gradually make small changes to the world along some relevant dimension, and eventually arrive at a world that is determinately physically impossible. But no particular small change makes the difference between physical possibility and physical impossibility. Moreover, whenever there are borderline physically possible worlds, there are borderline borderline physically possible worlds, and so on. Much is at stake here. For example, higher order vagueness in fundamental laws is a challenge to their mathematical expressibility. Any cutoff in the boundary of the borderline cases would be too precise, and it is hard to see how to model the vague boundary of nomological possibilities in set theory. If some fundamental law is vague, and if vagueness (in particular higher-order vagueness) isn’t completely mathematically expressible, then part of fundamental physics isn’t completely mathematically expressible, which would be a surprising consequence. Many of us assume that every part of fundamental physics can be faithfully and completely described by mathematics. That assumption would be wrong if there is fundamental nomic vagueness.
Can fundamental laws be vague? I think they can, and I argue that the Past Hypothesis (PH) is such an example. If the PH is a fundamental law, and if it is vague, it is an instance of fundamental nomic vagueness. I think there are powerful reasons to think that the PH is a fundamental law and that the standard version of the PH is vague. If we take the PH seriously, we should also take fundamental nomic vagueness seriously. This has several implications. For example, as mentioned earlier, if the PH is part of the package of the fundamental laws of this universe, that package can't be completely and faithfully expressed in mathematics, as set theory is too precise to capture the phenomenon of higher-order vagueness. If fundamental laws are part of the fundamental facts of the universe, fundamental nomic vagueness entails that there are certain fundamental facts that are vague. This is an instance of ontic vagueness, though it is different from the usual cases about the spatio-temporal boundaries of physical objects, as it is vagueness of what is physically possible. One person's modus ponens is another person's modus tollens.
One might think that these are reasons enough to think that the PH is not a fundamental law, or that the PH is not vague. However, there are significant costs to either option, if one is thinking about standard versions of the PH. For example, if one eliminates the vagueness of the PH by fiat, one introduces an objectionable kind of arbitrariness in laws, one that I call untraceability. The basic idea is that there would be more wiggle room in the laws than what is revealed in the world. But no previous theories have that kind of wiggle room. All previous laws are tightly connected to the material ontology in a way that is traceable. That is the key difference between the arbitrarily precise gravitational constant and an arbitrarily precise version of the PH. The issue here is similar to the trade-off between vagueness and arbitrary cut-offs for ordinary predicates such as 'bald.'
3:16: You argue that the new quantum theory of time deals with this dilemma of fundamental nomic vagueness and the consequent untraceable arbitrariness that it builds into the past hypothesis don’t you? So how does this theory of time work to do this?
EKC: The Wentaculus provides a version of (and an alternative to) PH without vagueness, and without arbitrariness. This is because the Wentaculus boundary condition is completely traceable, leaving no wiggle room between the laws and the material ontology. Given the material ontology, there would be no wiggle room in the adjustable parameters of the laws. It is a bit like Humean supervenience, though the two are logically independent. The key idea is that on the Wentaculus, the initial density matrix is both the microstate and the macrostate of the universe at that particular time. It is the microstate that participates in the fundamental dynamical laws, and it is also the macrostate as it corresponds uniquely to the PH subspace. So there is an interesting kind of dualism that is absent in other theories. Because of this, the boundary of the macrostate is no longer arbitrary; it is pinned down by the microscopic history of the universe.
For concreteness, let us consider the Bohmian case. In the Bohmian Mentaculus, the exact boundary of the Past Hypothesis macrostate leaves no trace in the material ontology, because small changes typically won’t make any difference to how particles actually move. In contrast, in the Bohmian Wentaculus, the exact value of the initial density matrix, picked out by the Initial Projection Hypothesis, is traceable. Small changes typically lead to different histories of the universal quantum state and the material particles. The same is true in the Everettian case and the GRW case (a spontaneous localization theory). It is also philosophically interesting for another reason. On the Wentaculus theory, the innovations of quantum theory can be used to eliminate fundamental nomic vagueness without introducing untraceable arbitrariness in nature. A kind of vagueness that exists in a classical universe would naturally disappear in a quantum universe. This stands in sharp contrast to the idea that quantum theory makes reality inherently indeterminate or fuzzy.
3:16: Why does PH need supplementing with self-locating probabilities and as a result of this, why does this cause problems for providing a completely scientific explanation of time's arrow?
EKC: I am generally optimistic about the prospects of the PH to provide a scientific explanation of time's arrow. But I worry about the Boltzmann brain problem. If the universe is large enough and evolves long enough, there will be copies of us in the distant past and the distant future, who are just brains floating in an otherwise structureless soup. If that is too minimal and too skeptical, one can add more environment to them and make them as big as the entire universe at the present moment, with local histories that are vastly different: they grow older in both directions of time. As long as those "bubbles" of spacetime are not as big as the entire history of spacetime, we have a problem. How do we know we are not in one of those bubbles? Is there any principled reason to think that our bubble is special? This is an instance of self-locating uncertainty. Other examples include the thought experiments of Sleeping Beauty and Adam Elga's Dr. Evil. But if we need to postulate some principle of self-locating probability in a biased way, how can that be objective? And how can that be a completely scientific explanation of time's arrow? Can we have a physical law that is about self-locating propositions? Perhaps quantum theory can ultimately resolve these worries, and perhaps not. It is important to get to the bottom of this. But it requires the joint work of philosophers of physics and epistemologists.
3:16: Tim Maudlin says that if Einstein is the greatest philosopher of physics in the first half of the twentieth century then Bell is the greatest in the latter half. I guess everyone knows about Einstein, but Bell less so unless you’re involved in physics and philosophy of physics. So what is Bell’s theorem and the notion of non-locality?
EKC: Bell's theorem is one of the most significant results in the history of physics. Its conclusion is striking and yet its assumptions seem quite innocuous. It requires radical revisions of how we think about the world we live in. Before Bell's theorem, we pictured the world we live in as one where physical things interact only locally in space. An explosion on the surface of Mars will produce immediate physical effects in the immediate surroundings on Mars. The event will have physical effects on Earth only at a later time, and perhaps at a smaller magnitude. We expected the world to work in a local way, such that events arbitrarily far apart in space cannot instantaneously influence one another. This picture is baked into the formulations of classical theories of physics such as those of Maxwell and Einstein. After Bell's theorem, that picture is untenable.
Bell proves that the world is non-local if certain predictions of quantum mechanics are correct. Many high-precision tests have confirmed those predictions over and over. Hence, we should be highly confident that the world is non-local: events that are arbitrarily far apart can instantaneously influence each other. (In relativistic terms, it means that events that are space-like separated can influence each other.) But not everyone is convinced. There are still disagreements about what Bell proved and how general the result is. Some can be traced to misunderstandings about the assumptions in the proof, and others may be due to general disagreements about scientific explanations and the standards of theory choice.
3:16: You look at strategies for avoiding non-locality that intersect with the philosophy of probability – in particular, quantum probability and superdeterminism. What are the issues here?
EKC: Some people have thought that Bell's theorem requires assumptions about classical probability theory. Hence, to avoid the conclusion of non-locality, one can simply reject classical probability in favor of new ways of thinking about probability. This seems to me a non-starter, as Bell's theorem only requires frequencies and proportions that obey the rules of arithmetic. One can replace talk of probability with talk about counting. One can prove versions of Bell's theorem using just facts about how to count. Counting percentages obviously obeys the laws of arithmetic, and there is no obvious or natural way to revise the rules for counting. So we only need classical logic and rules of counting to prove Bell's theorem. Some people have tried to identify some other "weak link" in the proof of the theorem. One purported "weak link" is associated with the assumption of statistical independence, which is roughly the idea that we can draw random samples. Under this assumption, given any collection of photon pairs adequately prepared, and after the experimental setup is completed, we can perform random sampling on the collection and obtain a sub-collection that reflects the same statistical profile as the overall collection and any other sub-collection so randomly chosen.
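For concreteness, consider the CHSH version of Bell's inequality. With two measurement settings a, a′ on one wing of the experiment and b, b′ on the other, and E(·,·) the correlation between the recorded outcomes, locality together with statistical independence implies

|E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′)| ≤ 2,

whereas quantum mechanics predicts values up to 2√2 for suitably entangled pairs, and experiments observe the quantum value. The correlations E here are just averages over counted outcomes, which is why frequencies and arithmetic suffice for the argument.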
People who reject statistical independence are often in favor of "super-deterministic theories." On such theories, the choice of which photon pairs to send to which experimental setup is correlated with the choice of the setup itself. This opens a door to maintaining locality while respecting the observed quantum statistics. However, such a violation of statistical independence would seem to require some extraordinary conspiracies in the world. Not only would this have to be true for one such setup, which is already incredible; there would need to be similar conspiracies for every such experimental setup, performed by anyone, anywhere, at any time. No matter where, when, or by whom the experiment is carried out, the strategy requires that, whatever random sampling method we use, the photon pairs with the "right" statistical profile always find themselves at the "right" experimental setup.
I don’t think “conspiracies” themselves are physically impossible or forbidden by nature. My main objection is that it’s hard to construct a simple theory that does this in a realistic setting. One might ask why I keep insisting on simplicity. In my mind it is a cornerstone of a realist understanding of physics. But as I said, it is a delicate issue.
3:16: You’ve also written about the wave function haven’t you? So what is the best way we should understand what a wave function actually is? You don’t think it’s a really strange object do you? But if it’s not, and is just an equation or some math of some sort, then how can it be part of the universe of our space and time? Is this another key metaphysical question regarding how we understand what quantum physics is telling us?
EKC: The wave function is a central object in quantum mechanics, but it is nothing like the objects of our ordinary experience. It lives on an enormously high-dimensional space. It assigns complex numbers, numbers involving the square root of -1, to points of that space. So it might be difficult (though not impossible) to treat it as representing something in space and time. A proposal that I find interesting is to regard it as a "multi-field" that lives on physical space. A field is something that assigns properties to every point in some space, but a multi-field is something that assigns properties to some regions, though they need not be connected. For a 5-particle wave function, it assigns some property (represented by some complex number) to every collection of 5 points in space. There are good arguments in favor of this multi-field proposal. However, now I think there is an even better solution.
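In the standard formalism, the wave function of N particles is a function

ψ : R^3N → C,

assigning a complex number to each point of 3N-dimensional configuration space. The multi-field reading keeps exactly the same information but redescribes it as an assignment of a complex value to each N-tuple of points (x1, ..., xN) in ordinary three-dimensional space, which is the sense in which the multi-field lives on physical space.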
We should understand the wave function as representing a law of nature instead of a material object. The wave function tells material objects (such as particles and fields) how to move. It's like the Hamiltonian function of classical mechanics. Now, there is an obvious problem: we expect laws to be relatively simple, but we have reasons to think that the wave function of the universe is not so simple to write down. This problem motivated the Wentaculus proposal. Instead of using a wave function to represent the quantum state of the universe, I use a density matrix, which is a somewhat more general gadget. Given the Initial Projection Hypothesis, a version of the Past Hypothesis, the specific density matrix that we assign to the initial state of the universe is as simple as the specification of the Past Hypothesis.
The debate about the wave function is one place with lots of fruitful engagement between metaphysics and foundations of physics. The issue of the wave function has inspired much work on the Humean metaphysics of science (e.g. Quantum Humeanism, as defended by Elizabeth Miller, Michael Esfeld, Craig Callender, Harjit Bhogal, and Zee Perry, among others; I’ve argued that the quantum Humeans should adopt the Wentaculus, for the reasons we’ve discussed). In the other direction, the metaphysical literature on laws, powers, and chances is influencing current debates in the foundations of quantum mechanics.
3:16: I love the idea of surreal decisions and surreal numbers. So what are these and how do these become useful when trying to resolve some of the issues arising from applying expected utility theory to infinite values? And why does your approach show that Pascal’s Wager fails to deliver a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives?
EKC: Surreal numbers were invented by the late Princeton mathematician John Conway. They have some fascinating mathematical properties. They are simple to construct and intuitive to manipulate. For example, given a positive and infinite surreal number k, we can subtract 1 from it, and get a smaller number. 1/k is also well defined, and it is larger than zero. That's not the case on standard accounts of infinity (in cardinal arithmetic, for example, subtracting 1 from an infinite cardinal leaves it unchanged). Following a suggestion of Alan Hajek, Daniel Rubio and I show that we can construct a decision theory where utilities and probabilities are represented by surreal numbers. In this way, we can meet many challenges of standard decision theory.
For example, dominance reasoning is restored. Moreover, we can use it to model various versions of Pascal’s Wager, and especially mixed strategy cases, where we need to make sense of things like 1/2 of an infinite number. On our approach, mixed strategies may have smaller or larger expected utilities than the standard Wager. And ultimately whether someone should be a theist should depend on their probability assignments. This shows that Pascal’s wager is not a straightforward practical problem with an obvious solution, and may depend ultimately on one’s evidence for or against theism.
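A minimal illustration of the arithmetic at work, writing ω for a positive infinite surreal number:

ω − 1 < ω,  1/ω > 0,  and 0 < ω/2 < ω.

So, for example, a mixed strategy that yields an infinite reward with probability 1/2 can be assigned the expected utility ω/2, which is strictly smaller than the pure Wager's ω, rather than collapsing into 'the same infinity' as it would on cardinal arithmetic. That is what restores meaningful comparisons, and with them dominance reasoning.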
3:16: You’re also interested in the metaphysics of mental qualities and propose a new hypothesis about the nature of mental qualities as being fundamentally comparative; for example, being more painful than is more fundamental than being painful. How does this work and why do you think the consequences of this new theory link with reductionist physicalism and what David Chalmers calls micro-idealism?
EKC: I am a big fan of comparativism about physical quantities, the idea that quantities such as mass are relational in nature. Instead of taking 1 kilogram as basic, comparativism regards 'being more massive than' as basic and seeks to recover absolute quantities from the relational ones. In my PhD thesis, I extended this to provide a comparativist and nominalistic treatment of the wave function in quantum mechanics (along the lines of Hartry Field's Science Without Numbers). I am also interested in whether we can extend comparativism to the mental realm.
It turns out it is possible, with some interesting consequences. Take pain for example. A received idea is that pain is on an absolute scale, say, from 0 to 10. I suggest it should be regarded as a comparative quality. 'Being painful to degree 10' is derived from the two-place relation 'being more painful than' and the three-place relation 'being the sum of pain of.' The idea is to adapt representation theorems about physical qualities to mental qualities, and recover the absolute scales from axioms about the comparative relations. This can generalize to other mental qualities such as color and credence. This allows us to contemplate what I call the Comparative Mentality Hypothesis (CMH): the mental structure is given by comparative mental relations.
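The measurement-theoretic template here is a Hölder-style additive representation theorem; the pain reading below is an illustration of the idea rather than the official axiomatization. Given a domain equipped with a comparative relation ≿ ('at least as painful as') and a combination operation ∘ ('being the sum of pain of') satisfying axioms such as transitivity, monotonicity, and an Archimedean condition, there exists a function φ, unique up to positive scaling, such that

a ≿ b if and only if φ(a) ≥ φ(b), and φ(a ∘ b) = φ(a) + φ(b).

An absolute 0-to-10 pain scale is then one conventional choice of φ, which is the sense in which the absolute scale is recovered from, and is less fundamental than, the comparative structure.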
There are several advantages of mental comparativism. First, it is parsimonious. Instead of postulating both mental properties and mental relations, we just need the latter in the case of mental comparativism. Second, it explains why there is certain conventional and surplus structure in our representation of the mental properties. Moreover, it may be closer to how mental properties are measured. David Chalmers has recently proposed a version of micro-idealism. This is the combination of panpsychism and idealism, according to which microscopic things have mental properties and mental properties ground physical properties. There is a potential conflict with physics, because spacetime has much relational structure. It is hard to see how monadic properties (according to absolutism about the mental) can give rise to complex relations. This conflict is solved in the comparativist framework, where both physical quantities and mental qualities are taken to be comparative relations. There is at least no logical problem of how physical structure may arise from mental structure. The remaining question is how rich a mental structure is required for the emergence of physical structures.
3:16: Interestingly, alongside your philosophy of physics and metaphysics work you’re also working in aspects of Chinese philosophy. You’ve also thought about Needham’s question regarding the possibility of science within traditional Chinese thought and explored the connections between concepts of laws of nature with the concept of Dao in Chinese philosophy. So what are the connections – how do your own theories about laws of nature you’ve discussed above touch on issues of Dao?
EKC: There is a deep suspicion, even among some Chinese intellectuals, that science is irrelevant to the goals of Chinese philosophy, or that Chinese philosophy is an obstacle to the development of science. For example, the prominent Chinese philosopher Fung Yu-Lan, in his 1922 paper "Why China Has No Science," suggests that "China has no science, because according to her own standard of value she does not need any." The scientist Peng Gong, in a 2012 article published in the journal Nature, argues that Chinese cultural history, such as the teachings of Confucius and Zhuang Zhou, "[helped] produce a scientific void in Chinese society that persisted for millennia. And they continue to be relevant today." Those are complex issues that do not have easy answers.
The relationship between Chinese philosophy and science, I think, is more nuanced and deserves further philosophical analysis. I have been thinking about how and in what way Chinese philosophy contributed to the development of science in China, an issue relevant to Needham’s question. I think the concept of laws of nature is central to the development of science in the West. To better understand the history and philosophy of science in China, we may consider the relevance of the concept of Dao, and its similarities with and differences from the concept of laws. Dao is a very rich concept in Chinese philosophy. A common translation renders it as the ‘way,’ the path we walk on, or the path we should walk on. The concept was central to the debates about moral and social order in the pre-Qin period and beyond. It occupies a central place in the Confucian and Daoist classics. It also has a metaphysical dimension, and it may be regarded as the principle that governs the world. Its status in Chinese philosophy is perhaps similar to that of “truth” in Western philosophy.
Some earlier conceptions of laws presume that laws govern by dynamically producing things in time, and that earlier events produce later events. One of my questions is about how various conceptions of Dao relate to time and whether Dao is more timeless, like eternal constraints, or more time-directed, like dynamical principles. A further question concerns the two metaphysical conceptions of laws—the governing view (a non-Humean metaphysics) and the non-governing view (most commonly held by Humeans). If laws of nature permit these two interpretations, perhaps Dao does too.
3:16: And finally, for the curious readers here at 3:16, are there five books you could recommend that will take us further into your philosophical world?
EKC:
First, I recommend Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. It is a beautiful book, filled with the mathematician and physicist Penrose's hand drawings and philosophical insights about computers, consciousness, arrows of time, quantum gravity, and physical laws.
Second, I recommend Alyssa Ney and David Albert's edited volume The Wave Function: Essays on the Metaphysics of Quantum Mechanics. It contains a very helpful introductory essay and ten papers by leading philosophers of physics on the metaphysics of the wave function. It provides many examples of the rich connections between philosophy of physics and metaphysics.
Third, I recommend Jenann Ismael’s How Physics Makes Us Free. Ismael argues eloquently that taking physics seriously does not threaten human freedom but actually provides a more nuanced understanding of the distinctive nature of our agency.
Fourth, I recommend Hartry Field's Science Without Numbers, which has been republished by Oxford University Press. I was a mathematical Platonist before reading the book and became a committed nominalist afterwards. It inspired me to look for an intrinsic and nominalistic theory of quantum mechanics.
Finally, I recommend Fung Yu-Lan's A Short History of Chinese Philosophy. It's a monumental book written in the first half of the 20th century, at a time of significant upheavals in China that led to many debates about and reflections on the two-thousand-year philosophical tradition. Much of it was in dialogue with Western philosophy. One can disagree with Fung's conclusions but still admire his ingenuity.
ABOUT THE INTERVIEWER
Richard Marshall is biding his time.
Buy his second book here or his first book here to keep him biding!