Publications


'Why Katz is Wrong: A Lab-Created Creature Can Still Have an Ancient Evolutionary History' 
Ethics, Policy and Environment (2022) forthcoming
Abstract: Katz denies that organisms created in a lab as part of a de-extinction attempt will be authentic members of the extinct species, on the basis that they will lack the original species’ defining biological and evolutionary history. Against Katz, I note that an evolutionary lineage is conferred on an organism through its inheriting genes from forebears already possessed of such a lineage, and that de-extinction amounts to a delayed, human-assisted reproductive process, in which genes are inherited from forebears long dead. My conclusion is that de-extinct organisms can perfectly well have an ancient evolutionary history, contrary to Katz’s assumption.


'Not so distinctively mathematical explanations: topology and dynamical systems' 
Synthese (2022) forthcoming
Coauthored with Aditya Jha, Philip Wilson and Clemency Montelle 
Abstract: So-called ‘distinctively mathematical explanations’ (DMEs) are said to explain physical phenomena, not in terms of contingent causal laws, but rather in terms of mathematical necessities that constrain the physical system in question. Lange argues that the existence of four or more equilibrium positions of any double pendulum has a DME. Here we refute both Lange’s claim and a strengthened, extended version of it that would pertain to any n-tuple pendulum system, on the grounds that such explanations are actually causal explanations in disguise, and that their associated modal conditionals are not general enough to explain the relevant features of such dynamical systems. We show that if circumscribing the antecedent for a necessarily true conditional in such explanations involves making a causal analysis of the problem, then the resulting explanation is not distinctively mathematical or non-causal. Our argument generalises to other dynamical systems that may have purported DMEs analogous to the one proposed by Lange, and even to some other counterfactual accounts of non-causal explanation given by Reutlinger and Rice.

'Does the Solar System Compute the Laws of Motion?'
Synthese (2019) Volume 198, Issue 4, pp. 3203-3220
Abstract: The counterfactual account of physical computation is simple and, for the most part, very attractive. However, it is usually thought to trivialize the notion of physical computation insofar as it implies ‘limited pancomputationalism’, this being the doctrine that every deterministic physical system computes some function. Should we bite the bullet and accept limited pancomputationalism, or reject the counterfactual account as untenable? Jack Copeland would have us do neither of the above. He attempts to thread a path between the two horns of the dilemma by buttressing the counterfactual account with extra conditions intended to block certain classes of deterministic physical systems from qualifying as physical computers. His theory is called the ‘algorithm execution account’. Here we show that the algorithm execution account entails limited pancomputationalism, despite Copeland’s argument to the contrary. We suggest, partly on this basis, that the counterfactual account should be accepted as it stands, pancomputationalist warts and all. 

Book note: "The Fragmentation of Being", by Kris McDaniel
Australasian Journal of Philosophy (2019) Volume 97, Issue 3, pp. 634-635
Abstract: This is a review of Kris McDaniel's book, 'The Fragmentation of Being'. In the book McDaniel defends ontological pluralism -- the doctrine that there are multiple 'ways of being' (i.e., multiple modes, or degrees, or orders, or levels, or gradations of existence). In defending ontological pluralism, McDaniel must reject the rival, Quinean position that there is at root just one generic way for a thing to exist: viz., by its falling in the domain of unrestricted quantification. McDaniel argues against Quine by contending that the unrestricted quantifier is really just shorthand for a ‘gruesome’ disjunction of restricted quantifiers. On McDaniel's view, the unrestricted quantifier plays ontological 'second fiddle' to these restricted quantifiers, which are ontologically fundamental, and which each represent one particular mode of being. Against this, I contend that if the disjunction in question were as gruesome as McDaniel makes out then logic would be apt to explode in our faces. If I am right then McDaniel's response to Quine falls flat. 

'Doxastic Desire and Attitudinal Monism'
Synthese (2018) Volume 195, Issue 3, pp. 1139-1161
Abstract: How many attitudes must be posited at the level of reductive bedrock in order to reductively explain all the rest? Motivational Humeans hold that at least two attitudes are indispensable, belief and desire. Desire-As-Belief theorists beg to differ. They hold that the belief attitude can do all the work the desire attitude is supposed to do, because desires are in fact nothing but beliefs of a certain kind. If this is correct it has major implications both for the philosophy of mind, with regard to the problem of naturalizing the propositional attitudes, and for metaethics, with regard to Michael Smith’s ‘moral problem’. This paper defends a version of Desire-As-Belief, and shows that it is immune to several major objections commonly levelled against such theories. 

'The Eightfold Way: Why Analyticity, Apriority and Necessity are Independent'
Philosophers' Imprint (2017) Volume 17, Number 25, pp. 1-17.
Abstract:  This paper concerns the three great modal dichotomies: (i) the necessary/contingent dichotomy; (ii) the a priori/empirical dichotomy; and (iii) the analytic/synthetic dichotomy. These can be combined to produce a tri-dichotomy of eight modal categories. The question as to which of the eight categories house statements and which do not is a pivotal battleground in the history of analytic philosophy, with key protagonists including Descartes, Hume, Kant, Kripke, Putnam and Kaplan. All parties to the debate have accepted that some categories are void. This paper defends the contrary view that all eight categories house statements—a position I dub ‘octopropositionalism’. Examples of statements belonging to all eight categories are given.


Book: Resurrecting Extinct Species: Ethics and Authenticity
Palgrave Macmillan (2017)
Abstract: This book is about the philosophy of de-extinction. To make an extinct species ‘de-extinct’ is to resurrect it by creating new organisms of the same, or similar, appearance and genetics. The book describes current attempts to resurrect three species, the aurochs, woolly mammoth and passenger pigeon. It then investigates two major philosophical questions such projects throw up. These are the Authenticity Question—‘will the products of de-extinction be authentic members of the original species?’—and the Ethical Question—‘is de-extinction something that should be done?' The book surveys and critically evaluates a raft of arguments for and against the authenticity of de-extinct organisms, and for and against the ethical legitimacy of de-extinction. It concludes, first, that authentic de-extinctions are actually possible, and second, that de-extinction can potentially be ethically legitimate, especially when deployed as part of a ‘freeze now and resurrect later’ conservation strategy.


'Against Lewis on "Desire as Belief"'
Polish Journal of Philosophy (2017) Volume 12, Number 2
Abstract: David Lewis describes, then attempts to refute, a simple anti-Humean theory of desire he calls ‘Desire as Belief’. Lewis’ critics generally accept that his argument is sound and focus instead on trying to show that its implications are less severe than appearances suggest. In this paper I argue that Lewis’ argument is unsound. I show that it rests on an essential assumption that can be straightforwardly proven false using ideas and principles to which Lewis is himself committed. 


'On the Authenticity of De-extinct Organisms, and the Genesis Argument'
Animal Studies Journal (2017) Volume 6, Number 1
Abstract:  Are the methods of synthetic biology capable of recreating authentic living members of an extinct species? An analogy with the restoration of destroyed natural landscapes suggests not. The restored version of a natural landscape will typically lack much of the aesthetic value of the original landscape because of the different historical processes that created it—processes that involved human intentions and actions, rather than natural forces acting over millennia. By the same token, it would appear that synthetically recreated versions of extinct natural organisms will also be less aesthetically valuable than the originals; that they will be, in some strong sense, ‘inauthentic’, because of their peculiar history and mode of origin. I call this the ‘genesis argument’ against de-extinction. In this article I critically evaluate the genesis argument. I highlight an important disanalogy between living organisms and natural landscapes: viz., it is of the essence of the former, but not of the latter, to regularly reproduce and die. The process of iterated natural reproduction that sustains the continued existence of a species through time obviously does not undermine the authenticity of the species. I argue that the authenticity of a species will likewise be left intact by the kind of artificial copying of genes and traits that a de-extinction project entails. I conclude on this basis that the genesis argument is unsound. 


'The Inconceivable Popularity of Conceivability Arguments'
Philosophical Quarterly (2017) Volume 67, Issue 267, pp. 223-240
Coauthored with Jack Copeland and Zhao-Ran Deng 
Abstract: Famous examples of conceivability arguments include: (i) Descartes’ argument for mind-body dualism; (ii) Kripke’s ‘modal argument’ against psychophysical identity theory; (iii) Chalmers’ ‘zombie argument’ against materialism; and (iv) modal versions of the ontological argument for theism. In this paper we show that for any such conceivability argument, C, there is a corresponding ‘mirror argument’, M. M is deductively valid and has a conclusion that contradicts C’s conclusion. Hence a proponent of C—henceforth, a ‘conceivabilist’—can be warranted in holding that C’s premises are conjointly true only if she can find fault with one of M’s premises. But M’s premises—of which there are just two—are modeled on a pair of C’s premises. The same reasoning that supports the latter supports the former. For this reason a conceivabilist can repudiate M’s premises only on pain of severely undermining C’s premises. We conclude on this basis that all conceivability arguments, including each of (i)—(iv), are fallacious. 

'A case for resurrecting lost species—review essay of Beth Shapiro’s, “How to Clone a Mammoth: The Science of De-extinction”'
Biology and Philosophy (2016) Volume 31, Issue 5, pp. 747-759.
Abstract: The title of Beth Shapiro’s ‘How to Clone a Mammoth’ contains an implicature: it suggests that it is indeed possible to clone a mammoth, to bring extinct species back from the dead. But in fact Shapiro both denies this is possible, and denies there would be good reason to do it even if it were possible. The de-extinct ‘mammoths’ she speaks of are merely ecological proxies for mammoths—elephants re-engineered for cold-tolerance by the addition to their genomes of a few mammoth genes. Shapiro’s denial that genuine species de-extinction is possible is based on her assumption that the resurrected organisms would need to be perfectly indistinguishable from the creatures that died out. In this article I use the example of an extinct New Zealand wattlebird, the huia, to argue that there are compelling reasons to resurrect certain species if it can be done. I then argue that synthetically created organisms needn’t be perfectly indistinguishable from their genetic forebears in order for species de-extinction to be successful.

'We Could Recreate an Extinct Species, But Should We?'
Opinion Piece in The New Zealand Herald (Dec 2, 2016)

'Why We Shouldn't Reason Classically, and the Implications for Artificial Intelligence'
In Vincent C. Müller (ed.), Computing and Philosophy: Selected Papers From IACAP 2014. (2016)
Abstract: In this paper I argue that human beings should reason, not in accordance with classical logic, but in accordance with a weaker ‘reticent logic’. I characterize reticent logic, and then show that arguments for the existence of fundamental Gödelian limitations on artificial intelligence are undermined by the idea that we should reason reticently, not classically.

'Radicalizing Enactivism: Basic Minds with Content', by Daniel D. Hutto and Erik Myin
Analysis (2013) Volume 74, Issue 1, pp. 174-176.
Abstract: In Radicalizing Enactivism, D. D. Hutto and E. Myin develop a theory of mind they call ‘Radical Enactive (or Embodied) Cognition’ (REC). They argue that extant enactivist and embodied theories of mind are, although pretty radical, not radical enough, because such theories buy into the representationalist doctrine that perceptual experience (along with other forms of ‘basic’ mentality) possesses representational content. REC denies this doctrine. It implies that perceptual experience lacks reference, truth conditions, accuracy conditions, or conditions of satisfaction. In this review I summarise their anti-representationalist argument and show that it has at least three major weaknesses.

'The Semimeasure Property of Algorithmic Probability -- "Feature" or "Bug"?' 
In David L. Dowe (ed.), Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers From the Ray Solomonoff 85th Memorial Conference, Melbourne, Vic, Australia, November 30 -- December 2, 2011. (2013)
Abstract: An unknown process is generating a sequence of symbols, drawn from an alphabet, A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being outputted by the unknown process is provided by its algorithmic probability (its probability of being outputted by a species of probabilistic Turing machine). However algorithmic probability is a “semimeasure”: i.e., the sum, over all x ∈ A, of the conditional algorithmic probabilities of the next symbol being x, may be less than 1. Solomonoff thought that algorithmic probability must be normalized, to eradicate this semimeasure property, before it can yield acceptable probability estimates. This paper argues, to the contrary, that the semimeasure property contributes substantially, in its own right, to the power of an algorithmic-probability-based theory of induction, and that normalization is unnecessary. 

For a full list of my research outputs, including talks, see my UC Spark page.