
Durham University

Department of Philosophy


10th Integrated HPS Workshop – Durham

ABSTRACTS (in order of surname)


Dominic Berry, Leeds

Landscapes and labscapes? - Field science and the standards of experimental practice

A number of historians and philosophers of science maintain that there is something qualitatively different about work conducted in fields, removed as they are from the controlled setting of the laboratory. This paper undermines the epistemological distinction between lab and field, drawing upon recent work in the history and philosophy of science to demonstrate that lab and field are much the same as other sites of knowledge production, and that nothing qualitative distinguishes them. Its historical focus is on the origins of the Randomized Controlled Trial (RCT) in the twentieth century, pioneered by R.A. Fisher at an agricultural science institute. While today the RCT is often touted as the gold standard in experimental practice, and the solution to virtually all epistemological problems, in its early years the RCT was deemed inferior to other trialling methods. Understanding this opposition will require an appreciation of the kind of space that the field is, how the objects studied there are situated within it, and the epistemological goals of the experimenters.


Alper Bilgili, Leeds

Trials of a Debate: A Late Ottoman Response to Darwinism

The Scopes trial rekindled discussions on social and political implications of Darwinism as well as the scientific validity of the theory in the United States. For the defenders of the law which prohibited the teaching of Darwinism in schools, Darwinism was responsible for atrocities including the First World War. This view was supported by Ismail Fennî, a late Ottoman intellectual, who wrote a book to debunk scientific materialism immediately after the trial, and argued that Darwinism blurred the distinction between man and beast and thus destroyed the foundations of morality. However, despite his anti-Darwinist stance, Ismail Fennî argued against laws forbidding the teaching of Darwinism in schools, and emphasized that even false theories contributed to scientific improvement. His belief in science—which for him ruled out Darwinian Theory—meant that Muslims should not reject Darwinism if it were to be supported by future scientific evidence. Religious interpretations would have to be revised accordingly. In this talk, I aim to focus on early Muslim reactions to Darwinism by examining Ismail Fennî’s views, which were notably sophisticated when compared with those of the anti-religious Darwinist and anti-Darwinist religious camps that dominated late Ottoman intellectual life.


Nick Binney, Exeter

History as evidence – disease, historical contingency and the genetic fallacy.

It has been claimed that “what counts as disease – definitions, diagnostic practices, and social meanings – is historically contingent”, and that as a consequence of this “history facilitates critical perspective on the contingency of knowledge production and circulation, fostering clinicians’ ability to tolerate ambiguity and make decisions in the setting of incomplete knowledge”. I endorse this position, but it has been challenged on the grounds that it appears to commit the genetic fallacy (the fallacy of using premises about how something came to be to support conclusions about what it is in the present). Additionally, endorsing the historical contingency of medical knowledge is often said to entail extreme forms of relativism.

I will use a case study to argue that these concerns are unwarranted. This case study will describe how diagnostic practices for the disease “heart failure” changed in the early twentieth century, and will concentrate particularly on the diagnostic value of auscultation. I will focus on the work of the British physicians James Hope (1801-1841) (a strong advocate of auscultation), and James Mackenzie (1853-1925) (who argued that the practice of auscultation “has probably done more harm than good”). The case study will show how the production of medical knowledge can be historically contingent without collapsing into extreme forms of relativism.

I will also draw attention to the work of N.R. Hanson to show how observation in medicine can usefully be understood as “theory-laden”, and how abductive reasoning is used in medical discovery. Using these insights, I will show how knowledge of the historical development of diagnostic practices can be used to inform the evaluation of diagnostic practices in the present day, without committing the genetic fallacy.


Chris Campbell, UCL
Charles Peirce and Prout’s Hypothesis

Charles S. Peirce’s philosophy is gradually gaining visibility in the HPS community. Historians and philosophers alike are rediscovering the value and fruitfulness of his version of pragmatism (aptly re-labelled “pragmaticism”), as well as the historical insights deriving from his scientific work on measurement, geodesy and cartography. But a still neglected aspect of Peirce’s thought, deserving further visibility even among specialised Peirce scholars, is his training and subsequent work in chemistry.

Working from Peirce’s very first published paper (1863), one of only two dealing with chemistry, I will present a critical study of his evidence in support of Prout’s hypothesis. I suggest that the evidence base Peirce develops might also point to the rejection of Prout’s hypothesis – quite the opposite of what he intended. This evidence in support of Prout forms part of Peirce’s attempt to construct an overarching philosophical argument for no fewer than nine of the major chemical theories of his time, one founded on the Kantian metaphysical concept of interpenetration.

Over forty years later Peirce explicitly construed the nature of his logical diagrams in analogy with chemistry: ‘Chemists have, I need not say, described experimentation as the putting of questions to Nature. Just so, experiments upon diagrams are questions put to the Nature of the Relations concerned’ (CP 4.530). I will show that as a young chemist Peirce’s table – or diagram of evidence – was not the focus for experimentation but a means of displaying the empirical data for numerical analysis. As a chemist Peirce’s questions to Nature are mediated by an analysis of the numerical evidence available to reveal the nature of the underlying relations.


Hasok Chang, Cambridge

If you can spray phlogiston, is it real? Evidence and integrated HPS

[Abstract TBC]


Alison Fernandes, Columbia

A Deliberative Account of Causation

Statistical-mechanical accounts of causation from Albert (2000) and Loewer (2007) attempt to reduce causation to fundamental laws and contingent features—and explain causal asymmetry in these terms. But these accounts face the same problem as other reductive accounts. While they might pick out the right extension of the concept CAUSATION, they don't help us understand why the relation picked out is useful—why causal relations are effective for manipulation and control.

To remedy this, this paper puts forward and defends a deliberative account of causation: causation captures the evidential relations agents use in deliberation when they decide on one thing in order to achieve another. Roughly, a causes b if and only if an agent's deciding on a for the sake of b (in proper deliberation) is good evidence of b. This account explains why causation is useful—causation directs us to decisions that are evidence of good outcomes. And it avoids concerns typically raised against agent-based accounts. It avoids circularity, by building off an epistemic rather than causal characterisation of deliberation. It does not make causation mind-dependent, because it uses objective evidential relations, rather than subjective credences. And the account does not imply backwards causation, because evidential correlations towards the past are 'screened off' by deliberation.

While this deliberative account is explanatory in its own right, it also combines fruitfully with reductive accounts. Unlike Price's agency theory of causation (1992), the account does not merely tell us about the function or genealogy of the concept. It also provides a constraint on other accounts and can explain why the relation they pick out is useful. Using this deliberative account, we can see that what is right in statistical-mechanical accounts of causation should be understood in evidential terms.

The counterfactuals appealed to lead to useful causal relations because they pick up on evidential relations—they are 'normal procedures of inference'. We can also replace unexplained agential primitives, such as Albert's 'fiction of agency', by using evidence to explain why it seems agents can intervene. But statistical-mechanical accounts provide crucial resources as well. They relate evidential relations to fundamental laws, and explain an epistemic asymmetry—why we have records of the past but not the future. This asymmetry is needed to explain why we deliberate towards the future, and so why correlations towards the past are screened off by deliberation.

Because they can appeal to each other in these ways, deliberative and statistical-mechanical accounts of causation are in fact strongest combined.

References

Albert, David Z. 2000. Time and Chance. Cambridge, Mass.: Harvard University Press.

Loewer, Barry. 2007. "Counterfactuals and the Second Law." In Huw Price and Richard Corry (eds.), Causation, Physics, and the Constitution of Reality. Oxford: Oxford University Press.

Price, Huw. 1992. "Agency and Causal Asymmetry." Mind 101: 501–520.


Toby Friend, UCL

Pluralism Needs Laws

According to Hendry (2012:60-61), ‘pluralism about natural kinds is the thesis that there is more than one way to classify things, and that, from a purely ontological point of view, none of these ways is privileged over the other.’ As he, and others (e.g. Chang, 2012; Dupré 1993), make clear, this needn’t incorporate an anarchy of taxonomy––not ‘anything goes’––nor, for Hendry at least, need this preclude us from arguing for a non-ontological point of view from which one taxonomy is privileged (e.g. for reasons of clarity, epistemology, unification, etc.). Still, it is important to notice what this kind of pluralism does commit one to.

My presentation aims to reveal an implicit commitment pluralists about natural kinds make to the individuative mechanisms for natural kinds, specifically: a commitment to laws of nature. I start by rehearsing a well-known pluralist commitment to what I have called the ‘subordination of reference’. This is the widely noted (Chang 2012, Davidson 1970, Kuhn 1970, Feyerabend 1962) rejection of both ‘reference-first’ semantics (e.g. Devitt 1983, Putnam 1973) and ‘reference-essential’ semantics (e.g. Lewis 1983, 1984) of natural kind terms in favour of a meaning holism in which the classification of natural kinds is dependent on practical, interest-relative, experimental, even political considerations.

But what is a holistic semantics for a scientific taxonomy if not a structure of laws? I argue, drawing on Hendry’s historically informed defence of the determinacy of element-terms, that the taxonomic determinacy of scientific practices in general can only come from a commitment to laws. Consider, for example, the generalisations that chemical elements are both present in their compounds and survive specific kinds of chemical change. These generalisations are not commonly thought of as laws, but it is hard to see why we should deny them that status. First, as is typical of laws, they are highly general and necessary. Second, we cannot deny their law-hood by virtue of any presumed analyticity or non-discoverability, since an adoption of the holistic framework precludes these classifications of isolated statements. But if Hendry is right, it is generalisations such as these which help ground the very practice of chemistry as we know it. Hence, chemistry, at least, seems committed to laws.

However, I believe the argument can be extended to other sciences, most significantly the life-sciences. By the above process of reasoning, we may argue, e.g., that the generalisations that enzymes are catalysts, and that whales are mammals, are real laws of nature. Surprisingly, this seems to imply, pace Cartwright (1999) and Dupré (1993), that pluralism about natural kinds is incompatible with particularism, the view that ‘for all we know, most of what occurs in nature occurs by hap, subject to no law at all’ (Cartwright, 1999:1).


Jim Grozier, UCL

Absolute Measurement and its Legacy

Absolute measurement sought to measure quantities in terms of other quantities, which were regarded as more fundamental. The term has been applied to the measurement of force and temperature, but is most often encountered in the study of electrical measurement in the 19th century.

Measurement of electrical quantities in terms of mechanical quantities (represented nowadays as mass, length and time) was championed by William Thomson and James Clerk Maxwell, following the precedent set by Gauss and Weber in the 1830s in relation to the measurement of force. A key step was Maxwell’s definition of a unit of charge in the Treatise in 1873. This definition led to the idea that electrical phenomena had mechanical dimensions and hence could be identified with mechanical phenomena. One consequence of this was that resistance was considered to have the dimensions of either velocity or the inverse of velocity, depending on which system of units (electromagnetic or electrostatic) was used; the ohm was originally conceived as being “expressed by” a velocity of 10⁷ metres per second in the former system. Maxwell believed that dimensions could reveal the “essential nature” of a quantity, but this view was criticised by Percy Bridgman and others, and remains controversial.
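[The dimensional claim about resistance can be reconstructed by standard dimensional analysis under the usual conventions of the two nineteenth-century unit systems; the working below is an editorial sketch, not Maxwell’s own derivation.]

```latex
% Electrostatic units: Coulomb's law F = q^2/r^2 fixes the dimension of charge
[q]_{\mathrm{esu}} = M^{1/2}L^{3/2}T^{-1}, \qquad
[I]_{\mathrm{esu}} = [q]/T = M^{1/2}L^{3/2}T^{-2}.
% Potential is energy per charge; resistance is potential per current
[V]_{\mathrm{esu}} = \frac{ML^{2}T^{-2}}{M^{1/2}L^{3/2}T^{-1}}
                  = M^{1/2}L^{1/2}T^{-1}, \qquad
[R]_{\mathrm{esu}} = \frac{[V]}{[I]} = L^{-1}T \quad \text{(inverse velocity)}.
% Electromagnetic units: Ampère's force law F/\ell = 2I^{2}/d fixes current
[I]_{\mathrm{emu}} = M^{1/2}L^{1/2}T^{-1}, \qquad
[q]_{\mathrm{emu}} = [I]\,T = M^{1/2}L^{1/2},
% so potential and resistance come out as
[V]_{\mathrm{emu}} = \frac{ML^{2}T^{-2}}{M^{1/2}L^{1/2}} = M^{1/2}L^{3/2}T^{-2}, \qquad
[R]_{\mathrm{emu}} = \frac{[V]}{[I]} = LT^{-1} \quad \text{(velocity)}.
```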

I will argue that in identifying resistance as a[n inverse] velocity, Maxwell used a sound argument – the link between dimension and essential nature – to draw an incorrect conclusion from a false premise. I will show that the assumption that electric charge has mechanical dimensions rests on either an inappropriate interpretation of Coulomb’s Law in electrostatic units, or on an interpretation of his definition as a definition, not just of a unit, but of electric charge itself, which appears inconsistent with other statements made by Maxwell.

This is important, and not just in a historical sense, since, although we now recognise electricity as a base quantity and not a derived one, the legacy of absolute measurement is alive and well in quantum field theory – pioneered by Richard Feynman and others in the 1940s – which employs a one-dimensional “energy space”, an idea that seems to have been modelled on the same flawed arguments.


Gregor Halfmann, Exeter

Is data always evidence? On values of data in oceanography

The potential to be used as evidence for claims about phenomena can be regarded as one of the most fundamental constitutive features of scientific data, perhaps the most fundamental (Leonelli, forthcoming; Woodward, 1989). I want to challenge this position by highlighting that use as evidence is not always the most important role of data in science. Hasok Chang (2012) promotes a view of knowledge as the ability to successfully perform scientific work, instead of justified belief. Supported by this shift of perspective, I argue that data’s role in science can be interpreted as enabling scientific action, such as modelling, calibrating, visualising, or many other activities which do not include the production of specific knowledge claims as a direct result. In these activities, the evidential value of data can be regarded as secondary, if not absent. This thesis can be illustrated with cases from the history of various geoscientific disciplines (e.g. Hamblin, 2005; Edwards, 2010; Parker, 2011). I will present additional examples from historical and contemporary scientific practice in oceanography, in which data primarily enable scientists to perform further actions. For example, the hydrographic data collected by autonomous floats drifting in the oceans, measuring parameters, and transmitting data via satellites give scientists the ability to compile reanalysis data sets and to calibrate or initialise prediction models. Helmreich (2009) describes how a real-time multimedia data stream allows biological oceanographers on research vessels to operate remote underwater vehicles. Mud and water samples taken by these vehicles are subsequently used to compile an ecosystem’s DNA library. It is certainly possible for scientists in these environments to make justified claims based on the collected data; however, these and other examples shall demonstrate that large amounts of oceanographic data enable scientific work and are not directly used as evidence for claims.
The theory-based prediction and subsequent empirical discovery of deep western boundary currents in ocean basins during the 1950s and 1960s is popular among physical oceanographers, but it is also a rare case of data functioning primarily and directly as evidence for a scientific claim. Moreover, this story presupposes a hypothesis-driven epistemology, whereas my examples show that knowledge, understood as the ability to perform scientific work, depends on data rather than on the formulation of hypotheses.

References:

Hasok Chang (2012), Is Water H2O? Evidence, Realism and Pluralism, Boston Studies in the Philosophy of Science Vol. 293, Dordrecht, Heidelberg, New York, London: Springer.

Paul N. Edwards (2010), A Vast Machine. Computer Models, Climate Data, and the Politics of Global Warming, Cambridge, MA, and London: The MIT Press.

Jacob Darwin Hamblin (2005), Oceanographers and the Cold War. Disciples of Marine Science, Seattle and London: University of Washington Press.

Stefan Helmreich (2009), Alien Ocean. Anthropological Voyages in Microbial Seas. Berkeley, Los Angeles, London: University of California Press.

Sabina Leonelli (forthcoming), “What Counts as Scientific Data? A Relational Framework,” Philosophy of Science.

Wendy Parker (2011), “When Climate Models Agree: The Significance of Robust Model Predictions,” Philosophy of Science 78, no. 4, 579–600.

James Woodward (1989), “Data and Phenomena,” Synthese 79, 393–472.


Dolores Iorizzo, UCL

Cures for Madness, Menstruating Men, and An Account of a Child Being Taken out of the Abdomen, after Having Lain There Upwards of 16 Years, during Which Time the Woman Had 4 Children, All Born Alive: The Secret Life of Early Royal Society Medical Experiments

In nearly every history of the Royal Society we are presented with the well-rehearsed and remarkable list of achievements of Boyle's Air-Pump, Hooke's discoveries under the microscope, and Newton's Opticks, but no one ever really focuses on medical experiments. One would be justified in thinking that medicine was simply not an important part of the research strategy of the Royal Society, but one would be wrong. In fact, well over 500 medical experiments were reported at the Royal Society between 1665 and 1850, covering a range of topics: there are the obvious ones on anatomy, physiology, zoology, botany and poisons, but also less obvious ones on pneumatics, meteorology, and hydraulics. So what kind of conclusions should we draw concerning the medical research strategy of the Royal Society from this massive collection of experiments that has been virtually forgotten? In this paper I will examine two things. First, I will argue that these medical experiments exemplify what Lorraine Daston identifies in the History of Observation (2011) as the fusion of medicine and Baconian natural history in the mid-seventeenth century, which played an essential role in shaping the language of scientific observation and experimentation that we still use today. Building on the work of Pomata, she argues that sixteenth-century medical case studies, curationes, provided a model for the scientific reports of the Philosophical Transactions of the Royal Society. Second, I will argue that evidence from the Classified Papers in the Royal Society Archives shows that these medical experiments were generated from a list of experiments on the nature of man, outlined in Francis Bacon's Catalogue of Particular Natural Histories appended to his Parasceve.
I conclude with a warning: if we do not include medical history within iHPS, we neglect important evidence that provides insights into what motivated and generated the research agendas of early modern natural philosophers such as Bacon, Boyle, Hooke, Locke, Descartes and Leibniz.


Lijing Jiang, Nanyang Technological University & University of Leeds

Evidence for Dialectics or Evidence for Production: Crafting Socialist Embryology in China, 1950-1963

The few available studies of Chinese socialist sciences in the 1950s often paint a doubly negative picture of the function of scientific evidence in supporting socialist ideology and production. It is suggested, on the one hand, that evidence for socialist ideology, such as that supporting natural dialectics, was often mere fiction, and, on the other hand, that when such evidence was appropriated as an instrumental reference for agricultural and industrial development, it simply did not work. My paper challenges both views through embryologist Tong Dizhou (童第周, 1902-1979)’s work in embryology based on fish development and its use in fishery development. It also explores the potential of studying the history of biology in modern China through its changing epistemologies, shaped by the changing political economy of different times.

Having trained in Belgium in the 1930s with embryologist Jean Brachet, Tong had cultivated an organicist view in his own studies of the embryology of various fishes and amphibians. Yet after the communist revolution, Tong connected embryology to Marxist philosophy in the 1950s under the influence of Hua Gang, a historian of the communist revolution and translator of Marx and Engels’ The Communist Manifesto. Under the devoted communist Hua’s influence, Tong studied dialectical materialism extensively and found that many embryological observations could be used to support it. Tong thus used the phenomena of the circulation of matter in the cytoplasm and the division of cells during development to support Engels’ thesis that “matter is always in motion as a whole.” The influence of environmental factors, such as gravity, light, and temperature, in varying biological traits, such as the length of the rat’s tail, the colors of butterflies, and amphibians’ sex, was used to support the thesis that “quantitative change leads to qualitative change.” Tong thus became interested in the question of cytoplasmic inheritance in the early 1960s, a time when such questions were gradually going out of fashion in the world biology of the Cold War. At the time, Tong’s cloning experiments on food fishes were designed to offer experimental evidence for natural dialectics and for fish breeding at the same time. The paper will examine and compare the ways in which evidence for dialectics and evidence for fishery production was generated, appropriated, and discussed in Tong’s work, and how this might revise historians’ views of modern biology in China, if we take the philosophical understanding of dialectics and its material underpinnings seriously.


Elizabeth Dobson Jones, UCL

The “Death” of Ancient DNA Research: Expectations and Evidence

In this talk, I explore the history and philosophy of ancient DNA research and offer evidence for the “death” of ancient DNA research. Ancient DNA research – the search for molecules in fossils – is a set of contemporary, interdisciplinary, and controversial scientific and technological practices. It emerged from the interface of paleontology, archaeology, and molecular biology in the late twentieth and early twenty-first centuries. Over the last thirty years, ancient DNA research has evolved from an emergent into an established technoscientific practice. However, ancient DNA researchers have suggested its “death.”

This talk addresses the nature of ancient DNA research as a technoscience and its past, present, and future. In writing its history, I am interviewing 40 ancient DNA and related researchers. These interviews have revealed historical and philosophical questions about the nature of ancient DNA research specifically and the process of science more generally. The early history of ancient DNA research was a race for the most iconic fossils and the most ancient DNA. Interviewees from three generations of the thirty-year history have described this past as “an answer in search of a question” rather than “a question in search of an answer.” Today, interviewees recognize that ancient DNA research has matured from this past. At the same time, however, they anticipate its “death” through changing demands and expertise, and its eventual – and perhaps inevitable – absorption into genetics and genomics. I offer evidence from interviews with the intention of exploring the idea of the “death” of ancient DNA research and engaging other philosophers in its significance. As a case study, ancient DNA research is relevant to historians, philosophers, and sociologists interested in the nature and process of science.


Ian Kidd, Durham

Why did Feyerabend defend astrology? Lessons for integrated HPS

A good question for ‘integrated history and philosophy of science’ is that of what other philosophical disciplines and intellectual traditions we ought to integrate with. Few historians and philosophers pursued this question more vigorously than Paul Feyerabend, even if his own efforts lapsed, at times, into excess. In this talk, I engage with the ‘limits of integration’ theme by asking why Feyerabend ‘defended’ astrology – and what, if anything, contemporary practitioners of ‘integrated history and philosophy of science’ might learn from it. Two common explanations of the purpose of those defences are rejected as lacking textual support. A third ‘pluralist’ reading is judged more persuasive, but found to be incomplete, owing to a failure to accommodate Feyerabend’s focus upon the integrity and characters of scientists. I therefore suggest that the defences are more fully understood as defences of the epistemic integrity of scientists that take the form of critical exposures of failures by scientists to act with integrity. An appeal is made to contemporary virtue epistemology that clarifies Feyerabend’s implicit association of epistemic integrity and epistemic virtue. If so, what he was defending was science, not astrology. I end with two claims. The first is that, read in this way, Feyerabend is more conservative and less radical than people often suppose. The second is that it would be very useful to further integrate history and philosophy of science with virtue epistemology – as Feyerabend, forty years ago, tried to do. Doing so would helpfully line up a range of issues of interest to integrated HPS – scientific practice, pluralism, epistemic virtues – and open up new ways of understanding science.


Cheryl Lancaster, Durham

The First Identification of Embryonic Stem Cells: What’s the Evidence?

In this paper, I will consider whether English physician Martin Barry (1802-1855) was the first to identify embryonic stem cells in the 1830s.

In 1838, 1839, and 1840, Barry published three papers, called Researches in Embryology, in the Royal Society’s Philosophical Transactions. In these articles, Barry describes the mature egg, fertilisation, and the early development of the mammalian embryo. This presentation will focus on the second paper, published in 1839.

Barry went to Germany to work with physiologist Johannes Müller (1801-1858) and learn about animal development and microscopy. The skills Barry learned enabled him to dissect and slice mammalian ovaries; in Researches in Embryology: First Series (1838), Barry describes his observations regarding ova development, maturation, structure and size.

The Second Series (1839) focused on development of the ovum, tracing the early stages of development. Barry noted that there was still a ‘dark period’ (between mating and appearance of vertebrae) in mammalian development - little was understood regarding this time, and Barry aimed to shed some light. Barry examined hundreds of ova, mostly from rabbits, carefully measuring and drawing what he saw.

Barry described the stages of development in intricate detail. For example, Barry notes many vesicle (cell) divisions, eventually resulting in small vesicles hung together ‘like a mulberry’. Within each vesicle was a ‘nucleus’, previously described by physiologist Gabriel Gustav Valentin (1810-1883) in the nervous system. Here then, Barry is clearly indicating that what he is seeing in the developing embryo can be directly compared with the adult cells observed by Valentin. By doing this, Barry is establishing that what he sees at this developmental stage is analogous with the ‘subunits’ of adult animals.

A further indication that Barry understood the vesicles of the early embryo as those which would become the vesicles of adulthood is in the discussion of methods. Barry utilised the most modern techniques available for his observations, primarily histological methods requiring tissue preservation. Barry described using ‘kreosote water’ for preserving ova; a solution Müller had shown to Barry, used to preserve tissues of the nervous system. Barry must have considered the ovum tissue similar to nervous tissue to believe that Müller’s kreosote water would be as useful for preserving ova as it was for nervous tissue.

To further assess whether Barry identified embryonic stem cells, I will also examine the research citing Barry’s studies.


Sabina Leonelli, Exeter

Valuing Data as Evidence for Multiple Claims: A Relational Approach to Data Epistemology

The wide dissemination and integration of various types of data, garnered from different sources, that features prominently in contemporary scientific research raises several philosophical issues. In this talk, I focus on data epistemology and propose a relational definition of data that takes account of recent developments in their production, circulation and use, while at the same time doing away with the idea that data consist of mind-independent representations of the world. This perspective emphasizes the value of data as potential evidence for multiple scientific claims over all the other functions that data can have within research. My argument is grounded on a detailed historical study of the ways in which data about model organisms have been disseminated over the last three decades of biological research, ongoing research into the journeys and re-use of scientific data in plant science, and the comparative study of data handling practices across scientific disciplines.

J. Brian Pitts, Cambridge

Space-time Theory, Particle Physics and Evidence

Letting the philosophy of fundamental physics and space-time be guided by evidence requires entertaining not just our best theory (e.g., Einstein’s General Relativity), but any other theories that have some prior plausibility and that fit the data comparably well. But are there any?

A powerful but neglected tool to answer that question is particle physics, for two reasons. First, particle physics provides roughly necessary conditions for viability: theories in which the energy is not positive definite are likely to be violently unstable. With no nontrivial observations and very little mathematics, such a criterion rules out Einstein’s 1913-15 Entwurf theory and Nathan Rosen’s 1970s bimetric theories. But second, particle physics also calls attention to ‘massive’ modifications of Newton’s, Maxwell’s, Nordström’s, and Einstein’s theories that owe something to Seeliger and Neumann (1890s), Einstein (1917), de Broglie (1922), Klein, Gordon et al. (mid-1920s), Proca (1936), Wigner (1939), Fierz and Pauli (1939), Marie-Antoinette Tonnelat (1941), and Schrödinger (1943). The four aforementioned theories all admit a modification, satisfying a linear partial differential equation, such that a point source generates a potential of exp(-m*r)/r rather than 1/r. From the 1920s this modification became recognizable as corresponding to the mass m of photons (light particles), ‘gravitons’, or the like---hence ‘massive photons’ and ‘massive gravitons’ (the latter of enormous interest in physics since 2010: Claudia de Rham, G. Gabadadze, A. Tolley; Rachel Rosen and F. Hassan, etc.). The modified theories offer, at least prima facie, permanent underdetermination arising from only approximate, but arbitrarily good, empirical equivalence: if small-m theories approximate m=0 theories empirically in the limit, then evidence can never show that m=0.
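[The massive modification described can be sketched in equations; the reconstruction below uses standard notation for the screened (Yukawa-type) potential and is an editorial illustration, not the author’s own working.]

```latex
% Massless case: a static point source of strength q obeys Poisson's equation
\nabla^{2}\phi = -4\pi q\,\delta^{3}(\mathbf{r}),
\qquad \phi(r) = \frac{q}{r}.
% Massive modification: an m^2 term screens the potential
\nabla^{2}\phi - m^{2}\phi = -4\pi q\,\delta^{3}(\mathbf{r}),
\qquad \phi(r) = \frac{q\,e^{-mr}}{r}.
% As m -> 0, e^{-mr}/r -> 1/r smoothly, so for sufficiently small m the two
% theories agree to any finite observational accuracy: evidence can bound m
% from above but never establish m = 0 exactly.
```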

Non-zero m theories generally differ conceptually, however, from the m=0 theories by virtue of having a different (smaller) symmetry group, in some cases implying the absence vs. presence of formal indeterminism (gauge freedom, Einstein’s hole argument, etc.). Thus the physical features of greatest interest to philosophers are often the least evidentially supported. The 1990s-2000s recognition of massive neutrinos is noteworthy. Plausibly we don’t know based on evidence whether the physical world has the conceptual features of Maxwell’s or Einstein’s theories (both m=0) of interest to philosophers unless those properties hold for non-zero m.

Alternately, one might try to define an objective prior probability distribution and then ascertain how far our failure to observe m thus far implies that m=0. Such a potentially important project faces technical-conceptual challenges, perhaps requiring infinitesimals for uniform normalized probabilities.

Greg Radick, Leeds

Is Mendel's Evidence 'Too Good to Be True'? An Integrated HPS Perspective

People commonly know two things about Gregor Mendel: first, that he was the "monk in the garden" whose experiments with peas in mid-nineteenth-century Moravia became the starting point for genetics; and second, that, despite that exalted status, there is something fishy, maybe even fraudulent, about the data that Mendel reported. In the year (indeed the month) marking the 150th anniversary of Mendel's first lecture on his experiments, this talk will explore the cultural politics of this accusation of fraudulence against Mendel. Although the notion that Mendel's numbers were, in statistical terms, too good to be true was well understood almost immediately after the famous "rediscovery" of his work in 1900, the problem only became widely discussed and agonized over from the 1960s, for reasons having as much to do with Cold War geopolitics as with traditional concerns about the objectivity of science. Appreciating the Cold War origins of the problem as we have inherited it can, I will suggest, be a helpful step towards appreciating what's *really* wrong with Mendel's work -- and what, 150 years later, we should do about it.

Tom Rossetter, Durham

Imaginary Evidence: How Thought Experiments Reveal Nature’s Powers

We typically think of evidence in terms of concrete states of affairs such as experimental results or empirical observations. Sometimes, however, the evidence marshalled in support of or against a theory is not straightforwardly experimental or observational but imaginary or hypothetical. This is the case in so-called “thought experiments”, which have been employed throughout the history of science by such prominent figures as Stevin, Galileo, Huygens, Newton and others to support and refute various theories. Thought experiments present a fascinating problem for the epistemologist, since they appear to be ways of arriving at new knowledge about the world simply by thinking about it. If this knowledge consisted solely in revealing logical or mathematical truths, then thought experiments would be less problematic. Yet in some cases they appear to yield nomic or metaphysical knowledge, and they appear to do so without the need for any new empirical data. Furthermore, as a number of philosophers have pointed out (e.g. Mach, Brown, Gooding), the evidence of thought experiments is often more compelling than that of actual concrete experiments and empirical observations. How is this possible? How can thought experiments present compelling evidence for – or against – a theory without any new empirical data?

I attempt to answer this question by developing further an account of thought experiments advanced by Ernst Mach, Roy Sorensen, David Gooding and James McAllister which McAllister has dubbed “experimentalism”. According to experimentalism, thought experiments work in much the same way as actual experiments. I think this is correct. However, as Gooding himself has acknowledged, “a more systematic comparison” of thought experiments and actual experiments is required to give this account some “substance”. Here I try to provide some of this “substance” by comparing two thought experiments with two experiments. I argue that, like actual experiments, thought experiments provide evidence for or against theories essentially by creating (in the case of thought experiments, imaginary) situations in which powers can manifest unimpeded by those other powers with which they are typically co-instantiated and which impede to some extent their manifestation in the real world. I begin by explaining two thought experiments. Next, I briefly expound the experimentalist account. Following this, I analyse two actual experiments. I then show how the two thought experiments can be analysed in the same way as the two actual experiments. Finally, I consider how we can reliably infer what would happen were these hypothetical situations actually to obtain.

Julia Sánchez-Dorado, UCL

An integrated genealogy of the concept of ‘similarity’ to explain current debates in philosophy of science

In recent years, numerous philosophers of science have discussed the role played by similarity in the construction of scientific representations. What this issue refers to is the mimetic relation that needs to be established – or not – between scientific models and the systems of the world they intend to explain. But ‘similarity’ is a many-sided term, and accordingly diverse and usually conflicting accounts of it have been endorsed in contemporary philosophy of science – namely, accounts of isomorphism, homomorphism, similarity as resemblance, etc.

In this paper, I would like to step back from this sort of discussion, and defend a more integrative approach to the notion of similarity based on a historical and interdisciplinary analysis of it. To be more specific, I would like to characterize similarity as a feature of the process of representing, instead of a fixed set of features in the objects of the representation (vehicle mirroring target, the common-sense idea of similarity as it is usually addressed by philosophers of science). My proposal aims to show that, by paying attention to key moments of the history of science and art in the twentieth century, we can find highly valuable reflections on similarity understood as a creative practice.

In particular, I would like to describe two very fertile moments of debate around the idea of creative similarity. The first one is the period of the artistic Avant-gardes at the beginning of the twentieth century, when fundamental treatises on the new nature of art were written. I will particularly refer to those built upon the practices of artists themselves, such as Kandinsky’s Concerning the Spiritual in Art (1910) and also Klee’s and Mondrian’s writings on similarity. And the second one is the period of the late sixties and early seventies, when remarkable discussions concerning representation and similarity took place, both in philosophy of science and in aesthetics. In this case, I will refer to the well-known works of Nelson Goodman, but also to very interesting proposals by Max Black and Ernst Gombrich.

We can infer three significant observations from the analysis of these debates in their historical context. First, that against some of the strictures that have been formulated against it, it is beneficial to maintain a flexible and context-dependent notion of similarity to explain how representations – in science and in art – advance understanding about the world. Secondly, that different kinds of similarity – such as isomorphism, resemblance, and conceptual similarity – are not necessarily incompatible with each other, because the goals in mind are what define which type is relevant in each case. And finally, that similarity is inseparable from, and compatible with, distortions of different kinds, all of them interlaced in the same creative practice of representing.

Yafeng Shan, UCL

Did Mendel have good evidence for Segregation? The Gap Problem in Hypothetico-Deductivism

Facing many famous paradoxes (e.g. the tacking paradox) and problems (e.g. underdetermination), the hypothetico-deductive (H-D) theory is no longer the mainstream theory of evidence in the philosophy of science. Nevertheless, in the history (and even contemporary practice) of science, the H-D method is widely used by working scientists. Gregor Mendel’s justification of his “law of composition of hybrid fertilizing cells” (1865) is a good case in point.

From a philosophical point of view, given the problems of the H-D theory, it seems that Mendel did not have good evidence for his law. However, from a scientific point of view, Mendel’s justification is valid and well accepted. So there is a gap between the philosophical theory of evidence and actual scientific practice (the Gap Problem).

In response, some (Gemes 1993; Gemes 2005; Betz 2013) attempt to defend the validity of scientists’ application of the H-D method by modifying the H-D theory against the paradoxes. An alternative response is proposed by Peter Achinstein (1995; 2000; 2008), who introduces a sufficiently strong and empirical concept of evidence. Though none of these approaches solves the problem conclusively, two lessons can still be learnt:

a) There are different types of evidence. The H-D theory only provides an account of one type of evidence.

b) The evidential relation should be empirical rather than a priori.

Based on these two points, I attempt to resolve the Gap Problem in a new way. I shall argue that in the problem of evidence, we (philosophers) should (1) shift our attention from what good evidence is to how an evidence-hunting practice is conducted; and (2) abandon the project of looking for a universal theory of evidence. Correspondingly, the right question is whether Mendel conducted a good evidence-hunting practice rather than whether Mendel had good evidence for his law. Furthermore, I shall propose an account of good H-D evidence-hunting practice. Finally, I shall conclude that Mendel conducted a good evidence-hunting practice.

References

Achinstein, Peter. 1995. “Are Empirical Evidence Claims A Priori?” British Journal for the Philosophy of Science 46 (4): 447–73.

———. 2000. “Why Philosophical Theories of Evidence Are (And Ought to Be) Ignored by Scientists.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 67: S180–92.

———. 2008. “Evidence.” In The Routledge Companion to Philosophy of Science, edited by Stathis Psillos and Martin Curd, 337–48. London: Routledge.

Betz, Gregor. 2013. “Revamping Hypothetico-Deductivism: A Dialectic Account of Confirmation.” Erkenntnis 78: 991–1009.

Gemes, Ken. 1993. “Hypothetico-Deductivism, Content, and the Natural Axiomatization of Theories.” Philosophy of Science 60 (3): 477–87.

———. 2005. “Hypothetico-Deductivism: Incomplete But Not Hopeless.” Erkenntnis 63 (1): 139–47.

Mendel, Gregor. 1865. “Versuche Über Pflanzenhybriden.” Verhandlungen Des Naturforschenden Vereins Brünn 4 (Abhandlungen): 3–47.

Andreas Sommer, Cambridge

Standards of evidence and the reception of unorthodox science. A fundamental challenge for integrated HPS?

Science is supposed to be essentially data-driven and based on rational, calm and impartial considerations of the best available evidence. Yet, historical and sociological evidence suggests that scientists have often been notoriously bad at keeping an open mind in the face of novel data when these appear to contradict cherished hypotheses and worldviews.

Using the history of parapsychological research and its links to mainstream sciences as a test case, I will discuss elite intellectuals such as William James pursuing radical empirical tests of ostensible telepathy during the formation of modern psychology, and suggest that their lack of success had very little to do with the quality of the presented evidence. I will further sketch the continuity both of open-minded perusals of disputed parapsychological problems by sometimes eminent scientists, and the overwhelming success of polemics over impartial tests in determining their scientific status.

Concluding with instances of widely publicised attacks by popular science writers on present-day unorthodox scientists, I hope to initiate the discussion of a basic strategic question: How candid should and can we be when challenging naive but widespread notions of the intrinsic impartiality of scientific practice without politically jeopardising the project of integrated HPS?

Erman Sözüdoğru, UCL

History of Neglected Tropical Diseases: a Case for Epistemic Pluralism

In this paper, I argue that drug discovery involves multiple systems of practices. My aim in doing this is to articulate a normative account of epistemic pluralism, which we might define as the philosophical thesis that no single system of practices can explore and explain all aspects of some phenomena of interest.

I use the case of neglected tropical diseases (NTDs), which are a group of infections that are under-researched by the pharmaceutical industry due to their low profit potential. More specifically, I concentrate on Human African Trypanosomiasis (HAT) research. HAT is a parasitic infection that is prevalent in sub-Saharan Africa, affecting the extreme poor in rural areas. Due to the economic context in the areas where HAT is prevalent, it has been neglected by market-driven research.

HAT research takes place in public-private partnerships (PPPs), which are global networks of academic, industrial, governmental and nongovernmental stakeholders. These PPP networks are an exemplary case study of the interactions between systems of practices, which interact in order to investigate different aspects of the phenomenon - for instance, medicinal chemists’ work is informed by the work of structural biologists, which is in turn informed by the work of molecular biologists. Plurality of practices in this case is essential, since none of these systems is capable of finding a desired cure for HAT alone. Moreover, HAT research allows us to develop the normative aspect of pluralism, by demonstrating the benefits of pluralism relative to the aims of research. The aim here is to find an adequate cure to eradicate HAT, an aim shaped by epistemic values (linked to furthering knowledge, understanding and explaining the phenomena) and non-epistemic values (linked to the broader social and historical context). The PPPs undertaking HAT research determine the overall normative values that guide the process, allowing us to underline how non-epistemic values linked to socio-economic conditions in disease-endemic regions, or values linked to economic interests, play a significant role in shaping the overarching values and thereby influence scientific practice.

Here, each system of practice contributes towards aims that are defined by both epistemic and non-epistemic values. Moreover, the multiplicity of systems of practices in this kind of scientific inquiry is non-eliminable and it is beneficial to the aims of research.

Mauricio Suárez, Madrid
The Origin of Quantum Propensities: Henry Margenau’s ‘Latency’ School

In this paper I undertake a detailed historical examination of the origin of the concept of quantum latency in the works of Henry Margenau in the 1950s. Margenau was a highly influential and respected member of the theoretical physics community after the war. He was not only a prolific writer but a tireless organiser and a consummate teacher. He was also a devoted advocate of the philosophy of science and of physics. In fact Margenau played a key role in establishing philosophy of physics as a discipline. He founded the journal Foundations of Physics, and was instrumental in setting up the Philosophy of Science Association. As the Yale Professor in Theoretical Physics he was able to attract a large number of students to the foundations and philosophy of physics over the years. Many of these students worked out the details of Margenau’s ideas, including quantum latencies, during the 1960s in particular.

I focus on the social and intellectual roots and influences of Margenau’s latency school. Margenau himself had philosophical training; he had studied the writings of Cassirer and the Marburg school, and considered himself a Neo-Kantian. Because of his institutional role as a Mersenne for philosophy of physics, Margenau came into regular contact with some of the most important philosophers and physicists of his time. In particular I consider Margenau’s engagement with the work of both Carnap and Heisenberg, precisely at a time when these two thinkers were developing ‘dispositional’ accounts of quantum properties (Heisenberg) and of theoretical terms in general (Carnap). I also consider the possible influence of Margenau’s school upon Popper and his followers in the 1960s – in particular I ask whether Popper’s propensities may be seen as a critical rationalist response to Margenau’s neo-Kantian latencies.

Aleksandra Traykova, Durham

Anti-vaccination rhetoric then and now – logical fallacies gone viral

Since the beginning of widespread vaccination in 19th-century England, national schemes for compulsory immunization have gradually gained worldwide acceptance as a medically proven and cost-effective strategy for reducing infection transmission rates. Diseases which had previously been a major cause of death and disability among entire populations were declared eradicated (as was the case with smallpox in 1980, according to the WHO) or at the very least contained (like poliomyelitis), at just a fraction of the cost that would otherwise have gone towards cures. Despite the efficacy of vaccination programmes throughout the world, there are still skeptics who question their purpose, safety and health benefits, or suggest that other factors – like better nutrition or improved hygiene – should be credited for the sharp drop in infection rates. The original English anti-vaccination movement was formed in response to a series of vaccination acts passed between 1840 and 1853, but at the present moment there are several similar leagues in Europe, Australia and the US. These vaccine opponents have made it their mission to spread resistance against immunization practices and to raise awareness of the ‘dangers’ involved in them, sometimes relying on questionable methods to justify their claims. This paper does not attempt to critique the scientific basis of said claims or disprove them; instead it will show how their logical fallacies could be exposed and investigated through the combined power of tools like comparative historical research and rhetorical analysis. The goal is to show that the movement’s commonly deployed rhetorical tactics (as observed in press conferences, website materials and interviews from recent years) and the ideologies of their historical predecessors are all underpinned by the same set of fallacious arguments: the argument from precedent, fallacious appeals to nature, the argument from financial (dis)interest, and the argument from autonomy.

Catherine Wilson, York

Experimental and Speculative Revisited: What was Behind the Rejection of "Hypotheses"?

After the introduction of Cartesianism into England there was a flurry of condemnation of hypotheses, culminating in Newton's famous 'non fingo' declaration. On the one hand, the condemnation is seen as an important step in the evolution of an experimental culture that could (ideally) prove empirical claims. On the other hand, the condemnation seems to make no sense, since Newton himself flings hypotheses (in our sense) about right and left. I will argue that the condemnation was directed exclusively against those suspected of Epicureanism, i.e. of commitment to the self-organisation of the universe and the sufficiency of purely corpuscularian explanations.