The title’s clickbait, but I mean it: science, even physics, isn’t objective, and, more to the point, it can’t be.
There are a lot of assumptions to unpack there (for instance, why do people think physics is the most objective science?), but let’s start with the obvious one: the assumption that science, whatever it is, is, in fact, objective, whatever that is.
I get into this fight a lot. I’ve told a roomful of physicists at parties that physics isn’t objective (generally, that gets you an invitation to their next party—physicists are cool like that). I’ve told my best friend the ichthyologist that what he does isn’t objective science…and it’s most certainly science. His response? The Black Cat Analogy.
Is any of this controversial among educated folk these days? Not really. I mean, he knew it would be with me—I’m an ex-metaphysician who occasionally puts on my black robes and plays around with religious studies, he’s a scientist—but, for most people, science is real, philosophy is not, and TMBG is just generally awesome.
Now, I’m not going to dispute that last point, but let’s talk about how science is actually done, not how people who don’t do science (e.g., most philosophers who think philosophy is a branch of science) think it’s done.
Is science really like turning on a flashlight? Or is it more like taking some cobbled-together bits of spare equipment you had lying around, duct-taping them together (seeing as your budget had been cut for the third year in a row) into a narrow-focused light-emitting device, and turning up something furry that has a chance of being feline (it has fur! It’s smallish! The lit review says that’s what cats are like, at least in the majority of instances we surveyed!) before offering up an article for peer review that calls for more investigation, in the hopes your grant gets renewed?
No, no it’s not. You can’t even afford duct tape half the time, because science is done by grad students who can afford either to repair their equipment out of their own pockets or to buy ramen for the next two weeks, but not both. Contrary to what philosophers and the general public seem to think, science is not always done in whitewashed labs where everything turns out perfectly; on a good day, it’s done in the field on a shoestring budget, where the environment interferes. Roads get built over your sample site. Gels fail to gel. Your Western blot goes south. The GPS coordinates you were given turn out to be on private land, and the owner has a very big gun he’s pointing right at you.
Sometimes, investigating the world means you have to deal with the problems of the world. The method that would answer the question you wanted to answer just isn’t going to work. What do you do? Get creative, of course! Find a different question to answer that fits the techniques you have at hand, not the ones you wish you had!
And, truth be told, that’s often not a bad solution. It’s just not one that the “lab coat” view of science anticipates. We like to think of scientists as removed from this messy world we live in, well-fed and caffeinated in comfortable, air-conditioned offices, making discoveries that will change our world, when, in reality, they’re grad students a few dollars away from being homeless or starving, working in greenbriar and insane heat with outdated lab equipment that only works if a butterfly in South America sneezes three seconds after they turn it on.
There is no mystical Scienceland where things that work theoretically actually work. While I think this is where most of the myth of “objective physics” comes from (physicists, after all, work either with theoretical models or in air-conditioned labs tweaking atom smashers, right?), the fact is that even physicists have to work with nasty things like gravity and background radiation, to say nothing of hastily constructed instruments that should work so long as a bit of stray humidity didn’t get into the wiring…say, the mid-Atlantic during hurricane season’s pretty dry, right?
Of course, the philosophers would like you to believe that, as our instruments get better and our grad students get more funding (AS IF!), our science will become more accurate, since, after all, the only inaccuracy in science is because of instrument error, right?
Tell that to the fish that refuse to follow a precise linear relationship between size and whatever independent variable you’re testing. Some fish may have come from upstream, some fish may have had a bad week in their past and be a bit undersized, some may just be outliers in the genetic lottery and not fit within whatever perfect model you’d like. The fact is, nature doesn’t live in Scienceland either, and thus doesn’t follow precise and easily discovered rules. You may want a perfect curve, a simple equation along which all your data points will fall, but that’s not how reality works. You get a range into which things fall, not a perfect equation that explains everything and makes you “master and possessor of nature.”
Speaking of perfect equations…
Right. This is where things get really nuts. So, let’s say we do get to perform our experiments in Scienceland: our grad students are well-funded, our equipment actually works, we don’t order the bad batch of electrophoresis gel, the electroshocker doesn’t come un-duct-taped, the stats software gives you meaningful results the first time you try to program it, the lichenologist doesn’t have to make do with chemical tests that remember Hitler, and, most importantly (and improbably!), everything actually works like it should.
Yeah, still no objective truth. It’s illustration time.
Let’s take the following data plot from an experiment—who cares what it supposedly is, let’s call it lichen colony size vs. substrate pH:
Now, we intuitively think that the best scientific model is the one that best fits the data—in other words, the equation we use to explain this data plot should account for every point we’ve plotted. So, given that assumption, the line that best fits the data might look something like this:
But, if you look at that line, it doesn’t actually go through any of the data points! It’s accounting for exactly none of the data we’ve collected, especially those two outliers, while the first curve perfectly accounts for all of them! Clearly, if fit to data is the sole (or even single most important) criterion for making models to explain data sets, we should stick with the first model, not the obviously flawed second one.
But wait; our other field crew just got back from sampling another set of lichens at a site near ours (turns out that landowners and their shotguns aren’t so unfriendly if you bring a case of beer and ask nicely) and has some more data for us! Now, let’s keep our two lines, but this time add in the extra data:
Our first graph is in serious trouble; if we’re going to make it fit perfectly to the data, we’re going to have to completely redraw it. Quite frankly, it doesn’t look like it has any relation to our data plot at all at this point. Graph #2, however, has actually gotten more accurate, despite there being one or two more outliers; in fact, I’d be willing to bet that if we sent out a third field crew, it’d only fit the data more closely. While I’m pretty sure that the line only goes through one or two points (and none of those perfectly), that line is only going to become more and more accurate the more we keep sampling and plotting. On the whole, it seems that we should keep it, rather than the line that perfectly fits about half the data points.
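The two graphs’ fates can be sketched in a few lines of Python. The numbers here are made up purely for illustration: a first batch of points scattered around a straight line, a “perfect” curve (an interpolating polynomial) forced through every one of them, and then a second field crew’s batch to test both models against.

```python
# Overfitting sketch with invented data: a degree-5 polynomial that
# fits batch 1 perfectly vs. a least-squares line that fits nobody
# perfectly, both judged on batch 2.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def interpolate(xs, ys, x):
    """Lagrange polynomial through every (x, y) pair: zero error on
    the points it was built from, no promises anywhere else."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Batch 1: roughly y = 2x + 1, plus fieldwork noise.
x1 = [0, 1, 2, 3, 4, 5]
y1 = [1.3, 2.6, 5.5, 6.8, 9.4, 10.7]

# Batch 2: the second crew's points, same underlying trend.
x2 = [0.5, 1.5, 2.5, 3.5, 4.5]
y2 = [2.1, 3.9, 6.2, 7.9, 10.1]

m, b = fit_line(x1, y1)
line_mse = sum((m * x + b - y) ** 2 for x, y in zip(x2, y2)) / len(x2)
poly_mse = sum((interpolate(x1, y1, x) - y) ** 2
               for x, y in zip(x2, y2)) / len(x2)

# The curve that accounted for all of batch 1 predicts batch 2 far
# worse than the humble line.
print(line_mse < poly_mse)
```

Run it and the line wins on the new data by a wide margin, exactly the pattern in the graphs: perfect fit to the first sample is no guarantee of predictive power.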
This is a lesson one of my professors at Maryland used to introduce the Akaike Information Criterion in a class on, of all things, the philosophy of voting mechanisms.*** To make a long story short, fit to data is only one of many virtues we look for in a scientific model; we also want one to have predictive power (the straight line becomes more accurate over time), to be mathematically simple (y=mx+b, like the first-degree equation on the right, is a lot easier to work with than the unwieldy 26-odd-degree equation of the plot on the left…especially if you’re doing preliminary plots out in the wilderness to make sure your fieldwork is even remotely correct), to be parsimonious (you shouldn’t have to invoke quantum physics and string theory to explain why fish swim if you don’t have to), and, if at all possible, to be useful—though that’s usually the first one to be discarded.*†
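For the curious, the least-squares form of Akaike’s criterion is simple enough to compute by hand, and it makes the fit-versus-simplicity tradeoff concrete. The numbers below are invented purely to illustrate how the parameter penalty can overrule raw fit; nothing here comes from real data.

```python
import math

def aic_least_squares(n, rss, k):
    # AIC for a least-squares fit with Gaussian errors, constant
    # terms dropped: n * ln(RSS / n) + 2k.  Lower is better; the
    # 2k term is the price a model pays for each parameter.
    return n * math.log(rss / n) + 2 * k

# Invented numbers: 30 data points; a straight line (2 parameters)
# that misses most points, vs. a 26th-degree polynomial
# (27 coefficients) that threads nearly all of them.
aic_line = aic_least_squares(n=30, rss=6.0, k=2)
aic_poly = aic_least_squares(n=30, rss=2.0, k=27)

# The line's two-parameter simplicity outweighs the polynomial's
# tighter fit.
print(aic_line < aic_poly)  # True
```

Note that the polynomial’s residual sum of squares is three times smaller, and it still loses: AIC is one formal way of weighing fit against simplicity, not a way of avoiding the weighing.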
So, how do you decide which value to prejudice over the others? Why do you give more weight to one variable than another? Why use Akaike rather than Bayes? Why is the critical p value p&lt;0.05, rather than p≤0.05 or p&lt;0.045? How much weight should you give to simplicity versus parsimony versus fit, and at what point do you draw the line?
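To see how much hangs on that choice of cutoff, take a single hypothetical result, p = 0.047 (a number picked purely for illustration), and run it past three perfectly defensible conventions:

```python
p = 0.047  # one hypothetical result, chosen for illustration

# Same data, three defensible significance conventions,
# two different verdicts.
verdicts = {
    "p < 0.05":  p < 0.05,
    "p <= 0.05": p <= 0.05,
    "p < 0.045": p < 0.045,
}
print(verdicts)
```

Under the first two conventions the result is “significant”; under the third it isn’t. Nothing about the data changed, only the judgment call about where to draw the line.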
Simply put, it’s a judgment call—and the environment in which scientists make those judgments is one created by living in a certain form-of-life, by participating in a certain kind of language-game with rules, a discursive field that conditions scientists toward making certain decisions and judgments over others. This is a sleeping giant I’m trying to tiptoe past for the time being, but the fact that science involves judgment calls, prejudicing certain values over others in certain situations, is inescapable. What is more, these values are created and conditioned by living in a society with its own preferences and prejudices.
This creates problems for those who hope for an objective science. For starters, our society has some pretty nasty prejudices; is it just perhaps possible that some of them could influence our supposedly objective science and how it constructs models?
Wouldn’t you know it, but there’s a really good paper on exactly that topic (JSTOR, from Hypatia, if you’ve got access) with about the worst pun-bearing title ever. To summarize, it turns out that the “objective” science of anatomy has often encoded and responded to the social prejudices and (mis)conceptions of its day and broader culture. If a society assumes X as common sense, or doesn’t think there’s any reason to doubt X, then scientists are also going to assume X, and are going to gather data, formulate models, and otherwise do what they do best always assuming X.
So how do you escape from X? Remove scientists from any sort of society whatsoever? Claim that these are only problems inasmuch as it’s humans who are doing science? Point these things out at the beginning of the seminar, claim that empirical scientific models trump a priori truths midway through, and, by the end, claim that the proof is in the pudding of science’s triumph, as it has made airplanes fly and the Internet run?† Say that science is wrong, scientists are biased, there’s clearly a scientific conspiracy, and evolution and global warming are hoaxes?
I’m not too keen on any of these approaches, especially the last two; I’ve encountered enough of those both in Leiter’s work on the philosophy of law and growing up in Oklahoma. As for the first two…yeah, if you want to show me nonhuman scientists who don’t live in any sort of society at all (pro hint: sharing your results in peer-reviewed journals is both essential to the current model of scientific activity and a form of engagement in a socially constructed and regulated activity), I’d love to see it.†*
I think there’s another way forward—a better way, actually, one that might actually be beneficial for both scientists and science. It’d also get more to the heart of why we value “objective” science.
Next time: why science shouldn’t be objective.
*Yes, it’s the line that best fits the data. No, it’s not a “line of best fit.” That term’s a misnomer, especially given what you’ve probably already read if you’re down at the footnotes.
**Not that you should be using Excel’s default functions in science classes anyway, especially not to do anything that even looks remotely like statistics. That’s what R is for. And SAS. And sobbing, because SAS, while insanely powerful, is insanely evil.
***Short explanation: voting mechanisms are actually just ways of aggregating preferences in ways that each have their own advantages and disadvantages. Scientific models are ways of aggregating data points in ways that have their own advantages and disadvantages. Looking at one way of aggregating individual things in certain ways might have useful applications for other places we need to aggregate certain things in certain ways.
*†As Sherlock Holmes realized, there’s no value for someone living on Earth to know that we go around the Sun, and not vice-versa; in fact, if you’re trying to tell time by the sun or Big Dipper, navigate by the stars, or aim a telescope, it’s more helpful to assume geocentrism. We may know that it’s technically wrong to think of the Sun rising in the east when it’s actually relatively still and it’s the Earth that’s moving, but, from our Earthbound perspective, that’s what it looks like and what fits our Earthbound society and discourse best. Also, I’ve switched to quasi-Roman footnote markers. Aren’t they a nifty solution to losing count after six asterisks?
†I know more than a couple of engineers who would take issue with giving the scientists credit for their work. Note to everyone who thinks Our Modern World is solely due to science: yeah, it kinda is, but remember to thank the engineers who took those ideas and research articles and did something with ’em.
†*I’m pretty sure if you’ve found nonhumans doing science you should tell SETI before you tell me. While they might be stuck with these same problems as well—assuming, of course, that they have any concept of “science” remotely comparable to ours—discovering nonhuman life is much more important than settling issues relating to the sociology of scientific knowledge (SSK) on the Internet.