In one of his essays in Philosophy and Social Hope, Richard Rorty noted the tendency of scientists to assume that they are best positioned to adjudicate questions in the philosophy of science. As Rorty compellingly argued, they are not. I was reminded of this while reading “Can Darwinism Improve Binghamton?” in The New York Times Book Review.
The author starts off this way:
My undergraduate students, especially those bound for medical school, often ask why they have to study evolution. It won’t cure disease, and really, how useful is evolution to the average person? My response is that while evolutionary biology can explain, for example, the origin of antibiotic-resistant bacteria, we shouldn’t see evolution as a cure for human woes. Its value is explanatory: to tell us how, when and why we got here (by “we,” I mean “every organism”) and to show us how all species are related. In the end, evolution is the greatest tale of all, for it’s true.
This is, in my view, quite misguided. Without usefulness, it soon becomes impossible to locate any measure by which to evaluate the truth of an explanation. This, to me (and, I believe, to Rorty), is the point of postmodern philosophy. Accepting that point doesn’t require accepting everything that falls under the heading of “postmodernism”; we can dodge the rest by embracing pragmatism. While I recommend Rorty to anyone looking to read more about science and pragmatism, I have occasionally come across succinct yet poignant statements on the subject, two of which I’d like to share here.
Economist Joseph Stiglitz put it simply but astutely in a recent paper: “Prediction is the test of a scientific theory.”
It’s as simple as that. Conservative writer and entrepreneur Jim Manzi had an equally useful (get it?) take on science and pragmatism a while back.
He wrote: “I claim that the purpose of science is to create useful, reliable, non-obvious predictive rules.”
We would do well to heed these words.*
*Yes, that’s circular reasoning, if you really think about it. But as Rorty might say, what’s the alternative?
I know I’ve already written twice about the Mercier/Sperber argumentation research, but this NYT piece brings to mind one more point to make. Mercier and Sperber argue that we evolved our capacity for reason largely to convince one another. They make the related point that reasoning is a social rather than an individual process. Regardless of whether they’re right about the evolutionary roots of reasoning, the latter point is critical to discussions of bias. The NYT piece talks about the research with regard to the peer review process:
Doesn’t the ideal of scientific reasoning call for pure, dispassionate curiosity? Doesn’t it positively shun the ego-driven desire to prevail over our critics and the prejudicial urge to support our social values (like opposition to the death penalty)?
Perhaps not. Some academics have recently suggested that a scientist’s pigheadedness and social prejudices can peacefully coexist with — and may even facilitate — the pursuit of scientific knowledge…
…It’s salvation of a kind: our apparently irrational quirks start to make sense when we think of reasoning as serving the purpose of persuading others to accept our point of view. And by way of positive side effect, these heated social interactions, when they occur within a scientific community, can lead to the discovery of the truth.
The point I want to make here is simple and perhaps even obvious. As science illuminates various shortcomings in our ability to reason, our best hope is to design better social processes to account for them. We already do this: from the courtroom to the newsroom, we structure our intellectual processes to help overcome our individual shortcomings. With increasingly sophisticated research into how we think, and with the digital public sphere providing both massive amounts of data on how we communicate and the opportunity to constantly redesign our media environment, we have the chance to build processes that let us overcome our individual faults and reason better together.
Thanks to Edge, I posted about the new research into the evolutionary basis of reason and argument well before The New York Times picked it up. But here, as a follow-up to that NYT piece, is another post that clarifies the authors’ position. Turns out it’s right in line with what I expected. Here’s what I wrote in my previous post:
The first question that comes to mind for me is this: Why, if reasoning isn’t based at least in part on developing correct beliefs, would reasons be useful for convincing others? In other words, if I’m not using reasoning in the traditional Enlightenment sense, then why would I treat reasons as useful input when someone else tries to convince me? Reasons would seem to be more useful tools for persuasion in a world where individuals were also using them as tools for obtaining correct beliefs.
I take that to be what the authors are saying in the NYT follow-up:
We do not claim that reasoning has nothing to do with the truth. We claim that reasoning did not evolve to allow the lone reasoner to find the truth. We think it evolved to argue. But arguing is not only about trying to convince other people; it’s also about listening to their arguments. So reasoning is two-sided. On the one hand, it is used to produce arguments. Here its goal is to convince people. Accordingly, it displays a strong confirmation bias — what people see as the “rhetoric” side of reasoning. On the other hand, reasoning is also used to evaluate arguments. Here its goal is to tease out good arguments from bad ones so as to accept warranted conclusions and, if things go well, get better beliefs and make better decisions in the end.
Also, apologies for the light blogging lately. I’ve been writing a bunch about clean energy over the last few days at the NECEC blog, so if you’re desperate for more of my writing, you’ll find it over there.