Friday, 5 January 2018

On Science and Nonscience

Today, I’m going to write about science. This won’t be a technical paper. It won’t be full of numbers or equations. Instead, I’m going to look at science from the generalist point of view. I’m going to ask questions like: What is science? How useful is it to the making of decisions, including political ones? And, how can we tell good science from bad?

What is science?

According to Webster’s, science is: “knowledge or a system of knowledge covering general truths or the operation of general laws.” The way I see it, science is a method of discovering truths. For the idea to make any sense at all, though, we need first to agree that scientific truth is objective. Now, a particular truth or fact may of course be unknown, or poorly understood, or wrongly apprehended, at a particular time. But in science, one man’s truth must be the same as another’s.

Those of certain philosophical tendencies, such as postmodernism or cultural relativism, like to pooh-pooh science. They dispute its objectivity and neutrality. They point out that scientists have their own agendas, and that the scientific establishment is politicized. But I think they bark up the wrong tree. As criticisms of how science is actually conducted by some who call themselves scientists, their points may have merit. But they do not tarnish one whit the idea of science itself.

The scientific method

Properly done, science is conducted according to a procedure known as the scientific method. The details may vary a little from one discipline to another; but the basic scheme is the same. Here’s a brief outline of the steps within the scientific method:

  1. Pose a question, to which you want to find an answer.
  2. Do background research on that question.
  3. Construct a hypothesis. This is a statement, giving a possible answer to your question. In some circumstances, you may want to take someone else’s hypothesis for re-testing.
  4. Develop testable predictions of your hypothesis. For example: “If my hypothesis is true, then when X happens, Y will happen more often than it does when X doesn’t happen.”
  5. For each prediction, formulate an appropriate null hypothesis, against which you will test your prediction. For example: “X doesn’t influence whether or not Y happens.”
  6. Test the predictions against their null hypotheses by experiment or observation. If you need to use someone else’s data as part of this, you must first check the validity of their data.
  7. Collect your results, and check they make sense. If not, troubleshoot.
  8. Analyze your results and draw conclusions. This may require the use of statistical techniques.
  9. Repeat for each of the predictions of your hypothesis.
  10. If the results wholly or partially negate your hypothesis, modify your hypothesis and repeat. In extreme cases, you may need to modify the original question, too.
  11. If the results back up your hypothesis, that strengthens your hypothesis.
  12. If negative results falsify your hypothesis, that weakens or destroys the hypothesis.
I see the construction of the null hypothesis, which is to be upheld when a prediction fails, as one of the most important steps in this procedure. I think of the null hypothesis in science as somewhat akin to the presumption of innocence in criminal law!
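The interplay of steps 4–6 can be sketched concretely. Taking the hypothetical prediction above — "when X happens, Y will happen more often than when X doesn't" — a simple permutation test pits the observed data against the null hypothesis "X doesn't influence Y." All the numbers here are invented for illustration:

```python
import random

# Hypothetical observations: 1 means Y happened, 0 means it didn't.
with_x    = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # trials where X happened
without_x = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # trials where X didn't

def rate(xs):
    return sum(xs) / len(xs)

# Observed difference in how often Y happens with vs. without X.
observed = rate(with_x) - rate(without_x)

# Null hypothesis: X doesn't influence Y, so the group labels are
# interchangeable. Shuffle the labels many times, and count how often
# chance alone produces a difference at least as large as observed.
random.seed(0)
pooled = with_x + without_x
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = rate(pooled[:len(with_x)]) - rate(pooled[len(with_x):])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.3f}")
# A small p-value means the null hypothesis struggles to explain the
# data; a large one means the prediction fails and the null stands.
```

The null hypothesis is "presumed innocent": only if chance alone is very unlikely to produce the observed effect does the prediction count as confirmed.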

Rules for the good conduct of science

It’s very easy to get science wrong. In fact, it’s even easier than getting mathematics wrong. And, having been trained as a mathematician, I know well how easy that is! In science, there’s always a possibility of error in your measurements, or in your statistics, or in your deductions. Or of insufficiently rigorous testing or sampling. Or of bias, whether conscious or unconscious.

To minimize the chances of getting science wrong, and to enable others to build on its results, there are a number of rules of conduct which scientists are expected to follow. Here is a list of some of them:

  1. Any hypothesis that is put forward must be falsifiable. If there’s no way to disprove a hypothesis, it isn’t science.
  2. Data must not be doctored. Any necessary adjustments to raw data, and the reasoning behind them, must be fully and clearly documented.
  3. Data must not be cherry picked to achieve a result. Data that is valid, but goes against a desired result, must not be dropped.
  4. Graphs or similar devices must not be used to obfuscate or to mislead.
  5. Enough information must be supplied to enable others to replicate the work if they wish.
  6. Scientists must be willing to share their data. And code, too, when code is involved.
  7. Supplementary information, such as raw data, must be fully and promptly archived.
  8. It is important to identify and quantify the error bars on results: for example, by stating the range within which there’s a 95% chance that the value being measured lies.

  9. Uncertainties are important, too. They must be clearly identified and, if possible, estimated.
  10. Above all, the conduct of science must be honest and unbiased. In a nutshell: If it isn’t honest, it isn’t science. It’s nonscience (rhymes with conscience).
A failure to obey one or more of these rules of conduct doesn’t necessarily mean that the science is bad. However, it does raise a red flag; particularly in cases where there may be a suspicion of bias or dishonesty. And if a sufficiently skilled person, with sufficient time to spare, doesn’t have enough information to check the validity of a scientific paper, or to attempt to replicate the work it describes, then there’s a very good chance the science in it is bad.
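Rule 8 can be illustrated with the standard normal-approximation confidence interval for a mean. This is a generic sketch, not tied to any particular experiment; the measurements are made up:

```python
import math
import statistics

# Hypothetical repeated measurements of some quantity.
measurements = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2, 10.0]

n = len(measurements)
mean = statistics.mean(measurements)
# Sample standard deviation, then the standard error of the mean.
stderr = statistics.stdev(measurements) / math.sqrt(n)

# Under a normal approximation, ~95% of the probability lies within
# 1.96 standard errors of the mean. (A t-multiplier would give a
# slightly wider interval for so few samples.)
half_width = 1.96 * stderr
print(f"{mean:.2f} ± {half_width:.2f} (95% CI)")
```

Reporting the half-width alongside the central value is exactly the kind of disclosure that lets others judge whether a claimed effect is bigger than the noise.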

Peer review and spear review

In the world of scientific journals, there is a quality control mechanism known as peer review. The idea is that a number of independent experts scrutinize a proposed paper, check its correctness and its utility, and suggest changes where necessary. But peer review doesn’t always catch issues with papers before they are published. This is a particular problem when the reviewers work or have worked closely with the authors, and share their conceptual framework. Indeed, where a group of experts on a subject have formed a clique, it’s easy for groupthink to develop. In such a situation, only those ideas with which clique members are comfortable are likely to pass muster and get published.

In recent times, there has been a great increase in informal papers on scientific blogs. The usual procedure in these circumstances is one I call “spear review,” in which commenters provide comments in response to a blog article. It does have some drawbacks. One is that not all the commenters actually have much, if any, expertise in the subject they are commenting on. Another is that some commenters are biased or trolling. A third is that the process can often resemble a pack of dogs chasing a cat. But when it’s done by people who are trying to be objective and helpful, it’s very useful. Particularly in determining whether a scientific idea is good enough to be worth trying to publish through more formal channels.

Paradigms and consensus

At any time and in any area of science, there is almost always a particular paradigm. This is a framework of concepts, thoughts and procedures, within which work in that area is generally confined. Past examples are Ptolemy’s earth-centred model of the universe, the phlogiston theory of combustion, and the “luminiferous aether” which was said to carry light waves.

Within such a paradigm, there is usually some kind of consensus. Hypotheses, which have been repeatedly confirmed, can aggregate into theories; and such theories can be agreed on by all or most practitioners in the area. However, in an area of science which is advancing, there will always be parts that are disputed. There will be different hypotheses, and different interpretations of the results of experiments or observations. Moreover, there will be parts on the “cutting edge,” which are still under investigation. And in any area of science, there is always a possibility of a previously unknown factor being discovered.

Thus, however mature the science in an area may be, it can never truly be said to be “settled.” There is always a possibility of altering or overturning the consensus in an area of science, or even of overturning the paradigm and creating a new one. For example, Galileo’s telescope observations overturned Ptolemy’s geocentric model. Michelson and Morley’s measurements of the speed of light in different directions overturned the idea of the aether. And Einstein’s theories of relativity provided a more accurate replacement for Newton’s laws on the dynamics of bodies in motion.

The example of Einstein, who was a patent clerk when he published his ideas on special relativity and the equivalence of matter and energy, shows up another important feature of science. In science, it doesn’t matter who you are. You don’t need to be a credentialled “scientist” to contribute to science. All that matters is whether or not your science is right.

And the converse applies, too. In science, even the acknowledged experts aren’t always right. As Steven Weinberg put it: “An expert is a person who avoids the small errors while sweeping on to the grand fallacy.” In fact, it’s worse than that. Experts in a paradigm often tend to form a clique to defend that paradigm, and may ignore or even try to suppress ideas contrary to it. This is most of all the case when their livelihoods depend on the paradigm being maintained.

Science and decision making

Science is useful in making many decisions. Engineers, for example, use it all the time. They depend on the science they use to make their design decisions being right. If it isn’t, their machines won’t work, with potentially disastrous consequences.

A relatively recent phenomenon is to attempt to apply science to political decisions. If difficult decisions must be made, there is a lot to be said for using science in making and justifying them where appropriate. As climate scientist Hans von Storch has put it: “Science is supposed to provide coldly, impassionately, knowledge about the options of policymaking.” But he added the caveat: “There should be a separation between scientific analysis and political decision making.” In other words, to be useful in any political context, science must be completely non-politicized.

Since in science one man’s truth is the same as another’s, it’s hard to argue against a decision that has been honestly made on the basis of accurate, unbiased science. If, of course, the science really is accurate and unbiased; and the decision has been made honestly. Those are big, big Ifs.

Science, properly and honestly done, can supply data to the “business case” for a decision. In particular, it can help to estimate the likely costs and benefits of a range of actions being considered. But this can only work when the science is completely honest, accurate and unbiased, and the error bars and other uncertainties are fully accounted for. For when it comes to weighing costs against benefits, as every mathematician knows, subtracting one uncertain number from another of similar size can leave a difference whose relative uncertainty is orders of magnitude larger than that of the inputs. Even the sign of the result may be unclear. In such a case, that piece of science is useless as a guide to the decision.
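The point about subtraction can be made concrete. For two independent estimates, standard uncertainties combine in quadrature; the cost and benefit figures below are invented purely for illustration:

```python
import math

# Hypothetical estimates, each with ± one standard uncertainty.
benefit, benefit_unc = 105.0, 10.0   # known to roughly 10%
cost,    cost_unc    = 100.0, 10.0   # known to roughly 10%

net = benefit - cost
# Independent uncertainties add in quadrature.
net_unc = math.sqrt(benefit_unc**2 + cost_unc**2)

print(f"net benefit: {net:.1f} ± {net_unc:.1f}")
print(f"relative uncertainty: {net_unc / net:.0%}")
# Each input was known to ~10%, yet the net benefit carries a
# relative uncertainty of nearly 300% -- even its sign is in doubt.
```

Two quantities each known to ten percent yield a difference that could plausibly be strongly positive, near zero, or negative: exactly the situation in which the numbers cannot decide the question.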

Politics and science

There are several cases from the past, in which those in political power have rejected good science; or they have been negatively influenced by, or even driven by, bad science. Galileo’s persecution at the hands of the Catholic church is one case in point.

Another example is provided by Lysenkoism in Soviet Russia. The paradigm that the methods of Comrade Lysenko radically improved plant yields became so politically strong, that those who dared to question it were fired from their jobs, imprisoned or even executed.

And even in the West, the shameful misuse of science is not unknown, as shown by the Eugenics movement. This movement began in the early 20th century, when genetics as a science was in its infancy. Eugenics became a respected academic discipline at many universities, particularly in the USA. This was despite the whole idea being (wrongly) based on genetic determinism, if not also on racism.

The eugenics agenda re-defined moral worth in terms of genetic fitness. And it allowed doctors to decide who was, in their view, fit to reproduce and who was not. Moreover, this agenda was actively supported by the mainstream scientific establishment. And it numbered among its supporters, in the UK alone, prime ministers Neville Chamberlain and Winston Churchill, economist John Maynard Keynes, and architect of the welfare state William Beveridge. The results? Tens of thousands of people forcibly sterilized in the USA, and thousands in Canada too. Not to mention the hundreds of thousands who suffered when the Nazis got their hands on the idea.

To sum up

Science is a method of discovering truths, using a procedure called the scientific method.

There are a number of rules for the good conduct of science. These aim to enable others to check the validity of, and to build on, the work of scientists. Failure to adhere to these rules may well be a sign of bad science. And the conduct of science must always be honest and unbiased. If it isn’t honest, it isn’t science; it’s nonscience.

Peer review aims to improve the quality of science. But it doesn’t always work, particularly when a clique has formed.

Most of the time, each area of science operates within its own current framework or paradigm, and there is a level of consensus among scientists in the area. But paradigms can be overturned. And importantly, in science, it doesn’t matter who you are. All that matters is whether or not you’re right.

Science can be helpful in making decisions, even political ones. But any science to be used in such a context must be completely honest, accurate, unbiased and non-politicized. And the record of the politically powerful in matters of science is, historically, not a good one.
