Monday 15 January 2018

On the Precautionary Principle

Today, I’m going to look at a mantra much trumpeted by environmentalists: the precautionary principle. I’ll seek to make a case that, since the early 1980s, this idea has been perverted to such an extent that the principle now has an effect all but opposite to its true intention. I’ll trace how this happened, and try to outline how we might fix the resulting mess.

What is the precautionary principle?

The precautionary principle, even in its original, pre-1980s form, is an elusive beast. There’s no generally agreed wording of it. But its essence can be summarized as “better safe than sorry,” or “look before you leap.” Though some – myself included – go further, and see it as akin to the Hippocratic oath for doctors: “First, do no harm.”

In this form, the principle is very sensible advice. Before they put a new product on the market, for example, sane business people will test it thoroughly to check it has no bad side-effects. If they don’t do this, and something goes wrong, they will face lawsuits, and perhaps worse.

But some wish to take the principle further. Today, it’s often interpreted to mean that if there’s a risk of something bad happening, particularly to the environment or to human, animal or plant health, then action should, or even must, be taken to avoid or to minimize that risk. And on this excuse, policies have been made that have imposed huge costs on all of us.

Risk

On examination, this new form of the principle doesn’t fit well with our common-sense ideas of how to deal with risk. For in thinking about risk, we recognize two kinds: risk to ourselves, and risk to others. As far as risk to ourselves goes, each of us must make our own decisions. We do it all the time; just about everything in life involves some degree of risk. We judge, rationally or otherwise, whether a particular risk is justified for us. And we decide either to take the risk, or not. For example, every time we go in a plane, there’s a risk it may crash and kill us. We weigh this up, consciously or not, against the gain we expect from making the journey. We look, and then we leap; or not. And most of us come out with the same decision: We get in that plane.

Today’s version of the precautionary principle is worse than useless in assessing risk to ourselves. For it would have us either avoid risks altogether, or focus on minimizing them. But a life without taking risks is, at best, the life of a vegetable. And a life spent focusing on risks is a paranoid one.

Risk to others is a more difficult subject. Sometimes our actions may have negative impacts on others; on their property, on their health, even in extreme cases on their very lives. Now, all individuals are responsible for the consequences of their actions to others, unless those actions were coerced. And it may be that in a particular case the harm an action causes to others exceeds what reasonable people will bear in a spirit of mutual tolerance. In such cases, in a sane world, we should be required to compensate those we have harmed. In environmental terms, that’s the basis of the idea of “polluter pays” – one with which I heartily agree.

There are, therefore, good reasons to invest in minimizing risk to others. I have already given the example of a company putting a new product on the market. In making decisions on such risks, particularly if the damage caused may be great, it makes sense to assess the risks, and their consequences and costs if things go wrong, as objectively as possible.

Rationally, we will invest in minimizing such a risk as long as the likely gain from the reduction of risk exceeds the cost involved in reducing it. Beyond that point, we have only two options: we either go ahead and face the consequences, or we scrap the whole thing. If we tried to use the precautionary principle as often interpreted today, however, we would have to spend ever more, without end, to allay ever less likely, or ever less serious, risks.
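
To make that stopping rule concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: one risk, a fixed damage figure, and a handful of mitigation options. The rational choice minimizes mitigation cost plus expected loss; spending past that point buys less in risk reduction than it costs.

```python
# Hypothetical figures only: one risk, a fixed damage if the bad event
# happens, and a few mitigation options. Expected loss = probability of
# the event times the damage it would cause.
damage = 1_000_000  # loss if the event happens

options = [
    # (mitigation cost, residual probability of the event)
    (0,       0.0100),
    (2_000,   0.0020),
    (10_000,  0.0005),
    (100_000, 0.0004),
]

for spend, prob in options:
    total = spend + prob * damage
    print(f"spend {spend:>7,}: expected total cost {total:>10,.0f}")

# The rational choice minimizes spend + expected loss; here it is the
# 2,000 option, not the most expensive one.
best_spend, best_prob = min(options, key=lambda o: o[0] + o[1] * damage)
print(f"rational choice: spend {best_spend:,}")
```

Note how the 100,000 option reduces the residual risk only marginally, yet costs fifty times more: exactly the regime in which the strong form of the principle would have us keep spending.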

Weak and strong precaution

In the original (weak) form of the precautionary principle, the burden of proof is always on the party wanting to make a change. If one party wishes to stop or restrict an activity of another party on the grounds that it causes risk to them or to others, it’s up to the accuser to show that the risk is real and significant. It’s also up to the accuser to show that the change they propose is both necessary and beneficial. And the cost effectiveness of any such change must be taken into account. For it isn’t reasonable to expect anyone to spend more on reducing a risk than the resulting gain to those whose exposure to the risk is reduced.

However, many environmentalists, politicians and regulators favour a stronger form of the principle. In its strong form, it can be used to regulate or prohibit any action that has actual or perceived risks, even if those risks cannot, or cannot yet, be accurately quantified. Further, the burden of proof is inverted, so that the proponent of an activity must prove that it is harmless. And the costs of preventative action are not to be taken into consideration. Thus the strong form of the precautionary principle is, simply put, a power grab and a tool for tyranny. A long, long way from “First, do no harm.”

A history of corruption

The perversion of the precautionary principle into an excuse for tyranny began in the early 1980s. And it was the United Nations that did it. The World Charter for Nature, a 1982 UN resolution, included an extreme formulation of the precautionary principle. It stated: “Activities which might have an impact on nature shall be controlled,” and “where potential adverse effects are not fully understood, the activities should not proceed.” The Charter was passed by 111 votes to 1, with 18 abstentions. The USA was the only country voting against.

Fast forward a decade, to the Rio Declaration of 1992. Principle 15 states: “In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

At first sight, this looks like a walk back towards the weak form of the principle. But there’s a catch; and a big one. If you don’t have a high degree of scientific certainty about the size and the likelihood of a problem, how can you assess whether or not a proposed counter-measure is cost-effective? You might (or might not) be able to estimate the costs of the measure accurately; but without high scientific certainty, you can’t accurately estimate the benefits to compare them with! And so, sneakily, the activists bypassed the cost effectiveness condition that was supposed to be built into the principle. The world’s politicians bought it; and they sold us all down the Rio.

Then there was the much touted Wingspread Declaration of 1998. This came out of a conference of academics, politicians and activists, convened by an organization, only formed in 1994, called the Science and Environmental Health Network, whose mission statement reads: “In service to communities, the Earth and future generations, the Science & Environmental Health Network forges Science, Ethics and Law into tools for Action.” An activist organization, no?

Here’s how they re-defined the principle: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof.” We’re back to the strong form, aren’t we? And worse. For when they talk of “the public,” they don’t mean us ordinary people. What they mean is that government shouldn’t have to bear the burden of proving its accusations; so we’re all guilty until proven innocent. Clever about-face, eh?

By 2002, the UK’s Inter-Departmental Liaison Group on Risk Assessment had perverted the principle still further. They saw its purpose as “to create an impetus to take a decision notwithstanding scientific uncertainty about the nature and extent of the risk.” And they wanted to invoke the principle “even if the likelihood of harm is remote.” They said, too, that “the precautionary principle carries a general presumption that the burden of proof shifts away from the regulator having to demonstrate potential for harm towards the hazard creator having to demonstrate an acceptable level of safety.” And they misused an aphorism attributed to Carl Sagan, saying: “‘Absence of evidence of risk’ should never be confused with, or taken as, ‘evidence of absence of risk’.” Bureaucrats seeking more power, no?

What does all this add up to? First, the activists have inverted the burden of proof, and require the defendants (that’s us, who want to do things like heat our homes and drive our cars) to prove a negative. Proving a negative is often impossible. How, for example, would you prove there are no fairies at the bottom of your garden? Second, they want the judge to rule, and to find us guilty, before all the evidence has been heard. And third, even if there’s no evidence at all that our activity causes any harm to anyone, they wouldn’t accept that fact as evidence! In essence they have decreed, in contradiction to the norm of presumption of innocence, that absence of evidence of guilt is not evidence of absence of guilt. We’ve been had, haven’t we?

Post-normal science

Beginning in about 1993, an idea called “post-normal science” started to take root in academe. This claimed to be a new way to use the outputs of science, in situations where standard methods of risk and cost-benefit analysis were insufficient. These situations were described as: “facts uncertain, values in dispute, stakes high and decisions urgent.”

But what post-normal science actually is, is a hard question to answer. It describes itself as a “problem solving strategy.” It seeks to replace the hard-edged objectivity of properly done science with something much woollier, which it calls “quality.” It seeks the involvement in the decision process of “all those who wish to participate in the resolution of the issue.” And through its concept of “extended facts,” it allows ideas which are not facts to be treated in the debate on an equal basis with facts.

In my view, post-normal science merely provides a way for glib, persuasive activists to direct policy debates towards outcomes which suit their agendas, even when the facts do not support those outcomes. It’s little different, either in intent or in effects, from the perversion of the precautionary principle into an activist tool. It’s not a form of science, but of nonscience. And it has been used to blur and to obfuscate the interface between science and policy.

Here are my own thoughts on the situations post-normal “science” claims to address. If facts are uncertain, you must put more effort into clarifying them. If values are in dispute, that increases the need for the decision to be, and to be seen to be, absolutely objective. For if not, those on the losing side of the debate will have good cause to become resentful. If stakes are high, that increases how much you should be willing to spend on making the decision as objective as possible. And if decisions are urgent, you must use the precautionary principle – properly. Look before you leap. First, do no harm. Don’t do anything that damages innocent people.

How to fix the problem

In my view, those who have perverted the precautionary principle, and have tried to discredit science and to replace it with nonscience, have acted in bad faith. To fix this, we first need to restore the precautionary principle in the public understanding to its proper meaning, of “Look before you leap,” or “First, do no harm.”

Second, we must seek to compensate those who have been unjustly harmed by bad policies made as a result of these perversions. If we accept the idea of “polluter pays” – and we should – then why should we not also accept the idea of “politicker pays?” Should we not hold those who have acted in bad faith in support of those policies responsible for the effects of what they did to us? Should we not require each of them to compensate us for their share of the bad things they did to us? And if any of them have committed offences such as perjury, should we not be seeking to prosecute them too?

To sum up

Over the last 35 years or so, the precautionary principle, “Look before you leap,” has been perverted out of all recognition. It no longer tallies with our common-sense ideas of risk. At the instigation of the United Nations and other activist groups, the principle has been re-cast into a strong form, which inverts the burden of proof and has become a tool for tyranny. There has also been a movement to obfuscate the interface between science and policy. These perversions have helped politicians to make environmental policies that have harmed all of us.

Friday 5 January 2018

On Science and Nonscience

Today, I’m going to write about science. This won’t be a technical paper. It won’t be full of numbers or equations. Instead, I’m going to look at science from the generalist point of view. I’m going to ask questions like: What is science? How useful is it to the making of decisions, including political ones? And, how can we tell good science from bad?

What is science?

According to Webster’s, science is: “knowledge or a system of knowledge covering general truths or the operation of general laws.” The way I see it, science is a method of discovering truths. For the idea to make any sense at all, though, we need first to agree that scientific truth is objective. Now, a particular truth or fact may of course be unknown, or poorly understood, or wrongly apprehended, at a particular time. But in science, one man’s truth must be the same as another’s.

Those of certain philosophical tendencies, such as postmodernism or cultural relativism, like to pooh-pooh science. They dispute its objectivity and neutrality. They point out that scientists have their own agendas, and that the scientific establishment is politicized. But I think they bark up the wrong tree. As criticisms of how science is actually conducted by some who call themselves scientists, their points may have merit. But they do not tarnish one whit the idea of science itself.

The scientific method

Properly done, science is conducted according to a procedure known as the scientific method. The details may vary a little from one discipline to another; but the basic scheme is the same. Here’s a brief outline of the steps within the scientific method:

  1. Pose a question, to which you want to find an answer.
  2. Do background research on that question.
  3. Construct a hypothesis. This is a statement, giving a possible answer to your question. In some circumstances, you may want to take someone else’s hypothesis for re-testing.
  4. Develop testable predictions of your hypothesis. For example: “If my hypothesis is true, then when X happens, Y will happen more often than it does when X doesn’t happen.”
  5. For each prediction, formulate an appropriate null hypothesis, against which you will test your prediction. For example: “X doesn’t influence whether or not Y happens.”
  6. Test the predictions against their null hypotheses by experiment or observation. If you need to use someone else’s data as part of this, you must first check the validity of their data.
  7. Collect your results, and check they make sense. If not, troubleshoot.
  8. Analyze your results and draw conclusions. This may require the use of statistical techniques.
  9. Repeat for each of the predictions of your hypothesis.
  10. If the results wholly or partially negate your hypothesis, modify your hypothesis and repeat. In extreme cases, you may need to modify the original question, too.
  11. If the results back up your hypothesis, that strengthens your hypothesis.
  12. If negative results falsify your hypothesis, that weakens or destroys the hypothesis.
I see the construction of the null hypothesis, which is to be upheld when a prediction fails, as one of the most important steps in this procedure. I think of the null hypothesis in science as somewhat akin to the presumption of innocence in criminal law!
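
As an illustration of steps 4 to 6, here’s a minimal sketch in Python of testing a single prediction against its null hypothesis. The scenario and all the numbers are hypothetical: the hypothesis predicts that event Y happens more often when X is present, and the null hypothesis says X makes no difference.

```python
import math

# Hypothetical data: under the null hypothesis, Y occurs at its baseline
# rate of 50% whether or not X is present. We observed Y in 62 of 100
# trials where X was present.
baseline_rate = 0.5
trials, successes = 100, 62

# Exact one-sided binomial p-value: the probability, under the null, of
# a result at least as extreme as the one observed.
p_value = sum(
    math.comb(trials, k) * baseline_rate**k * (1 - baseline_rate)**(trials - k)
    for k in range(successes, trials + 1)
)

print(f"p-value = {p_value:.4f}")  # about 0.01
if p_value < 0.05:
    print("Null hypothesis rejected at the 5% level; the prediction survives.")
else:
    print("Null hypothesis stands; the prediction is not supported.")
```

Note that failing to reject the null doesn’t prove the hypothesis false, just as rejecting it doesn’t prove the hypothesis true; it merely strengthens or weakens it, as steps 11 and 12 say.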

Rules for the good conduct of science

It’s very easy to get science wrong. In fact, it’s even easier than getting mathematics wrong. And, having been trained as a mathematician, I know well how easy that is! In science, there’s always a possibility of error in your measurements, or in your statistics, or in your deductions. Or of insufficiently rigorous testing or sampling. Or of bias, whether conscious or unconscious.

To minimize the chances of getting science wrong, and to enable others to build on its results, there are a number of rules of conduct which scientists are expected to follow. Here is a list of some of them:

  1. Any hypothesis that is put forward must be falsifiable. If there’s no way to disprove a hypothesis, it isn’t science.
  2. Data must not be doctored. Any necessary adjustments to raw data, and the reasoning behind them, must be fully and clearly documented.
  3. Data must not be cherry picked to achieve a result. Data that is valid, but goes against a desired result, must not be dropped.
  4. Graphs or similar devices must not be used to obfuscate or to mislead.
  5. Enough information must be supplied to enable others to replicate the work if they wish.
  6. Scientists must be willing to share their data. And code, too, when code is involved.
  7. Supplementary information, such as raw data, must be fully and promptly archived.
  8. The error bars on results must be identified and quantified; for example, by stating the range within which there’s a 95% chance that the value being measured lies. (A sketch of such a calculation appears below.)
  9. Uncertainties are important, too. They must be clearly identified and, if possible, estimated.
  10. Above all, the conduct of science must be honest and unbiased. In a nutshell: If it isn’t honest, it isn’t science. It’s nonscience (rhymes with conscience).
A failure to obey one or more of these rules of conduct doesn’t necessarily mean that the science is bad. However, it does raise a red flag; particularly in cases where there may be a suspicion of bias or dishonesty. And if a sufficiently skilled person, with sufficient time to spare, doesn’t have enough information to check the validity of a scientific paper, or to attempt to replicate the work it describes, then there’s a very good chance the science in it is bad.
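
On rule 8, here’s a minimal sketch in Python of one common way to state an error bar: a 95% confidence interval for the mean of repeated measurements. The readings are hypothetical, and the t critical value for 7 degrees of freedom comes from standard tables.

```python
import statistics

# Hypothetical repeated measurements of the same quantity (units arbitrary).
readings = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4]

n = len(readings)
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / n**0.5  # standard error of the mean

# Two-sided 95% interval, using the t critical value for n - 1 = 7
# degrees of freedom (2.365, from standard tables).
t_crit = 2.365
half_width = t_crit * sem
print(f"measured value: {mean:.2f} +/- {half_width:.2f} (95% confidence)")
```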

Peer review and spear review

In the world of scientific journals, there is a quality control mechanism known as peer review. The idea is that a number of independent experts scrutinize a proposed paper, check its correctness and its utility, and suggest changes where necessary. But peer review doesn’t always catch issues with papers before they are published. This is a particular problem when the reviewers work or have worked closely with the authors, and share their conceptual framework. Indeed, where a group of experts on a subject have formed a clique, it’s easy for groupthink to develop. In such a situation, only those ideas with which clique members are comfortable are likely to pass muster and get published.

In recent times, there has been a great increase in informal papers on scientific blogs. The usual procedure in these circumstances is one I call “spear review,” in which commenters provide comments in response to a blog article. It does have some drawbacks. One is that not all the commenters actually have much, if any, expertise in the subject they are commenting on. Another is that some commenters are biased or trolling. A third is that the process can often resemble a pack of dogs chasing a cat. But when it’s done by people who are trying to be objective and helpful, it’s very useful. Particularly in determining whether a scientific idea is good enough to be worth trying to publish through more formal channels.

Paradigms and consensus

At any time and in any area of science, there is almost always a particular paradigm. This is a framework of concepts, thoughts and procedures, within which work in that area is generally confined. Past examples are Ptolemy’s earth-centred model of the universe, the phlogiston theory of combustion, and the “luminiferous aether” which was said to carry light waves.

Within such a paradigm, there is usually some kind of consensus. Hypotheses which have been repeatedly confirmed can aggregate into theories; and such theories can be agreed on by all or most practitioners in the area. However, in an area of science which is advancing, there will always be parts that are disputed. There will be different hypotheses, and different interpretations of the results of experiments or observations. Moreover, there will be parts on the “cutting edge,” which are still under investigation. And in any area of science, there is always a possibility of a previously unknown factor being discovered.

Thus, however mature the science in an area may be, it can never truly be said to be “settled.” There is always a possibility of altering or overturning the consensus in an area of science, or even of overturning the paradigm and creating a new one. For example, Galileo’s telescope observations overturned Ptolemy’s geocentric model. Michelson and Morley’s measurements on the speed of light overturned the idea of the aether. And Einstein’s theories of relativity provided a more accurate replacement for Newton’s laws on the dynamics of bodies in motion.

The example of Einstein, who was a patent clerk when he published his ideas on special relativity and the equivalence of matter and energy, shows up another important feature of science. In science, it doesn’t matter who you are. You don’t need to be a credentialled “scientist” to contribute to science. All that matters is whether or not your science is right.

And the converse applies, too. In science, even the acknowledged experts aren’t always right. As Steven Weinberg put it: “An expert is a person who avoids the small errors while sweeping on to the grand fallacy.” In fact, it’s worse than that. Experts in a paradigm often tend to form a clique to defend that paradigm, and may ignore or even try to suppress ideas contrary to it; most of all when their livelihoods depend on the paradigm being maintained.

Science and decision making

Science is useful in making many decisions. Engineers, for example, use it all the time. They depend on the science, which they use to make their design decisions, being right. If it isn’t, their machines won’t work; with potentially disastrous consequences.

A relatively recent phenomenon is to attempt to apply science to political decisions. If difficult decisions must be made, there is a lot to be said for using science in making and justifying them where appropriate. As climate scientist Hans von Storch has put it: “Science is supposed to provide coldly, impassionately, knowledge about the options of policymaking.” But he added the caveat: “There should be a separation between scientific analysis and political decision making.” In other words, to be useful in any political context, science must be completely non-politicized.

Since in science one man’s truth is the same as another’s, it’s hard to argue against a decision that has been honestly made on the basis of accurate, unbiased science. If, of course, the science really is accurate and unbiased; and the decision has been made honestly. Those are big, big Ifs.

Science, properly and honestly done, can supply data to the “business case” for a decision. In particular, it can help to estimate the likely costs and benefits of a range of actions being considered. But this can only work when the science is completely honest, accurate and unbiased, and the error bars and other uncertainties are fully accounted for. For when it comes to adjudicating costs versus benefits, as every mathematician knows, subtracting one uncertain number from another of similar size can multiply the relative uncertainty many times over. Even the sign of the result may be unclear; and in that event, the science is useless as a guide to the decision.
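
Here’s a minimal sketch in Python of that last point, with hypothetical numbers, assuming independent errors that combine in quadrature. Each input is known to within 40%, yet the uncertainty on the difference is several times the difference itself.

```python
# Hypothetical, independent estimates with 1-sigma uncertainties.
benefit, benefit_err = 100.0, 40.0  # benefits: 100 +/- 40
cost, cost_err = 90.0, 40.0         # costs:     90 +/- 40

net = benefit - cost
# For independent errors, uncertainties combine in quadrature.
net_err = (benefit_err**2 + cost_err**2) ** 0.5

print(f"net gain = {net:.0f} +/- {net_err:.0f}")
# Prints: net gain = 10 +/- 57. A 40% uncertainty on each input has
# become an uncertainty nearly six times the answer itself; even its
# sign is unknown.
```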

Politics and science

There are several cases from the past in which those in political power have rejected good science, or have been influenced, or even driven, by bad science. Galileo’s persecution at the hands of the Catholic church is one case in point.

Another example is provided by Lysenkoism in Soviet Russia. The paradigm that the methods of Comrade Lysenko radically improved plant yields became so politically strong, that those who dared to question it were fired from their jobs, imprisoned or even executed.

And even in the West, the shameful misuse of science is not unknown; as shown by the Eugenics movement. This movement began in the early 20th century, when genetics as a science was in its infancy. Eugenics became a respected academic discipline at many universities, particularly in the USA, even though the whole idea was (wrongly) based on genetic determinism, if not also on racism.

The eugenics agenda re-defined moral worth in terms of genetic fitness. And it allowed doctors to decide who was, and who was not, fit to reproduce. Moreover, this agenda was actively supported by the mainstream scientific establishment. And it numbered among its supporters, in the UK alone, prime ministers Neville Chamberlain and Winston Churchill, economist John Maynard Keynes, and architect of the welfare state William Beveridge. The results? Tens of thousands of people forcibly sterilized in the USA, and thousands in Canada too. Not to mention the hundreds of thousands who suffered when the Nazis got their hands on the idea.

To sum up

Science is a method of discovering truths, using a procedure called the scientific method.

There are a number of rules for the good conduct of science. These aim to enable others to check the validity of, and to build on, the work of scientists. Failure to adhere to these rules may well be a sign of bad science. And the conduct of science must always be honest and unbiased. If it isn’t honest, it isn’t science; it’s nonscience.

Peer review aims to improve the quality of science. But it doesn’t always work, particularly when a clique has formed.

Most of the time, each area of science operates within its own current framework or paradigm, and there is a level of consensus among scientists in the area. But paradigms can be overturned. And importantly, in science, it doesn’t matter who you are. All that matters is whether or not you’re right.

Science can be helpful in making decisions, even political ones. But any science to be used in such a context must be completely honest, accurate, unbiased and non-politicized. And the record of the politically powerful in matters of science is, historically, not a good one.

Monday 1 January 2018

On Political Ideologies

A couple of weeks ago, I looked at political societies and ways of organizing them. Today, I’ll take another slice from the same piece of wood, but along a different grain. I’ll look at the ideologies which have guided, and continue to guide, the tone and flavour of political societies.

The nation state

To re-cap what I said about the nation state in my earlier essay: the system was devised in the 16th century by a monarchist Frenchman called Jean Bodin. In this system, a “sovereign” or a ruling élite has privileges over the “subjects” or people. Among much else, it can make taxes, it can make wars, and it can make laws to bind the people. Further, it isn’t itself bound by the laws it makes. And it bears no responsibility for the consequences of what it does; also known as “the king can do no wrong.”

Today’s establishment, of course, will tell you that it isn’t like that any more. But it sure as hell is! Political governments still have the power to oppress, steal from and murder people, if they want to. And most of them will do just that, as long as they think they can get away with it.

The Enlightenment

The 17th century was the time of the “divine right of kings” par excellence. Buttressed by Bodin’s ideas, the norm back then was that a king or prince, along with his élites, ruled over a state essentially as he wished. Not surprisingly, people who weren’t part of the establishment weren’t happy with this at all. The results? War, revolution, and in time – Enlightenment.

The Enlightenment of the late 17th and 18th centuries affected primarily the European Christian cultures, and those derived from them; though it spread to the Jews within a few decades, and its knock-on effects reached places like Japan and much of the Islamic world during the 19th century. However, its values set the tone for most political societies for more than two centuries.

What the Enlightenment did was free human minds from shackles, both religious and political. Here’s a brief list of some of its values. The use of human reason, and the pursuit of science. Greater tolerance in religion. Freedom of thought and action. Natural rights, natural equality of all human beings, and human dignity. The idea that society exists for the individual, not the individual for society. The idea of a social contract, to enable people to live together in a civil society and to protect their rights. Government for the benefit of, and with the consent of, the governed. The rule of law. A desire for progress, and a rational optimism for the future.

Liberalism and conservatism

In Enlightenment times, there were just two political ideologies. In England these were represented, in broad terms, by two factions: Whigs and Tories. Not all Whigs were always liberal; and not all Tories were always conservative. But these two factions tended to support the two opposing ideologies, of liberalism and conservatism.

Liberals – or what we might now call classical liberals – were the progressives of their times. They promoted the new Enlightenment ideas, such as reason, tolerance and natural rights. They wanted maximum freedom for every individual, consistent with a civilized society. And they saw societies, including political ones, as being for the benefit of each individual in them.

Conservatives, on the other hand, supported the state and its powerful élites – such as kings, nobles and church leaders. They saw these élites as possessing both rightful authority, and immunity from being held to account. And they resisted change. They sought to preserve the existing order both religious and political, and their own privileged positions in it.

Socialism and anarchism

In the early 19th century, a new ideology appeared: socialism. One of the difficulties in discussing socialism is that there’s no clear, widely accepted definition of it. For some, it means collective ownership and control over the means of producing, distributing and exchanging goods. For others, it means a social organization with an egalitarian distribution of wealth, and no such thing as private property. My 1928 dictionary calls it the “principle that individual liberty should be completely subordinated to the interests of the community with the deductions that can be drawn from it, e.g. the State ownership of land and capital.”

To be fair to them, the earliest socialists weren’t all bad guys. Often, they sought to create model communities, bound together by shared ideology. Robert Owen’s community at New Harmony, Indiana was an example. Of course, most – if not all – of these communities failed. And so, socialism started on its long slide down towards a militant collectivism, in which Society and the socialist agenda are paramount, and the individual is of no significance. Far from its intended purpose as a new and better form of liberalism, socialism became illiberal.

Another ideology, which began to grow at much the same time as socialism, was anarchism. The distinguishing feature of anarchism is its opposition to the state and to political government. But, just as socialism had done, anarchism began to degenerate. By the late 19th century, the anarchists had become little more than terrorist gangs.

Marxism and communism

Next came Marxism, and the communism it spawned. Marxism, so its adherents claimed, was scientific socialism; an attempt to apply the scientific method to social and political ideas. But Marxists saw capitalism – that is, ownership of property and of the means of production by individuals and by voluntarily formed groups – as leading, not to prosperity, but to inequality and instability. So, they fanned class war between working people and the classes they called “capitalists” and “bourgeoisie.”

The Marxists predicted that, once their system was in place, the political state would wither away. And yet, they set out to capture the state, and to use it to achieve their objectives! No wonder, then, that the result – communism – turned out so evil. Its results? Oppressions, famines, massacres and mass deportations; leading to nearly a hundred million unnecessary deaths. As to the economy, as one wag put it: “The problem of queues will be solved when we reach full Communism. How come? There will be nothing left to queue up for.”

Fascism

Then came fascism. In some ways, it’s hard to separate fascism from communism. Both shared an attachment to dictatorial power, extinguishing individual freedom, forcible suppression of opposition, social indoctrination and a lack of ethical restraints on the state. But in some respects, fascists went further. They were racists. They sought to make unpopular groups of people into scapegoats, and to purge those they considered inferior, such as Jews. But above all, fascists glorified violence and war. With predictable results.

Modern ideologies

In the course of the 20th century, other evil ideologies have also been established in various parts of the world. Notable among them have been racism, as in apartheid South Africa and Idi Amin’s Uganda; theocracy, as in Iran; and dictatorship, as in North Korea. In the West, however, we have been subjected to the unholy trinity of welfarism, warfarism and environmentalism.

The ideology of welfarism, also known as nanny-statism, has led the ruling class to try to bribe people into believing that the state is a benefit to them. They have set up elaborate, redistributive schemes for welfare, health, education and the like; and commandeered resources to implement those schemes. But these resources don’t go directly from the payers to the recipients. Nor do the payers receive any thanks at all in return. Instead, everything is filtered through the bureaucracy that maladministers the system, and the politically connected cronies that feed off it.

Welfarism has had two main effects. First, it has dragged down into dependence on the state many who, if allowed the chance, would have been able to prosper through their own efforts. Second, it has taken away from productive people the resources they should have been able to use to safeguard their own futures. Welfarism, to use a metaphor, is like breaking people’s legs then giving them crutches – and expecting them to thank you for it.

A recent development of welfarism is what I call social engineering fever. Those affected by this ailment seem to think that they have a right to interfere in others’ lives, for no better purpose than their own social goals. These zealots like nothing better than to seek to change other people’s behaviour – for example, in their diet or means of transport. And they are adept at using state dominated education and politically correct media to promote their nefarious schemes.

Then there are warfarism and its comrade, the security state. Warfarism is the ideology of the school bully. (It’s also very profitable, for those on the right side of it). Warfarists instigate “war on drugs,” “war on terror” and the like. They seek any excuse to use police or military force. Often, while decrying terrorism, they encourage – and even carry out – terrorist acts. At the same time, they pry into people’s lives, and monitor and record our actions in ever increasing detail.

Environmentalism, the third of the unholy trinity, is a large subject. So much so, that it demands a whole essay in itself. Here, I will only point out that, like welfarism and warfarism, environmentalism provides huge opportunities for cronies of the state to make themselves rich.

Democracy

Democracy isn’t, in the technical sense, an ideology. However, many in politics act as if it were. They present themselves as “democrats” of one kind or another; perhaps “social” or “liberal.” So, I’ll add here to what I said about democracy in my earlier essay. It’s my view that democracy, once implemented, will inevitably decay. I see it as going through four phases, each worse than the previous one.

Democracy-1 is the honeymoon period. People believe that they have a real say in what the government does. But it isn’t long before there emerge political factions, looking to take advantage of the situation; as James Madison warned way back in 1787.

In the next stage, democracy-2, two factions (or, rarely, three or more) attract cores of support, and promote policies designed to favour their own supporters. People start to divide along party lines. Those who don’t like any of the main parties will tend to vote for whichever seems less evil at the time. So, power tends to swing from one side to the other and back again. The social fabric becomes more and more stretched, and the tone of politics nastier and nastier.

In democracy-3, the main political parties and their respective cronies align with each other, and against the interests of the people. Here, different factions may spout different rhetoric; and their policies may, perhaps, be a little different around the edges. But their ideologies are essentially the same. Under democracy-3, policies are not made in the interests of the people, but to benefit the political class and their hangers-on, and to satisfy the agendas of special interest groups. And elections become largely irrelevant; for each time, the new king is much the same as the old one.

Democracy-4 is a terminal social illness, already visible in some countries, like Greece. The political state reaches a critical mass. Those dependent on the state, either for work or for benefits, become an absolute majority. Thus under democracy-4, a single interest bloc can forever outvote, and so oppress, everyone else. There’s no way out of this, short of exit or revolution.

The decay of politics

Except for Enlightenment liberalism, all the ideologies I’ve listed above are anti-Enlightenment, anti-individual and anti-human. In fact, it’s worse than that. Corruption and decay seem to be built into all political ideologies. I already mentioned the negative changes that took place in both socialism and anarchism. And the word “liberal,” particularly in the USA, has been corrupted in its meaning; so that I find many of today’s self-proclaimed “liberals” no more than vaguely socialist illiberals.

Some conservatives, on the other hand, have moved in a better direction. No longer are they merely supporters of the status quo. Some of them, indeed, have come to uphold the values of the Enlightenment! But there are also many far less benign conservatives, who want to forcibly return us all to a mythical past, when Gahd was in his heaven and all was right with the world. And many of them are warfarists, too.

Now, a radical question. Why should anyone have to suffer under someone else’s ideology? Why, for example, should conservatives have to suffer under socialism, or vice versa? Come to that, why should those of us, who hate politics of all stripes, have to suffer under any ideology at all? Why don’t we simply de-politicize life? Why don’t we set up a framework that maintains peace and supplies objective, non-politicized justice, and in which groups of like-minded people can get together and follow their own ideologies as they choose?

I think that all true liberals and the more benign conservatives – at least – could quite easily be accommodated in such a scheme. And those of us who don’t want politics at all can simply be ourselves, and make friends with whomever we damn well wish. Even socialists, racists and theocrats could have their own communes, as long as they behave civilly when outside them. Fascists and warfarists would, of course, have to be banned.

To sum up

We’re still living under a political system devised in the 16th century. In this system, anyone that can acquire enough political power can make taxes, wars and bad laws as they please. And they can force whatever political ideology they want on to everyone around them. Most of the ideologies that are extant today are evil. And democracy, far from fixing the problem, actually tends to make things worse.

This really isn’t good enough. That a species, which has developed nuclear weapons, is still using a political system from the age of the musket, is crazy. And scary. It’s got to change.