Tuesday, 3 March 2026

The Case of the Missing Cost-Benefit Analysis

Sherlock Holmes, I’m told, solved the case of “the missing three-quarter.” Today, I’m going to try to emulate him. What is missing, this time, is not a rugby player; it is a cost-benefit analysis for “net zero.” And a lot more than three-quarters of it is missing.

This essay is a re-work and update of a detailed offering from 2023, here: [[1]].

Externalities and social cost

First, I must introduce an economists’ word: “externalities.” An externality occurs “when producing or consuming a good causes an impact on third parties not directly related to the transaction.” The impact might be negative (a cost), or more rarely positive (a benefit).

It is unreasonable that innocent parties should suffer disbenefits from side-effects of others’ activities. At the least, once the externalities are proven and quantified, those who benefit from the activities ought to pay compensation to those negatively affected by them. If these effects are sufficiently large compared to the benefits from the activities to those who indulge in them, there is a case to be considered for restricting or even banning them.

The effects are measured in terms of “social cost.” The social cost of an externality is defined as the aggregate cost, to all those affected, of that externality.

Cost-benefit analysis

Government, as I’ve said many times before, is supposed to be a nett benefit to the governed. If not, we should get rid of it. So, if government identifies a problem that it thinks may need fixing, then before taking any action, it ought to check that the gains to the people from fixing the problem will outweigh the costs the fix imposes on them.

Fortunately, we have a technique we can use to do this. It’s called cost-benefit analysis. And in principle, it’s simple.

Social cost of the externality

You start out by understanding the problem. The first step is to calculate the social cost of the externality, now and in the future. This gives you an idea of what the costs resulting from the externality would be, if you did nothing at all to control it in any way.

Costs of possible solutions

The second step is to look at possible solutions. Approaches proposed generally divide into mitigation (taking pre-emptive action to reduce the costs of the externality) or adaptation (only taking action to resolve problems as and when they are apparent). It is also possible to have solutions which are combinations of the two.

Each proposed solution will have its own profile of reductions over time in costs resulting from the externality. It will also have its own profile of costs, which must be incurred in order to mitigate, or to adapt to the effects of, the externality.

Costs versus benefits

The third step is to analyze costs versus benefits. In essence, you compare the costs of each solution – including doing nothing, which has zero direct cost – with the benefits obtained through the reductions in costs from the externality. You can then decide which option is best.
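To make this concrete, here is a minimal sketch in Python of the comparison involved. Every number in it is invented for illustration; a real analysis would have decades of carefully estimated cost and damage profiles behind each option.

```python
# A minimal cost-benefit sketch with made-up numbers. Each option has a
# yearly solution-cost profile and a yearly residual-damage profile; we
# discount both back to today and pick the option with the lowest total.

DISCOUNT_RATE = 0.03  # assumed annual discount rate
YEARS = 50

def present_value(cash_flows, rate=DISCOUNT_RATE):
    """Discount a list of yearly amounts back to year zero."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical options: (yearly solution costs, yearly residual damages).
options = {
    "do nothing": ([0.0] * YEARS, [10.0] * YEARS),
    "mitigation": ([6.0] * YEARS, [2.0] * YEARS),
    "adaptation": ([0.0] * 20 + [3.0] * 30, [10.0] * 20 + [4.0] * 30),
}

for name, (costs, damages) in options.items():
    total = present_value(costs) + present_value(damages)
    print(f"{name:10s} total discounted cost: {total:7.1f}")

# The best option is simply the one with the smallest discounted total of
# solution costs plus residual externality damages.
```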

Complexities

Of course, in practice, to do such a cost-benefit analysis is very complicated. There will be uncertainties in the costs for the case where no political action at all is taken over the matter. There will be a range of possible combinations of mitigation and adaptation, all of which must be taken into account. And there will be uncertainties in all their costs.

Uncertainty

Further, as any mathematician or businessman knows, if you subtract one uncertain number from another uncertain number, particularly if the two are close together, the uncertainty can easily become so large that no objective cost-benefit decision can be made.
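A minimal sketch, with figures I have made up, shows the problem. Treating the two estimates as independent, their errors add in quadrature when you take the difference:

```python
import math

# Assumed estimates: benefits and costs, each with a standard error.
benefits, sigma_b = 110.0, 40.0
costs, sigma_c = 100.0, 30.0

net = benefits - costs
sigma_net = math.sqrt(sigma_b ** 2 + sigma_c ** 2)  # errors add in quadrature

print(f"net benefit: {net:.0f} +/- {sigma_net:.0f}")
# -> net benefit: 10 +/- 50
# The uncertainty dwarfs the estimate itself: the true net benefit could
# easily be strongly negative, so no objective decision can be made.
```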

In this case, the true precautionary principle, “Look before you leap,” ought to apply. You should not attempt any mitigation, but should simply let the externality run its course, while being prepared to fix any problems as and when they arise. Such an uncertainty, in effect, disqualifies mitigation approaches as solutions to the externality, and means that adaptation is the only sane approach.

After all, people benefit from the activity that causes the externality. And those benefits must not be taken away from them if the cost-benefit case against the activity is unclear. As in criminal law, people should be assumed innocent, and not punished, until proven guilty.

The nub of the problem as I see it

In the claimed problem to which “net zero” has been put forward as a solution, the externality is seen in terms of effects on the climate of emissions of carbon dioxide (CO2) from human activities such as energy generation, energy use, and transport. It is measured by the “social cost of carbon” (SCC), a misnomer since it is really the social cost of CO2 emissions. In the words of Nobel Prize-winning economist William Nordhaus, “This term represents the economic cost caused by an additional ton of carbon dioxide emissions … or its equivalent.”

Now, I do not propose to look at the second and third steps in cost-benefit analysis, which I described above. It is the first step, in this case the calculation of the costs resulting from given amounts of CO2 emissions, independently of any climate policies, on which I focus. For in a sane world, this calculation should have been done before any “solutions” were even considered. And any “solution” more costly than what, according to the SCC, would have been saved by eliminating the emissions, should have been rejected.

But my contention is that the UK government has not published any objective calculations of these costs for the UK. Indeed, it has gone out of its way to prevent any such calculations being made! And thus, no proper cost-benefit analysis for “net zero” has ever been done.

Integrated Assessment Models

The tools used in estimating the “social cost of carbon” are called Integrated Assessment Models (IAMs). IAMs, in effect, take the outputs of a climate model, then estimate the economic consequences, and from those they calculate values for the SCC over time.
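At bottom, the SCC figure that comes out of the end of that chain is just a discounted sum of marginal damages. Here is a minimal sketch, with an invented damage stream and an assumed flat discount rate (real IAMs derive theirs from a Ramsey-style rule):

```python
DISCOUNT_RATE = 0.03  # assumed; real IAMs derive this from deeper parameters

def social_cost_of_carbon(marginal_damages, rate=DISCOUNT_RATE):
    """Present value of the damage stream caused by one extra tonne of CO2."""
    return sum(d / (1.0 + rate) ** t for t, d in enumerate(marginal_damages))

# Invented damage stream: $1 per year attributable to the extra tonne,
# growing at 1.5% a year, over a 100-year horizon.
damages = [1.0 * (1.015 ** t) for t in range(100)]
print(f"SCC ~ ${social_cost_of_carbon(damages):.0f} per tonne of CO2")
```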

An overview of IAMs is here: [[2]]. It separates them into two classes: cost-benefit IAMs, which “fully integrate a stylised socio-economic model with a reduced-form climate model to simultaneously account for the costs of mitigation and the damages of global warming”; and process-based IAMs, which focus on (harrumph) “the analysis of transformation processes depending on a broad set of activities that induce emissions as side effects.”

I shall consider only the first class of IAMs, as only they can do cost-benefit analysis. They were also the only IAMs available at the time these calculations ought to have been done.

DICE, FUND and PAGE

Three cost-benefit IAMs were available in 2006, when Nicholas Stern conducted his review. These are DICE, FUND and PAGE. As far as I know, they are the only cost-benefit IAMs still used today.

Historically, DICE was the first of the three. Its genesis goes back at least to the 1980s, and its developer was William Nordhaus. FUND comes from Richard Tol of the University of Sussex and his colleague David Anthoff. And PAGE is the model produced by Chris Hope of the Judge Business School in Cambridge.

In general, FUND seems to produce the most optimistic results, because it includes positive side-effects of CO2, such as on plant growth. PAGE is the most pessimistic, because it tries to make allowance for unknown “tipping points.” And DICE is in the middle.

Some issues with IAMs

In an earlier essay at [[3]], I noted the lack of testing of the predictions of climate models against real-world data. IAMs seem to have the same problem, in spades. Particularly given that they either include climate models, or work off their outputs.

Moreover, of the three, DICE and PAGE calculate damage using a “damage function” which is essentially arbitrary. FUND’s damage calculations are based on empirical figures from studies done in the 1990s; but even these may only be valid within a narrow temperature range.
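To show how little machinery sits behind the headline numbers, here is a sketch of a DICE-style quadratic damage function. The coefficient is, if I recall right, of the order used in the DICE-2016 vintage; the point is that it is exactly the kind of essentially arbitrary choice I am complaining about.

```python
def damage_fraction(temp_rise_c, a2=0.00236):
    """Fraction of global output lost at a given warming (deg C).
    The coefficient a2 is my recollection of the DICE-2016 value."""
    return a2 * temp_rise_c ** 2

for t in (1.0, 2.0, 3.0, 4.0):
    print(f"{t:.0f} deg C of warming -> {100 * damage_fraction(t):.2f}% of GDP")

# Double a2, and every SCC estimate downstream doubles with it. There is
# no way to test a coefficient like this except against real-world data.
```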

Some history

I shall now dive into the sorry history of the missing cost-benefit analysis.

The Stern Review

In 2006, economist Nicholas Stern and his team published the Stern Review. This was an (apparent) attempt to provide a cost-benefit analysis for policy action or inaction on reducing CO2 emissions. It did, at least, use the SCC approach. But of the IAMs Stern had available to him, he chose the one, PAGE, which gave by far the most pessimistic estimate of the social cost of CO2 emissions.

On top of this, Stern made other assumptions, such as a low discount rate, that resulted in a grossly exaggerated estimate of the costs of not taking any action. That Stern had to fiddle the figures in order to create anything like a cost-benefit case for action, ought to have been a big red flag for all involved. But it was ignored. With hindsight, it is obvious that PM Tony Blair didn’t want Stern to do an honest, objective cost-benefit analysis. What Blair wanted was something he could use to “justify” an alarmist call to action.
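The discount-rate fiddle deserves a worked example. The rates below are my recollection of the debate – Stern used a rate of around 1.4% a year, where Nordhaus argued for a market-based rate nearer 4.5% – but the arithmetic speaks for itself:

```python
def present_value(amount, rate, years):
    """Value today of a sum of damage incurred some years in the future."""
    return amount / (1.0 + rate) ** years

for label, rate in (("Stern-style", 0.014), ("Nordhaus-style", 0.045)):
    pv = present_value(100.0, rate, 100)
    print(f"{label:14s} rate {rate:.1%}: £100 of damage in 2126 = £{pv:.2f} today")

# The low rate makes distant damages look enormous today - and so
# "justifies" enormous spending now to avoid them.
```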

Besides, Stern was – and is – not exactly a neutral in the climate alarmism debate. Today, Stern is a professor at the Grantham Institute of the London School of Economics: [[4]]. Funded by Jeremy Grantham, billionaire crusader against climate change. Stern’s latest book is entitled: “The growth story of the 21st century: the economics and opportunity of climate action.” “Opportunity,” for Stern, seems to mean denial of opportunity to ordinary people.

The Climate Change Act 2008

Next, to the 2008 UK Climate Change Bill. Its promoters did make a token attempt at a cost-benefit analysis, with numbers based on the Stern Review. Not only were these numbers dubious for the reasons outlined above, but they had a huge range of uncertainty too.

I actually downloaded and read the 200 or so pages of supporting data. If I recall right, there was a factor of 7 uncertainty in the costs, and a factor of 12 uncertainty in the “benefits,” of taking action to reduce CO2 emissions in order to mitigate climate change. If we could believe the figures in the first place!

Such numbers are useless for making any kind of objective decisions. These estimates were not fit for purpose. Yet, the politicians went ahead regardless.
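A quick sketch, using the factors I recall, shows just how useless such numbers are. Normalize the low-end estimates to 1 unit each:

```python
# Interval arithmetic on the recalled uncertainty factors.
cost_lo, cost_hi = 1.0, 7.0   # factor-of-7 uncertainty in costs
ben_lo, ben_hi = 1.0, 12.0    # factor-of-12 uncertainty in "benefits"

best_case = ben_hi - cost_lo   # benefits high, costs low
worst_case = ben_lo - cost_hi  # benefits low, costs high
print(f"net benefit lies somewhere between {worst_case:+.0f} and {best_case:+.0f}")
# -> between -6 and +11: even the sign of the answer is unknown, so no
#    objective decision can be drawn from figures like these.
```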

To their credit, five brave Tory MPs had the gumption to stand up for the people they were supposed to represent, and oppose the bill. The rest of them, more than 450, voted to subject us to Soviet-style “five-year carbon budgets,” as well as all manner of taxes, and caps on emissions of other greenhouse gases too. They acted shamefully and recklessly in making costly commitments, on behalf of the people they were supposed to represent, without rigorous justification. They acted as traitors to our civilization.

The social cost of carbon (SCC)

At the beginning, the UK did use the SCC approach for valuing the effects of CO2 emissions when considering policies. There is a “carbon valuations” page here [[5]], giving some history. The interesting parts are near the bottom, so in essence you need to read the page backwards.

The shadow price of carbon (SPC)

The carbon valuations page also says: “Following the publication of the ‘Stern review on the economics of climate change’, and work commissioned by the Inter-departmental Group on the Social Cost of Carbon, the methodological approach was changed to incorporate use of the shadow price of carbon.” Compared with the SCC, the SPC “takes more account of uncertainty, and is based on a stabilisation trajectory.”

Reviews by economists

This page [[6]] links to the documentation on the change to using the SPC. One key difference, as the page admits, is that the SPC can be set by government. Whereas the SCC is calculated independently of any political policies or aspirations.

The page also links to reviews on that change by several economists. One, Paul Ekins, said: “The issue is how to arrive at such a price in a way that is both defensible and supports the Government’s climate change policy.” So, the climate change policy required a pricing mechanism that wasn’t easily defensible? That’s a bit of a revelation. And an extremely worrying one. Are we, perhaps, looking at a case of fraud against UK taxpayers?

A further review was instituted in 2009. The original page about that review is still available, here: [[7]]. I will quote the details from the bottom of that page.

“The old approach based on estimates of the social cost of carbon should be replaced with a target-consistent approach, based on estimates of the abatement costs that will need to be incurred to meet specific emissions reduction targets. The change will have the effect of helping to ensure that the policies the government develops are consistent with the emissions reductions targets that the UK has adopted through carbon budgets, and also at an EU and UN level.”

This was supported by a review from economist Paul Johnson: “given a target, the consistent approach is to value carbon in such a way as to ensure we hit the target.”

So, let’s get this straight. Instead of doing a proper cost-benefit analysis, they changed the rules to make it look as if whatever “targets” government decided to pick were going to be met. This was not evidence-based policy. This was policy-based fiddling with the “evidence.” Moreover, it made it impossible to do a proper, objective cost-benefit analysis on “net zero.”
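For the avoidance of doubt, here is a minimal sketch of the difference between the two pricing approaches, using a marginal abatement cost (MAC) curve I have invented for illustration:

```python
def mac(abatement_fraction):
    """Invented MAC curve: £/tonne to abate at a given depth of cuts."""
    return 20.0 + 300.0 * abatement_fraction ** 2

# SCC approach: the carbon price is the independently estimated damage
# per tonne; abate only while abatement is cheaper than the damage avoided.
scc = 50.0  # assumed social cost per tonne, £
a = 0.0
while mac(a + 0.01) <= scc:
    a += 0.01
print(f"SCC approach:    price £{scc:.0f}/t -> abate ~{a:.0%} of emissions")

# Target-consistent approach: pick the target first, then set the price at
# whatever the MAC curve says is needed to reach it.
target = 0.90  # a "net zero"-style depth of cuts
print(f"Target approach: abate {target:.0%} -> price £{mac(target):.0f}/t")
# The price is now whatever the target demands; damages never enter into it.
```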

A cost-benefit analysis at last?

Fast forward to 2019, when the ideas of “net zero” and “decarbonisation” raised their ugly heads to the general public. There was, obviously, still resistance to going ahead without any kind of cost-benefit justification. The Committee on Climate Change (CCC) was asked, at last, to do a cost-benefit analysis for “net zero!” This was the result: [[8]].

You can note that the chairman of the group that produced this report, Paul Ekins, was the economist who drove the decision to move away from the use of the social cost of carbon. And that one of the eight members of the CCC was an economist called Paul Johnson.

You can see in action the marginal abatement cost (“MAC”) approach which replaced use of the SCC, and you can marvel at how obscure and counter-intuitive it seems. You can also marvel at the lack of monetary numbers in the report for the costs of climate change!

You can also see that “Even though the macroeconomic change from ‘net zero’ is likely to be small, the required structural change to the UK economy would be very great.” And “It is clear that getting to ‘net zero’ will require transformative social and economic changes in practically every aspect of society.” And all this was being put forward behind closed doors, without ever once consulting us, the people whom government is supposed to be serving!

You may well conclude, as I did, that whatever this report was, it was not an unbiased, quantitative cost-benefit analysis. It was a recipe for tyranny and economic collapse.

The 2020 Green Book change

The “Green Book” is a set of procedures, meant to guide cost-benefit analyses carried out by the UK government. Here is an overview: [[9]]. The Green Book was first issued in 2003, and radically updated in 2013.

In 2020, there was an update to the Green Book, described here: [[10]]. It says: “In March 2020, the Government announced a review of the approach, to improve how the Green Book supports strategic priorities such as its ‘levelling up’ agenda and the transition to net zero greenhouse gas emissions.” Further: “The 2020 review of the Green Book concluded that it failed to support the Government’s objectives … because the process relied too heavily on cost-benefit analysis.” And there was “insufficient weight given to whether the proposed project addressed strategic policy priorities.”

This seems to imply that policies politicians deem to be “strategic,” including “net zero,” are to be exempt from cost-benefit analysis! No matter how damaging the effects of those policies will be on the people the government is supposed to be serving. Let that sink in.

I’ll also point out that, while it was Gordon Brown and Labour that abandoned the use of the social cost of carbon, this amazingly dishonest turn-about was the work of Boris Johnson and the Tories. And specifically of then-new chancellor Rishi Sunak. Both the main political parties have been caught fiddling the books on this one.

What is the UK’s recent “shadow price of carbon?”

A 2022 independent report, [[11]], revealed that the number the UK government then used to calculate the benefits of reducing CO2 emissions by a tonne (£255.40) was more than five times the sterling equivalent of the US government’s published SCC per tonne (£48.54).

That is, since abandoning the SCC approach, the UK government has been valuing CO2 emissions at more than 5 times an independently calculated social cost per tonne. I call foul on that.

Where we are today

The costs of the “net zero” project are now recognized, by more and more people, to be spiralling out of control. There are many people who understand the energy supply side of the ledger. One I can recommend, if you can tolerate numbers, is David Turver and his “Eigen Values” substack: [[12]]. Government is throwing plenty of misinformation at us there.

But fewer people have looked at the other side of the ledger – at whether or not it actually makes any sense to be doing any of this stuff at all. That is the province of people like me. And I can tell you, without any doubt, that if government had any concern at all for the people it is supposed to serve, the “net zero” caboodle would never have been begun.

The political blob have been defrauding us for decades. It’s time to fight back. And as campaign manager for my local branch of Reform UK, I’m going to be doing everything I can in the run up to the May 7th local elections.

To sum up

What happened with the Stern Review was bad enough. Economists took it apart, yet it was still used to “justify” the Climate Change Bill 2008.

The numbers submitted for the Climate Change Bill 2008 were far worse than merely bad. Yet almost all MPs waved it through, against the interests of the people they were supposed to “represent.”

It is even more troubling that in 2009, the UK government chose to make it, in effect, impossible to do proper cost-benefit analysis on anything involving CO2 emissions. Stern had failed to convince people that the problem was big enough for draconian action. So, they made sure no-one could try to show he was wrong.

Then in 2019, they issued a report about costs and benefits of “net zero,” which was not a cost-benefit analysis. And in 2020, they decided to exempt “strategic” projects such as “net zero” from any requirement to analyze costs and benefits at all.

UK governments of all parties have, in my view, committed very serious fraud over the “net zero” issue against the people they are supposed to serve. And they have behaved, over decades, with extreme bad faith towards us. We do, indeed, have a crisis on our hands. But it is not a “climate crisis,” or anything like it. This is a crisis of legitimacy of government.


Friday, 27 February 2026

The Corruption of Science


If asked to describe science in one sentence, I would call it a more or less formalized method for finding objective knowledge. Yet, what passes for “science” today – climate science, air pollution, species extinctions, epidemiology, medical statistics and more – seems to be geared up to produce, not knowledge, but propaganda that supports dubious political narratives.

This is, in part, a re-work of an article I published in 2018: [[i]]. But I will also look at how well several areas of “science” today measure up to the standards of science as it should be. And end with some – very good – news from across the pond.

What is science?

Oxford Languages defines science as “the systematic study of the structure and behaviour of the physical and natural world through observation, experimentation, and the testing of theories against the evidence obtained.” This, I think, is basically a good definition; although it does, perhaps, miss the possibility of using science to study human behaviours. Most important, though, is the final component of the methodology: testing ideas against evidence.

The scientific method

Science, as we know it, began in the late 16th and 17th centuries, between the Renaissance and the Enlightenment. The work of people like Galileo and Francis Bacon led to changes in how we examined the reality around us. Which, in turn, led to a great increase in the level of our knowledge about our surroundings and about ourselves.

Properly done, science is conducted according to a procedure known as the scientific method. The details may vary a little from one discipline to another; but the basic scheme is the same. Here’s a brief outline of the steps within the scientific method:

1) Pose a question, to which you want to find an answer.

2) Do background research on that question.

3) Construct a hypothesis. This is a statement, giving a possible answer to your question. In some circumstances, you may want to take someone else’s hypothesis for re-testing.

4) Develop testable predictions of your hypothesis. For example: “If my hypothesis is true, then when X happens, Y will happen more often than it does when X doesn’t happen.”

5) For each prediction, formulate an appropriate null hypothesis, against which you will test your prediction. For example: “Whether X happens doesn’t influence whether or not Y happens.”

6) Test the predictions against their null hypotheses by experiment or observation. If you need to use someone else’s data as part of this, you must first check the validity of their data.

7) If the test does not support the predictions of your hypothesis, you must consider this as a victory for the null hypothesis.

8) Collect your results, and check they make sense. If not, troubleshoot.

9) Analyze your results and draw conclusions. This may require the use of statistical techniques.

10) Repeat for each of the predictions of your hypothesis.

11) If the results wholly or partially negate your hypothesis, modify your hypothesis and repeat. In extreme cases, you may need to modify the original question, too.

12) If the results back up your hypothesis, that strengthens your hypothesis.

13) If negative results falsify your hypothesis, that weakens or destroys the hypothesis.

Key steps within the scientific method are: the construction of the right hypothesis, and of the null against which it is tested; the making of testable predictions of the hypothesis, and their testing; and the feedback loop, which – using the results found – strengthens, weakens or modifies your hypothesis.

I see the construction of the null hypothesis, to be upheld when a prediction fails, as one of the most important steps in this procedure. I think of the null hypothesis in science as like the presumption of innocence in criminal law! As a scientist, you should always be trying to catch yourself out – so you don’t make a mistake that might have serious consequences.
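Here is a minimal sketch of steps 4 to 7 in action, using invented trial counts and a standard two-proportion test. The null is presumed “innocent” unless the evidence against it is strong:

```python
import math

# Invented observations: Y in 30 of 100 trials with X, 18 of 100 without.
y_with_x, n_with_x = 30, 100
y_no_x, n_no_x = 18, 100

p1, p2 = y_with_x / n_with_x, y_no_x / n_no_x
pooled = (y_with_x + y_no_x) / (n_with_x + n_no_x)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_with_x + 1 / n_no_x))
z = (p1 - p2) / se

# One-sided p-value from the normal distribution (the prediction says
# "more often", so we test in one direction only).
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.2f}, one-sided p-value = {p_value:.3f}")

if p_value < 0.05:
    print("Evidence against the null: the prediction survives this test.")
else:
    print("The null hypothesis stands: the prediction is not supported.")
```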

Rules for the conduct of science

It’s very easy to get science wrong. There’s always a possibility of error in your measurements, statistics, or deductions. Or of insufficiently rigorous testing or sampling. Or of bias, whether conscious or unconscious.

To minimize the chances of getting science wrong, and to enable others to build on its results, there are a number of rules of conduct which scientists are expected to follow. Here is a list of some of them:

1) Any hypothesis that is put forward must be falsifiable. If there’s no way to disprove a hypothesis, it isn’t a scientific one.

2) Data must not be doctored. Any necessary adjustments to raw data, and the reasoning behind them, must be fully and clearly documented.

3) Data must not be cherry picked to achieve a result. Data that is valid, but goes against a desired result, must not be dropped.

4) Graphs or similar devices must not be used to obfuscate or to mislead.

5) Enough information must be supplied to enable others to replicate the work if they wish.

6) Scientists must be willing to share their data. And code, too, when code is involved.

7) Supplementary information, such as raw data, must be fully and promptly archived.

8) The error bars on results must be identified and quantified. (For example, by stating the range within which there’s a 95% chance that the value being measured lies; see the sketch after this list.)

9) Uncertainties are important, too. They must be clearly identified and, if possible, estimated.

10) Above all, the conduct of science must be honest and unbiased. In a nutshell: If it isn’t honest, it isn’t science. It’s nonscience (rhymes with conscience).
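As promised under rule 8, here is a minimal sketch of quoting a 95% interval, assuming roughly normal errors and using readings I have invented:

```python
import statistics

readings = [9.8, 10.1, 10.3, 9.9, 10.2, 10.0, 9.7, 10.4]  # invented data
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error

# 1.96 standard errors either side covers ~95% for a normal distribution.
# (Strictly, with only 8 readings, a Student-t multiplier of ~2.36 would
# be more honest - which is itself a point about reporting uncertainty.)
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"measured value: {mean:.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```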

A failure to obey one or more of these rules of conduct doesn’t necessarily mean the science is bad. However, it does raise a red flag; particularly if there may be a suspicion of bias or dishonesty. And if a sufficiently skilled person, with sufficient time to spare, doesn’t have enough information to check the validity of a scientific paper, or to attempt to replicate the work it describes, there’s a very good chance the science in it is bad.

To sum up

When done properly, science is, as I said earlier, a more or less formalized method for finding objective knowledge. But if science is to be done properly, it must be done with total honesty. Activities that look like science, yet do not follow the scientific method honestly, or do not give sufficient detail to let others seek to replicate the work, are not science. They are nonscience.

Evidence for corruption of science

Today, though, we frequently see “scientists” publishing results that seem to support a political narrative, rather than contributing – as science should – to the furthering of objective human knowledge. We see reports being collated by bodies that obviously have a political agenda, in an attempt to abet the making of policies that further that agenda. We see political bodies seeking to suppress information that might lead people to question official narratives. And we see scientists, who try to apply real science in areas affected by such fixing, being wrongly denied publication or funding, or even having to leave the scientific area altogether.

I will give you just five examples of these syndromes.

Climate “science”

I have already published a brief de-bunk of the “climate crisis” meme, here: [[ii]]. And I have documented the history of the climate agenda, and the green agenda as a whole, in a series of three articles: [[iii]], [[iv]], [[v]]. I told how the United Nations and its Intergovernmental Panel on Climate Change (IPCC) have co-opted many scientists into lending their weight to the scares about climate change. They have, in effect, misused the authority of science to mislead people into believing that there is more truth to the climate scares than there really is.

I did not mention there that the primary mechanism, by which climate “scientists” influence public perceptions, is through the outputs of their computer models, called AOGCMs (Atmosphere-Ocean General Circulation Models). Now, computer models can be useful tools, particularly for exploring “what-if” situations. But their outputs are not data. And they should never be treated as if they were data. Moreover, before any model can be used to explore new situations, it must be thoroughly tested, to find out how well its predictions match up with the reality when it eventuates.

Yet there is an astonishing lack of published material on the results of testing AOGCMs against measured data. They may, indeed, have been tested by “hindcasting,” using past climate data. But the crucial test, of model predictions against measurements made at the times the modellers expected their predictions to be fulfilled, seems to be rarely carried out. And very rarely reported.

Indeed, I have only been able to find one recent example of such a test: [[vi]]. Now, this is not a peer-reviewed paper, only a scientist’s blog post. But Dr Roy Spencer, the meteorologist author, is a world expert on satellite temperature measurements. His statement that “all 39 climate models exhibit larger warming trends than all three classes of observational data” does, therefore, carry weight. And his graph shows that more than half the models predict trends greater by 50% or more than those observed.
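The test itself is not hard to do. Here is a minimal sketch of the idea – fit a linear trend to each series, then compare – using invented stand-ins for a model run and for observations:

```python
import statistics  # linear_regression needs Python 3.10+

def trend_per_decade(anomalies_by_year):
    """Ordinary least-squares slope, converted to deg C per decade."""
    years = list(range(len(anomalies_by_year)))
    slope = statistics.linear_regression(years, anomalies_by_year).slope
    return slope * 10.0

# Invented 40-year temperature anomaly series (deg C).
model_run = [0.005 + 0.028 * y for y in range(40)]     # warms fast
observations = [0.010 + 0.017 * y for y in range(40)]  # warms more slowly

print(f"model trend:    {trend_per_decade(model_run):.2f} C/decade")
print(f"observed trend: {trend_per_decade(observations):.2f} C/decade")
# If the model's trend sits far above the observed one, its predictions
# have failed the test - and that result ought to be published.
```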

You would have expected that such tests of climate models would be routinely carried out, and published in prestigious scientific journals for all to see. But not so. It is hard not to suspect that this is an example of corruption of science in service of political agendas.

Air pollution toxicology

I have written on the history of air pollution toxicology in the UK, here: [[vii]]. I told of COMEAP’s (the Committee on Medical Effects of Air Pollution’s) 2009 report, which, after skating all around the real problem, came down on the side of numbers provided by the UN’s World Health Organization (WHO). Of their 2018 report on nitrogen dioxide (NO2), in which the views of three dissenting experts were ignored; they maintained (correctly, in my view) that there was insufficient evidence on which to base a figure for the toxicity of NO2. And of political interference by a deputy mayor of London with a 2018 study that concluded there was “no evidence of a reduction in the proportion of children with small lungs” over the period of the study.

Again, I cannot escape the feeling that in these instances, there was corruption of science in the service of politics. In this case, the agenda is to progressively force out of our cars those of us who drive older cars, and cannot afford either to upgrade them or to pay ULEZ fees or equivalent.

Species extinctions

The UN has its own counterpart to the IPCC in this area. This is IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services). IPBES issued a 2019 report claiming that impacts of human activities were threatening a million species with extinction. Yet I myself have several times asked environmentalists to name one species to whose extinction I have contributed, and to say what I did, and roughly when, towards that extinction. I’ve never received any answer.

But just recently, the Royal Society has published a new report: [[viii]]. Its abstract tells us that: “Extinction rates have increased over the last five centuries, but generally declined in the last 100 years. Recent extinctions were predominantly on islands, whereas the majority of non-island extinctions were in freshwater.” And their graph (Figure 2(b)) shows extinctions peaking at about 50 per decade in the 1870s, 1930s and 1970s/80s, but dropping rapidly off since then. It seems, then, that the hype about a million species being threatened with extinction, whatever it was based on, certainly wasn’t founded on science.

Epidemiology

The science of epidemiology will forever be scarred by the antics of “Professor Lockdown,” Neil Ferguson of Imperial College London. In October 2020, I wrote an essay about SAGE, the UK’s “Scientific Advisory Group for Emergencies,” which was heavily involved in lobbying for draconian COVID lockdowns. It is here: [[ix]]. I think it is worth repeating what I said there about Professor Ferguson’s past statements.

“The British response [the first lockdown], Ferguson said on March 25th, makes him ‘reasonably confident’ that total deaths in the United Kingdom will be held below 20,000.” October 15th, cumulative deaths: 43,293 and counting. On August 17th, he was “‘optimistic’ Europe won’t see very large numbers of new COVID-19 cases this year.” October 15th, daily new case count: 18,980. That’s 2.4 times the peak of 7,860 on April 10th. Then, on September 22nd, we had this headline in the Sun: “Professor Lockdown doubles down on 500k UK coronavirus deaths forecast [from March] – and claims it was ‘underestimate’.”

Far be it from me to be kind to climate “scientists,” but it is fair to say that the equations of epidemiology are orders of magnitude simpler than the equations of climate science.
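Indeed, the workhorse of the field, the SIR model, fits in a dozen lines. Here is a minimal sketch, with parameters invented for illustration:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """One Euler step of dS/dt = -bSI, dI/dt = bSI - gI, dR/dt = gI."""
    new_inf = beta * s * i * dt  # fresh infections this step
    new_rec = gamma * i * dt     # recoveries this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.999, 0.001, 0.0  # fractions of the population
for day in range(161):
    if day % 40 == 0:
        print(f"day {day:3d}: susceptible {s:.3f}, infected {i:.3f}, recovered {r:.3f}")
    s, i, r = sir_step(s, i, r)

# Simple as this is, its forecasts still have to be tested against what
# actually happens - the step which the projections so visibly failed.
```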

Medical statistics

And then there’s the infamous “excess deaths” saga, involving the Office for National Statistics (ONS). I wrote about this at [[x]]. Here are some quotes.

“They [the ONS] re-defined the way in which excess deaths are to be calculated. In a way that seems to have greatly reduced the resulting numbers, and thus the apparent size of the ongoing ‘excess deaths’ problem. While breaking the link between their figures and hard evidence from the real world.” Further: “The reactions which Andrew Bridgen received when he first brought up the subject in parliament suggest that someone, or some group, in a very high position does NOT want the full truth about excess deaths in the UK since 2020 brought out into the open.” And I ended with: “The dog has eaten my data. It looks as if the entire world may have stopped providing any excess mortality figures which are founded on real-world evidence! Cynical me does not think this is a co-incidence.”
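For reference, here is a minimal sketch of the older, evidence-based calculation: observed deaths minus a baseline averaged over the same week in the previous five years. All the counts here are invented:

```python
# Same calendar week, five prior years (invented counts).
prior_years = [10200, 10450, 10100, 10300, 10350]
observed_this_week = 11800

baseline = sum(prior_years) / len(prior_years)
excess = observed_this_week - baseline
print(f"baseline {baseline:.0f}, observed {observed_this_week}, excess {excess:+.0f}")

# The virtue of this method is that both inputs are hard real-world counts.
# Replace the five-year average with a model-based "expected deaths" figure,
# and the answer depends on the model, not on the evidence.
```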

I strongly suspect that the heavy hand of the WHO has been in here, too.

Why has science become corrupted?

So, why do not only many individual scientists, but also the entire process of science in many fields highly relevant to policy decisions affecting all of us, seem to have become corrupted?

The answer is not far to seek. It is an old proverb: “He who pays the piper calls the tune.” One of the side-effects of ever-increasing government control over education is ever-increasing control by the establishment of funding for universities and their academics. This leads to funds being directed where they will produce most benefit, not for the people whom government is supposed to serve, but for the agendas of the vested interests that are pulling the strings. It is not helped by billionaires who fund academic groups, including key alarmist figures, at places like Imperial College.

The scientific establishment in the UK needs, and richly deserves, a major shake-up. All the corrupted heads must roll.

Some good news from across the pond

For a change, there is some good news. It comes from the USA. In May 2025, Donald Trump signed an executive order titled “Restoring Gold Standard Science.” [[xi]]. Here are two quotes.

“My Administration is committed to restoring a gold standard for science to ensure that federally funded research is transparent, rigorous, and impactful, and that Federal decisions are informed by the most credible, reliable, and impartial scientific evidence available.  We must restore the American people’s faith in the scientific enterprise and institutions that create and apply scientific knowledge in service of the public good.  Reproducibility, rigor, and unbiased peer review must be maintained. This order … ensures that agencies practice data transparency, acknowledge relevant scientific uncertainties, are transparent about the assumptions and likelihood of scenarios used, approach scientific findings objectively, and communicate scientific data accurately.”

“For the purposes of this order, Gold Standard Science means science conducted in a manner that is:

i. reproducible;

ii. transparent;

iii. communicative of error and uncertainty;

iv. collaborative and interdisciplinary;

v. skeptical of its findings and assumptions;

vi. structured for falsifiability of hypotheses;

vii. subject to unbiased peer review;

viii. accepting of negative results as positive outcomes; and

ix. without conflicts of interest.”

Amen!

You can say bad things about some of what Donald Trump does. But in this instance, he is absolutely, squarely, right on the money. We need similar changes to happen here. I hope Nigel Farage is listening!



Sunday, 22 February 2026

Summary Response to “Informal Engagement” on Farncombe Local Streets Improvements

There is a Surrey County Council (SCC) project called Farncombe Local Streets Improvements. It is currently at Informal Engagement stage, which runs until March 15th.

It proposes imposing 20mph speed limits throughout the Farncombe area, even on main thoroughfares, as well as many instances of raised tables and junction narrowing, and several of carriageway narrowing. There is a project description and questionnaire at [[1]]. This is a summary of my response.

About myself

I have lived in Godalming for 40 years. My home is at the top of the hill, on Twycross Rd near Frith Hill Rd. I am a car driver and pedestrian. At the age of 72, a car is essential to my quality of life.

I am a regular customer of several businesses in Farncombe and Binscombe. And I like to walk frequently in Broadwater Park.

I have a degree in mathematics. And I have acquired, over decades, much expertise in environmental matters. I am known for my strong and principled opposition to net zero. I am strongly against all political policies that harm or restrict car drivers. I am also campaign manager for my local branch, Godalming and Ash, of Reform UK.

Political context

I am well aware of the “Local Transport Plan 4,” published in 2021 by Surrey County Council (SCC). The policies in that plan showed up its promoters as hostile to the needs and desires of everyone who drives a car in Surrey.

I am also aware that Tory-controlled SCC chose to join UK100, a strongly pro-net-zero group of councils. I am aware of the WHO’s “Vision Zero” road safety scheme, being pushed on us by the Liberal Democrat caucus on SCC. As to Labour, they have recently approved the roll-out of “15-minute cities” by local authorities across the UK. And Godalming Town Council, with ten Liberal Democrats, three Greens, three Labour and two Tories, has swallowed the deep green nonsense whole.

It will not surprise you, then, to learn that I am very strongly opposed to all schemes such as the Farncombe local streets “improvements.” If you live, work or shop in Binscombe, Farncombe, or Charterhouse south of Charterhouse School, and you agree with me, I urge you to submit your own response by March 15th.

The “early engagement” phase

I took part in the “early engagement” phase of this project in autumn 2024. I gained the impression that the whole exercise was biased towards highlighting perceived issues that could then be exploited to “justify” anti-car policies, and lowered speed limits in particular. I also noticed that the questionnaire allowed multiple submissions from each individual.

Area Wide Proposals

Here are my responses to the “area wide proposals” recently published.

20mph speed limit

In my view, the current 30mph limits in Farncombe are already perfectly adequate.

Here is the map of the proposals. Every road coloured red or gold is to have a 20mph speed limit slapped on it. These include roads vital to every journey I make, including those to Farncombe and to Godalming.

Moreover, two of these roads, Farncombe Hill and Frith Hill Rd, are extremely steep. And 20mph speed limits have been shown not to work well on steep hills.

Since speeds around Farncombe are already very slow, the effect of a 20mph limit on speeds in central Farncombe would be minimal. So, there would be no gain from 20mph speed limits for people in Farncombe itself, yet a big loss for people in neighbouring areas. This is not in any sense a benefit to the people. The entire 20mph proposal should be ditched.

Raised tables and speed bumps

In most cases, raised tables cause more trouble than any benefit they might bring. Speed bumps, too, can cause more trouble than merely repeated strain on cars’ suspensions. A friend of mine recently suffered almost £700 worth of damage to the underside of his car, going over a broken speed bump outside Loseley Fields school.

I would advocate removing all the obstructive hardware from our roads, and treating drivers as responsible adults.

Narrowing of junctions to “calm” traffic

Narrowing of junctions can make sense for pedestrian safety in certain places where the side road is unusually wide, or where the pedestrian route is heavily used. But in most places, it will have a negative effect. Junction narrowing looks particularly inappropriate on roads which are also bus routes.

Vehicle activated signs to encourage drivers to slow down

The most likely effect of such signs is to make drivers angry at being treated as if they were naughty children instead of responsible adults.

Improvements to accessibility through consistent dropped kerbs and tactile paving

Generally, this is a good idea. But it needs to be cost-justified in each case.

Pedestrian wayfinding

I don’t think this is of any benefit in a place the size of Farncombe.

Detailed issues

The full version of my response covers a large number of issues I found while looking over the detailed maps. Many of the proposals will have a negative impact on those who need their cars in order to get to and from the Farncombe area. They are particularly damaging to people who live higher up the hill, for whom walking to and from Farncombe is not a sensible option.

There is far too much junction narrowing and carriageway narrowing in these schemes. I see these as making what should be a simple road into an obstacle course.

I have responded to many individual points via the consultation website.

To sum up

Most of the proposals will have a negative impact on those who need their cars in order to get around the Farncombe area. Yet they will not lead to any benefits for people in Farncombe itself. They are not improvements, but the exact opposite. With just a few exceptions, all the proposed changes should be scrapped.

As campaign manager for Reform UK in the constituency which includes Farncombe, I shall be urging all our candidates, members, and supporters to fight against these proposals with everything they have.

And to voters: If your car matters to you, only one party will stand up for you. And that’s Reform UK. Vote Reform UK in the West Surrey elections on May 7th if you want to keep your freedom to drive.