The political right wing screamed ‘cancel culture’ in reaction to any attempt to correct their lies and disinformation.
Yet, who is doing most of the cancelling?
The very same people with the loudest and most powerful voices: those in Trump’s MAGA movement and his administration.
It appears at every level, from the petty to the lethal …
When Trump responds to a reasonable question from a journalist by calling her a nasty woman, he reveals his own nasty character while projecting it onto someone who dares to call him out.
Trump claimed that an election was stolen, and was then caught out trying to convince an official to “find” some more votes. His administration’s SAVE Act would disenfranchise millions, yet he continues to blame Democrats for undermining fair elections.
He doesn’t want to accept the implications of what climate science has established over 200 years of forensic analysis, so what to do? Easy: just defund it or close it down, then claim the scientists are producing fake science and justify his actions on the basis of disinformation.
He claims that a protestor shot dead by ICE was a terrorist, when it is the masked MAGA acolytes being sent into American cities who are the ones doing the terrorising.
The list goes on.
Don’t confuse this with some kind of political playground tactic – “you’re a liar”, “No, you’re a liar” back and forth. It has far more sinister roots than that.
The Nazis claimed, without evidence, that Jews were planning a terror campaign against the German people, and used this as a pretext for the Kristallnacht pogrom – a campaign of terror on the path that ultimately led to the Holocaust.
Hutus were encouraged to accuse the Tutsi of planning what the Hutu militias were already planning. It led, as planned, to the Rwandan Genocide.
There is a term, ‘accusation in a mirror’, coined by the French social psychologist Mucchielli in the context of the 1968 protests, which can be applied to this well-rehearsed political strategy:
“Mucchielli described accusation in a mirror as imputing to the adversaries the intentions that one has oneself or the action that you are in the process of enacting. Mucchielli explained how the perpetrator who intends to start a war will proclaim his peaceful intentions and accuse the adversary of warmongering; he who uses terror will accuse the adversary of terrorism.”
I had a debate with someone on social media about a phrase that might work better in an Anglophone and particularly American context. We toyed with Mirror Move, Blame Bluff, Project Play, and several others, but in the end settled on Mirror Politics.
Whatever we call it, this is a central plank of the right wing approach to politics, in the US and in the UK, and we need to recognise it for what it is and call it out, because while it may seem an exaggeration to use examples from Germany and Rwanda, there is a warning from history that over time the accusations and therefore the mirrored intentions can escalate.
This is part of a broader range of malign tactics and strategies that has been termed Dangerous Speech. As Susan Benesch (Executive Director of the Dangerous Speech Project) writes:
“This is a time of fear in the world, and fear is an opportunity for autocrats who use it to consolidate power by using dangerous speech. At the same time, large numbers of people are mobilizing against weaponized fear and violence. We can support them, since the best way to make dangerous speech less powerful is to teach people about it. We are here for that.”
With the mid-terms approaching, American democracy at least is now clearly in the firing line.
When Trump, Vance or any of his MAGA entourage make accusations of bad intentions or plans directed at their opponents – or anyone who exercises their free speech to challenge them – be wary!
The chances are that is exactly what they are doing or planning.
The irony of JD Vance suggesting Britain is in the grip of cultural decline then holidaying in the Cotswolds was not lost on the natives who protested his presence, or the staff who refused to serve him at an up-market pub. Given the state of the USA at present, with its rapidly receding soft power, one might suggest he looks closer to home for cultural collapse.
It seems that, much to the surprise of the ill-educated VP, the Cotswolds is not an England of Mary Poppins and country cottages, preserved in aspic. In fact, the Brits have never been like that, except to gullible tourists. Behind a facade of tranquillity, we’ve always been a pretty feisty lot when we need to be.
We also have a history of absorbing diversity. Just study the archaeology of the London that Rome founded, or the tens of thousands of Huguenots who fled to Britain. They were not just sheltered here, but played a significant role in our commercial and cultural development. The diversity we find in London’s cuisine today is just another indicator. Trump’s relentless attacks on London’s Mayor rail at this diversity success story with barely concealed racism.
There is now a racially motivated right wing MAGA movement in the USA. This is an old story, and it never ends well.
It is no different in essence to every other racially motivated project that sought ill-conceived racial ‘purity’ over diversity. The list is a long one, and in no particular order: genocide in the Balkans; Apartheid in South Africa; Hindutva in India; the Holocaust/Shoah in Nazi Germany; and the ethnic cleansing perpetrated across the empires of Britain and other European powers.
Interestingly, exploitation of indigenous land and peoples, with its attendant extractivism and racism, has often been linked to climate change and continues to be so [1].
Eugenics was so popular in Britain that both the left and right promoted it. Francis Galton was not alone. As Adam Rutherford documents in his book Control: The Dark History and Troubling Present of Eugenics, many of our best known cultural figures were supporters. It was establishment thinking for the likes of H G Wells, Winston Churchill, George Bernard Shaw, Marie Stopes, and more.
They based their erroneous beliefs in part on a simplistic hereditarian mindset, which is perpetuated in how we’ve been taught eye colour genetics in school [2]. Some Eugenicists proposed genocide while others proposed ‘humane’ sterilisation. We are ignorant of this history because we choose not to face it.
The need for identity is a strong pull factor in all of us, so erroneous genetic beliefs persist in apparently benign forms, turbocharged by those DNA services that might tell you that you are 10% Nordic. “Phew, I made it”, I hear some poor MAGA convert announce.
All nonsense, but almost everyone plays the game “your paintings are really good but then there have always been great artists in the family”, I am told. Nope! I had an interest in art and worked very hard to develop my practice; no freebies [3].
The desire for identity can so easily turn toxic, and it seems the US Administration under Donald Trump now equates diversity with cultural collapse.
David McWilliams shows in his book Money: A Story of Humanity that diversity is always the route to greater prosperity. He gives many examples, but the rich diversity of Norman Sicily is perhaps the most impressive of all.
We can learn much from nature in this regard, because nature abhors monocultures. It withers amongst the neatly trimmed lawns and acres of hard standing in America’s suburbia, where nature is curated almost to extinction.
Nature flourishes in messy diversity, as in a coral reef. Human societies and cultures do too.
So, let’s end our simplistic hereditarian mindset for good, and embrace the diversity that always has, and always will, enrich our lives culturally, commercially and in our communities.
The European colonisation of the Americas killed so many indigenous people (about 56 million by 1600) that forests grew back where their crops once grew, lowering the carbon dioxide concentration in the atmosphere and cooling the Earth. Our contemporary extraction and burning of 300 million year old fossil fuels is not only warming the planet by putting ancient carbon into the atmosphere, but severely polluting indigenous lands: the water resources in North America polluted by tar sands mining; the decades-long impact of Shell’s oil extraction on the Niger delta; the environmental catastrophe created by the monumental Deepwater Horizon oil discharge; the list goes on. Amitav Ghosh’s The Nutmeg’s Curse: Parables for a Planet in Crisis gives a visceral historical account of the connections between empire, racism, extractivism and climate change.
Gene expression is more complex than the simple Mendelian theory of dominant and recessive genes. For eye colour there is a gene for colour, but also a gene that controls the extent of expression of the colour gene. So in practice we get a spectrum of eye colour that includes hazel, for example. While brown is dominant (the simple rule says brown trumps blue), two brown-eyed parents can in fact sometimes have a blue-eyed child.
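Even the textbook one-gene model allows that surprise. A short sketch enumerating the Punnett square for two brown-eyed carrier parents (this uses the simplified Mendelian model that, as noted above, is itself an oversimplification of real eye-colour genetics):

```python
from itertools import product

def child_genotypes(parent1, parent2):
    """Enumerate the four equally likely allele combinations (the Punnett square)."""
    return [a + b for a, b in product(parent1, parent2)]

def eye_colour(genotype):
    """In the simple Mendelian rule, 'B' (brown) is dominant over 'b' (blue)."""
    return "brown" if "B" in genotype else "blue"

# Two brown-eyed parents who each carry one recessive blue allele (Bb)
outcomes = [eye_colour(g) for g in child_genotypes("Bb", "Bb")]
print(outcomes.count("blue") / len(outcomes))  # 0.25: a 1-in-4 chance of a blue-eyed child
```

So even under the over-simple model, brown-eyed parents can have a blue-eyed child; the real biology only widens the possibilities further.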
I’m a decent painter mainly because of hard work. I’ve always loved art and science, but at school I was forced to choose, and I chose science. My wife and I visited many exhibitions over the years, but always as onlookers. Only in retirement did I find the time to really focus on developing my art. It’s taken 10 years since then to really master it. I reaped the rewards of hard work and great mentors, not some easy “it’s in your genes” freebie. Even accepting that ‘nature’ and ‘nurture’ each play a role, we put far too much weight on ‘nature’ in many cases.
If people are confused about what to do about climate change in their everyday lives, they have every right to be.
Fossil fuel companies have for decades funded disinformation through a network of ‘think tanks’ and commentators, planting stories in the media. This was all helped by PR and advertising agencies who know how to play on people’s emotions: to create fear, uncertainty and doubt.
Many have explored this issue more deeply than I ever can or will. Notably, Oreskes and Conway showed, in their book Merchants Of Doubt [1], how the same tactics (and in some cases the same people) that tobacco companies used to promote smoking and deny its harms were redeployed to sow doubt about climate science.
We might imagine we can now see through their tactics. I’m not so sure. I feel there is a tendency amongst some progressives to almost fall into the trap of amplifying the messages.
I am thinking of those who claim that heat pumps are for the comfortably well-off, and that it is not fair to push them on those in energy poverty. The alternative – sticking with the comfort zone of insulating homes – came to be the default. This is not fair to anyone.
Before we get on to that, let’s start with the birth of ‘climate shaming’.
Climate Shaming 1.0: It’s your demand that’s the problem!
It is well established that fossil fuel companies like Exxon and their network decided to make you, the consumer, the problem [2].
The message:
It’s you driving your car and running your gas boiler. We are just meeting your demand, so don’t blame us.
Intended result:
Guilt, denial and inaction.
It is even alleged that BP and their communication agency Ogilvy cooked up the idea of ‘carbon footprint’ [3]. We could all then measure our level of guilt. No wonder people often resorted to tiny actions to salve that guilt, when they felt powerless to do more.
Yet, there is a counter argument that while this was and remains a key plank in the strategy to delay action, measuring things can be useful. What is needed is to shake off the guilt and find ways to act.
Climate Shaming 2.0: It’s all your fault!
Shaming has metastasised into everything we do that we can feel guilty about, where fossil fuels are often out of sight.
There are many voices at work here, but in the background, fossil fuel interests are keen to keep the heat on you, dear citizen, rather than them.
They will claim to be doing their bit, with greenwashing PR and advertising … now over to you people!
While they don’t control every part of the conversation we have amongst ourselves, they have the wherewithal to influence it in myriad ways. The message we receive is, “don’t do this bad thing” (but we, fossil fuel interests, won’t help you):
Don’t fly to Europe (but we won’t divert fossil fuel investments into trains)
Don’t eat meat (but we’re happy to reinforce your guilt when the Amazon burns for cattle feed)
Don’t eat Ultra Processed Foods (but, like the food behemoths, we work hard to ensure lawmakers give our fossil fuel interests a free pass)
Feeling guilty? Feeling helpless?
(laughing emoji from fossil fuel boardrooms)
Recognising our agency
We are told by some progressive politicians and commentators that it’s all about system change, and that we should reject the idea that it is our fault. We can’t take an electric bus if there is a bad bus service (and the buses are still run on diesel); we need to invest in rural public transport, not just in the cities.
There is a lot of truth in this, but it isn’t quite that simple.
We are not separate from the system, and it is hardly ‘systems thinking’ to imagine such a separation. The system includes Government, business, civic society and the natural environment, interacting in numerous ways.
Citizen-consumers have a lot of identities (community members, consumers, voters, parents, volunteers, etc). These identities each have their own form of agency, which we can choose to use. We need the spirit of positive change in the choices we make:
To choose who to vote for.
To choose where we spend our money.
To choose where to go on holiday and how to get there (and if/how often to fly).
To modify our diet (reducing meat if not eliminating it).
To decide to buy quality clothing that is repairable (looking and feeling better).
To decide where we bank and where we invest through our pensions.
Even when an action one would like to take (like getting an EV) is not yet in reach, one can keep exploring options and set a goal for when it does come within reach.
Setting goals too is an achievement.
The shaming tactic of the fossil fuel interests is aimed at breaking our sense of agency. We have to organise and support each other and reclaim our agency, as individuals and as communities.
The Take The Jump initiative [4] espouses practical steps we can take, while recognising we also need system change.
Electrification of energy end-use is a key threat to fossil fuel interests
There are a range of solutions available now to make a serious dent in our carbon emissions. The most significant and relatively easy thing to achieve is to electrify our primary energy and energy consumption. These solutions are so brilliant they have become a threat to fossil fuel interests, notably:
Electric Vehicles (EVs) of all kinds will not only clean up our towns and cities but are much more efficient than their fossil fuel alternatives, requiring only a third of the energy of a petrol/diesel car.
Heat Pumps are so much more efficient than their fossil fuel alternatives. They require between a third and a fifth of the energy needed to run a gas boiler.
Both EVs and Heat Pumps are powered by electricity. When generated by solar and wind, the energy source is free and effectively unlimited, because it derives from the Sun (which delivers about 10,000 times as much energy to Earth as humanity is ever likely to need).
There has been an incessant effort by the network of fossil fuel interests to plant stories and create memes aimed at trying to undermine this transition to clean, electrified energy use.
They know they will eventually lose, because the science of thermodynamics and economic reality make it inevitable. Yet they will try to delay the transition for as long as possible, so that they can extract as much fossil fuel as they can and avoid ‘stranded assets’. If they truly cared about climate change they would be working to leave it in the ground.
This essay is not the place to enumerate every myth and piece of disinformation that relentlessly circulates on social media about EVs and Heat Pumps. Carbon Brief have done the myth busting for you [5].
Climate Shaming 3.0: It’s ok for you woke well-to-do!
In order to counter this threat a new form of shaming emerged, particularly in relation to personal choice. I’m calling it Climate Shaming 3.0.
If one believed the framing so often evident in right-wing papers like the Mail and Telegraph titles, EVs and heat pumps are (paraphrasing):
… for the woke well-to-do – something they can afford but is not any good for most people …
If it were only these usual suspects, one might shrug off this chatter.
Unfortunately, there has emerged an unholy alliance amongst those who would regard themselves as green progressives (in a non political sense), who are in a way doing exactly what the fossil fuel messaging is intended to promote.
We have politicians of all kinds who have been cowed by toxic reporting on heat pumps who – wanting to show they are addressing fuel poverty – will talk endlessly about the need to insulate homes. Yet they dare not use the words ‘heat pump’ for fear of being accused of elitism (even though a heat pump is a far more cost-effective route to decarbonising heating than deep retrofit [6]).
They must be laughing their heads off in the boardrooms of fossil fuel companies.
Is it really ‘climate justice’ to promote the poorly designed ECO (Energy Company Obligation) scheme that the NAO (National Audit Office) declared [7] has been a total failure? The NAO found that external wall insulation, for example, has led to bad, and often exceptionally bad, outcomes 98% of the time. This has required very expensive re-work in many cases, compounding the injustice.
This is to be contrasted with the BUS (Boiler Upgrade Scheme) that – despite all the claims about a lack of skills in the sector – has really helped to prime the pump of the heat pump sector, and can be regarded as a success.
Communities like Heat Geek are really shaking things up too, to lower installation costs and improve the quality of installations (to the level already practiced by many small businesses with great track records).
The unholy alliance extends to plumbers, retrofit organisations, council officers, architects and politicians who claim you cannot heat an old building without deep retrofit. A false, disproven claim, but repeated as often as the story about British pilots seeing better in WWII thanks to eating carrots.
Some untruths live on through repetition.
The idea that we can insulate our way out of energy poverty, without pushing at least as hard on rolling out heat pumps (individually or via shared heat networks), is an illusion that would leave us burning gas for much longer than necessary.
More laughter from those boardrooms.
Insulation, replacing windows and other fabric measures are important but you can easily blow so much money on these that you leave nothing in the pot for a heat pump [6].
Here is a diagram from Nesta, based on one I originally produced, to which I have added some further annotations (see [6] for the Nesta version):
That is not climate justice, or fair on anyone.
It is not climate justice for those in energy poverty to have to pay for gas that will inevitably go through repeated market crises and cost spikes in its dying decades.
Climate justice is future proofing our electricity supply, the grid, our homes and our streets.
These will then be not only cleaner and more efficient but future proofed. As the late Professor David MacKay observed, once you have electrified the end use of energy, the electricity can come from anywhere: from your roof, from a community energy project, or from a wind farm in the North Sea.
It’s time that those who claim to be progressives stopped falling for the tactics of fossil fuel interests, which time and again are slowing our transition to a clean energy future and action on climate change.
It started with shaming people for their consumption. Let’s not fall for the new tactic of shaming those who actually care enough to adopt effective solutions.
References
[1] Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, Naomi Oreskes and Erik M. Conway, 2010, Bloomsbury Press.
[2] Exxon Mobil’s Messaging Shifted Blame for Warming to Consumers, Maxine Joselow & E&E News, Scientific American, 15th May 2021.
All the talk of the ‘spark gap’ – the particularly high ratio of electricity unit prices to gas unit prices – might deter people from getting a heat pump, because they think it will mean paying more for their heating than they do currently. But this is false in the majority of situations where householders are replacing an old gas boiler at the end of its life.
Let’s run the numbers.
Take a building that currently consumes 30,000 kWh of gas for heating per year.
At a gas unit price of 6p/kWh, that totals £1,800 per year (for the moment ignoring standing charges, for simplicity).
Let’s assume the old gas boiler is 75% efficient (in many cases this will be quite optimistic).
So the building actually needs 22,500 kWh of heat reaching the radiators (0.75 × 30,000 = 22,500).
So the question is: can a heat pump’s high relative performance counteract the ‘spark gap’ and make it cheaper to run? Let’s see …
Let’s assume a reasonable minimum achievable heat pump system SCOP of 3.5.
So the heat pump needs 6,429 kWh of electricity to deliver 22,500 kWh of heat (22,500 / 3.5 = 6,429).
At an electricity unit price of 22p/kWh, that totals £1,414 per year.
That is a saving of £386 in running costs.
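The arithmetic above can be captured in a few lines. This is a sketch using the stated assumptions (standing charges still ignored at this stage):

```python
# Running-cost comparison: end-of-life gas boiler vs heat pump.
# All figures are the essay's assumptions, not live tariff data.
GAS_KWH = 30_000          # annual gas consumption shown on the bill, kWh
GAS_PRICE = 0.06          # gas unit price, £/kWh
ELEC_PRICE = 0.22         # electricity unit price, £/kWh
BOILER_EFFICIENCY = 0.75  # old boiler: 75% of gas energy reaches the radiators
SCOP = 3.5                # heat pump seasonal coefficient of performance

heat_needed = GAS_KWH * BOILER_EFFICIENCY   # 22,500 kWh of useful heat per year
gas_cost = GAS_KWH * GAS_PRICE              # £1,800
hp_electricity = heat_needed / SCOP         # ~6,429 kWh of electricity
hp_cost = hp_electricity * ELEC_PRICE       # ~£1,414
saving = gas_cost - hp_cost                 # ~£386

print(f"Gas: £{gas_cost:.0f}, heat pump: £{hp_cost:.0f}, saving: £{saving:.0f}")
```

Changing `SCOP` or the unit prices shows immediately how sensitive the saving is to those two numbers, which is the point of the health warning below.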
Health Warning: The difference is very sensitive to the ‘spark gap’ (ratio of electricity to gas unit prices), and crucially the SCOP.
Now, I am not saying there is not an issue with the ‘spark gap’. Adoption rates in Europe show that the smaller the spark gap, the higher the adoption of heat pumps (see ‘Figure 2.4 Comparison between the heat pump market share, the number of heat pumps installed, and electricity and gas price ratio for countries in Europe in 2023’, Progress in reducing emissions – 2025 report to Parliament, 25 June 2025).
However, when people talk about the spark gap they seem to assume the context is ‘buy a new gas boiler or buy a heat pump’. Needless to say that is a higher bar but not an insurmountable one. Many people who are concerned about climate change and have an ageing gas boiler simply want to know that their heat bills will not rise.
Now back to standing charges. I reran the numbers for different SCOPs and included standing charges (see NOTES for assumptions). The ‘breakeven’ SCOP is then close to 2.9, which frankly only an incompetent heat pump installer would fail to exceed.
And what is more, for any of these SCOPs the carbon saving is at least 4 tonnes of carbon dioxide equivalent per year. So both the planet and the bank balance can be happy with the choice.
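The figures in the NOTES can be checked with a short sketch (every number below is an assumption stated in the NOTES, not a live tariff):

```python
# Breakeven SCOP with standing charges included, plus the carbon saving.
HEAT = 22_500                        # kWh of heat needed per year
GAS_PRICE, ELEC_PRICE = 0.06, 0.22   # unit rates, £/kWh
GAS_STANDING = 0.28 * 365            # gas standing charge, £/year
ELEC_STANDING = 0.59 * 365           # electricity standing charge, £/year
GAS_CI, GRID_CI = 184, 124           # gCO2e/kWh: gas vs 2024 UK grid

# Annual cost of the old 75%-efficient gas boiler, standing charge included
gas_bill = (HEAT / 0.75) * GAS_PRICE + GAS_STANDING

# Breakeven when HEAT/SCOP * ELEC_PRICE + ELEC_STANDING equals the gas bill
breakeven_scop = HEAT * ELEC_PRICE / (gas_bill - ELEC_STANDING)
print(f"Breakeven SCOP: {breakeven_scop:.2f}")   # close to 2.9, as stated

# Carbon saving even at a poor SCOP of 2.5
gas_co2 = (HEAT / 0.75) * GAS_CI / 1e6           # tonnes CO2e per year
hp_co2 = (HEAT / 2.5) * GRID_CI / 1e6
print(f"Carbon saving: {gas_co2 - hp_co2:.1f} tonnes CO2e per year")  # > 4 tonnes
```

This treats switching as dropping the gas standing charge entirely while keeping the electricity one, which is how the NOTES breakeven of about 2.9 comes out.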
So, let’s fix the spark gap, but stop banging on about it as though it is a reason not to press on with rolling out heat pumps.
(c) Richard W. Erskine, 2025
NOTES
Assumptions used in table: With heat demand of 22,500 kWh and old gas boiler with efficiency of 0.75 (75%), so gas bill showing 30,000 kWh primary energy used by gas boiler. Used standing charges of 28p and 59p per day for gas and electricity, and unit rates of 6p/kWh and 22p/kWh, respectively. The breakeven running costs SCOP in this case is 2.935. Also, a carbon intensity of gas of 184 gCO2/kWh and for UK electricity grid (for 2024) of 124 gCO2/kWh; so even at a SCOP of 2.5 you save 4.37 tonnes of CO2 a year.
Most people have heard about chaos theory, especially as it applies to weather, but may be a little fuzzy about what it all means. They may even hear people claim “if they can’t even predict the weather in a month’s time, how on earth can they tell us what the climate will be in 25 years’ time?!”.
It’s a fair challenge, but one that has been answered many times by climate scientists [1], but often in ways that perhaps are not as accessible as I feel they could be. When I was recently asked this question I was frustrated I could not share a plain English article with them.
So here is my attempt in plain, non-scientific language to explain how we can project future climate, despite ‘chaos’. I will use the analogy of rolling dice to help explain things – so no equations or mathematical jargon, I promise.
Chaotic Weather
Let’s start with the discovery of ‘chaos’ by Lorenz in 1963 [2]. Weather projections have to start from the current state of the weather and then project forward. The models incrementally step forward to see how the weather patterns evolve over minutes, hours and days. Lorenz discovered that even with the simplest models, if one did two ‘runs’ of the model with an infinitesimal difference in initial conditions (e.g. the temperature in Swindon at 15.0°C and 15.00001°C), the predicted weather can look very different in just a few weeks.
If this was just a trivial observation that errors can magnify themselves in a complex system, one might be tempted to shrug one’s shoulder – and it was not even a new insight [3]. But Lorenz discovered something far more profound: beautiful patterns amongst the chaotic behaviour of complex systems (think of the eddy currents that appear in the turbulent flow of a river). For those interested in learning more about Lorenz’s mathematical legacy, Professor Tim Palmer gave an interesting talk on this [4].
I say ‘errors can magnify’ because sometimes you end up with a chaotic outcome and sometimes you don’t [5]. This is important if you are about to head off to Cornwall for your summer holiday. Weather forecasters now do multiple runs of the models, varying the initial parameters [6]. If all the outcomes look similar then the weather system is not behaving chaotically – at least over Cornwall for the period of interest – and the weatherman can say confidently “it will be dry next week over Cornwall”. If, however, out of 100 runs, 20 indicate wet and windy weather and the rest are dry, they’d say “There is a good chance of dry weather over Cornwall next week, but there is a 20% chance of wet and windy weather”, so take your waterproofs!
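This sensitivity to initial conditions is easy to demonstrate without a weather model. Here is a sketch using the logistic map, a standard toy chaotic system (not Lorenz’s own equations): two runs that start a hair’s breadth apart soon bear no resemblance to each other.

```python
# Two runs of a chaotic system differing by one part in a million at the start.
def logistic_run(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x); this map is chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_run(0.400000, 50)
b = logistic_run(0.400001, 50)  # initial difference of one part in a million

print(max(abs(x - y) for x, y in zip(a[:6], b[:6])))    # still tiny early on
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # order-one divergence later
```

The tiny initial difference roughly doubles each step, so within a few dozen steps the two trajectories are effectively unrelated – the same behaviour Lorenz found in his weather model.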
Predictable Climate
It really is all about the question being asked, as with most issues in the world. If you ask the wrong question, don’t be surprised if you get a misleading answer.
If I ask the question “will it be sunny in Cornwall on the 3rd of July of 2050?” (wrong question) then it is impossible to say, because of ‘chaos’. If, on the other hand, I ask the question “do we expect the average temperature over Cornwall to be higher in the summer of 2050 as a result of our carbon emissions compared to what it would have been without those emissions?” (longer but valid question) I can answer that question with confidence; it is “Yes”.
This illustrates that when we talk about weather we are interested, as in our holiday plans or a farmer harvesting their crops, in the specific conditions at a specific place and specific time.
Climate is very different, because it is about the averaged conditions over a longer period and typically wider area.
Throwing the dice
I want to illustrate the difference between these two types of question (specific versus averaged) by use of a dice [7] analogy.
If I throw a dice, I expect the chance of getting a 6 to be 1 in 6. If I ask the (specific) question ‘what will the hundredth throw of the dice show?’ (think weather), I am no more certain of the outcome than after 10 throws [8].
Now ask a different question: ‘what will be the average number of 6s after 600 throws?’ (think climate). I would expect it to be around about 100. As the number of throws increases I’d expect the average (number of 6s divided by the number of throws) to get closer and closer to 1 in 6.
This is just how statistics comes to the rescue in the face of the much used, and abused, “chaos” in the climate debate.
You can do this yourself. Make multiple throws of a dice, and after each throw, take the count of the number of 6s thrown and divide by the number of throws – that is the observed odds. You might be surprised to find how long it takes before the odds settle down to close to 1 in 6.
Being lazy, I wrote a little program to plot the result (using a random number generator to do the ‘throwing’ for me).
The averaged number of 6s converges on the expected odds of 1/6 (shorthand for ‘1 in 6’).
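My little program isn’t reproduced here, but a minimal sketch of the ‘throwing’ part (plotting omitted, with a seeded random number generator standing in for real throws) might look like this:

```python
import random

def observed_odds_of_six(throws, seed=42):
    """Running fraction of 6s after each throw of a fair dice."""
    rng = random.Random(seed)       # fixed seed so the run is repeatable
    sixes, odds = 0, []
    for n in range(1, throws + 1):
        sixes += (rng.randint(1, 6) == 6)
        odds.append(sixes / n)
    return odds

odds = observed_odds_of_six(6000)
print(odds[9], odds[-1])  # noisy after 10 throws; close to 1/6 ≈ 0.167 by the end
```

Plotting `odds` against the throw number reproduces the convergence described above: wild swings early on, settling towards 1 in 6 as the throws accumulate.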
I then imagined two dice, one that was ‘fair’ (where the odds of throwing a 6 were 1 in 6) and a ‘loaded’ dice (where the odds have changed to 1 in 5). This is an analogy for a changed climate where carbon emissions have been happening for some time but have now stopped, and there is a raised but stable concentration of greenhouse gases in the atmosphere. This gives rise to a higher averaged temperature, represented by the higher odds of throwing a 6 in this analogy (see next illustration).
Despite the uncertainty in any specific throw (think weather) in both cases, the average chance of getting a 6 can be predicted (think climate) in both cases. We can see the loaded dice clearly in the graph, compared to the fair dice. In both cases it takes a little time for the influence of randomness (chaos if you like) to fade away as the number of throws increases.
However, the emissions have not stopped, and in fact have been growing since the start of the industrial revolution. There has been a significant acceleration in emissions in the last 75 years. So the amount of accumulated greenhouse gases in the atmosphere has been growing, and with it, the averaged surface temperature on Earth.
So, taking the analogy one step further, I created a dice that gets progressively more ‘loaded’ over time (think each year of emissions).
Now, the averaged chance of throwing a 6 will progressively increase, compared to the fair dice. This is illustrated in the next graphic.
Again, we see the averaged odds after a number of throws jump around for quite a while (think chaos), but things settle down after several hundred throws.
We now see a clear and ever widening gap between the two dice.
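The progressively loaded dice can be sketched in the same style. Here the loading drifts towards 1 in 4 by the end of the run; that final figure is exaggerated purely to make the widening gap obvious in a short run, and is my choice for illustration, not a figure from the essay:

```python
import random

def running_odds(throws, p_six, seed=0):
    """Running fraction of 6s, where p_six(n) gives the odds of a 6 on throw n."""
    rng = random.Random(seed)
    sixes, odds = 0, []
    for n in range(1, throws + 1):
        sixes += (rng.random() < p_six(n))
        odds.append(sixes / n)
    return odds

N = 10_000
fair = running_odds(N, lambda n: 1 / 6)
# Progressively loaded dice: odds drift from 1/6 towards 1/4 over the run
# (think: each throw as a year of accumulating emissions).
loaded = running_odds(N, lambda n: 1 / 6 + (1 / 4 - 1 / 6) * n / N)

print(fair[-1], loaded[-1])  # the gap between the two keeps widening
```

Both series jump around early on (the ‘chaos’), but the loaded dice pulls steadily away from the fair one, just as the plotted illustration shows.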
This is analogous to what is happening with our climate: our continuing carbon emissions are progressively loading the ‘climate dice’.
No amount of weather chaos can cancel the climate statistics that become more evident with every year that passes.
Extreme Weather Events
Now, while weather and climate are different, because climate is an average of the weather over time, there is an interesting flip-side. Since the climate has changed due to our carbon emissions, the distribution of possible weather must also have shifted, to generate the new average.
This means that extreme weather events become much more likely.
Once again, this is just basic statistics. Events that may have been “one in a hundred years” become much more frequent, and very extreme events, like the 40°C we saw in England in 2022, that were “basically impossible” without our carbon emissions [9], now start to happen.
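A toy calculation shows the statistical point: shift a bell curve of summer peak temperatures up by just one standard deviation and the probability of exceeding a fixed extreme threshold multiplies dramatically. The numbers below are purely illustrative, not a fitted climate model:

```python
# How a small shift in the mean inflates the odds of extremes in the tail.
from statistics import NormalDist

baseline = NormalDist(mu=30, sigma=2)  # hypothetical pre-warming peak temps, °C
shifted = NormalDist(mu=32, sigma=2)   # same spread, mean shifted up by 2 °C

threshold = 38.0                        # a once-rare extreme
p_before = 1 - baseline.cdf(threshold)  # probability of exceeding it, before
p_after = 1 - shifted.cdf(threshold)    # and after the shift

print(f"{p_after / p_before:.0f}x more likely")
```

A one-sigma shift in the mean turns a four-sigma event into a three-sigma one, which is why ‘basically impossible’ extremes start turning up: the tail of the distribution thickens far faster than the mean moves.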
I don’t want to make this essay longer explaining how this works, and the Royal Statistical Society have done a great job on this, so please visit their explainer [10].
Extreme weather events are now popping up all over the world, almost on a weekly basis, and thanks to the statistics and associated modelling, scientists can now put a number on how much more likely each event has become due to our carbon emissions [11].
We have already loaded the climate dice, the question now is, how much more do we want to load it, and make the odds even worse?
Over the specific place and time period of interest, of course.
This is called ‘ensemble modelling’.
For the grammar police: common usage now prefers ‘dice’ for singular and plural cases.
In this sense, the dice analogy is somewhat different to climate, because climate change is conditional on what came before, but this does not change the point of the analogy – to distinguish between specific and averaged questions.
I respect those wishing to protect nature who are worried about unrestrained infrastructure projects, but the ‘unrestrained’ bit was never part of the plan, and strawman arguments now abound, such as the claim we will be building solar farms on prime arable farmland.
An astonishing 30% of UK land is devoted to grazing, and raised solar arrays can co-exist with grazing, even providing shade during heatwaves. It may even pay back some of the carbon impact of those methane-burping ruminants. Solar grazing (or agrivoltaics) is now a thing in some countries, so why is it not supported by organisations like the CPRE in the UK?
I have concerns about the impact of progressive weakening of the Government’s new infrastructure policies that may continue the blocking or delaying of essential on-shore renewable energy projects.
In his seminal book over 15 years ago, Professor David Mackay wrote1:
If the British are good at one thing, it’s saying “no.”
No to this solar farm; no to that wind turbine on that hill; no to that wind farm off my coastline; etc.
This, despite the fact that the Government’s most recent public opinion survey2 shows 80% are in favour of renewables; although when it comes to on-shore wind and solar farms in one’s locality, this drops to 37% and 47%, respectively.
Is this because the public are not aware of the benefits of local energy production? Or because not enough of it is community owned? Is it that people do not understand the nature of the emergency we face and the imperative to act?
We’ve seen over the sequence of three heatwaves3 recently (heatwaves that have been made much more likely due to man-made global warming4) that our beloved commons around Stroud now look more like the Serengeti than our green and pleasant land. This will be the new norm by 2050 if we don’t urgently address our emissions.
At this stage in the climate emergency, climate inaction is tantamount to climate denial.
The Climate Change Committee has made it abundantly clear that we need to electrify most of our economy to get to net zero expeditiously and affordably5. This applies to both generation and consumption:
“In many key areas, the best way forward is now clear. Electrification and low-carbon electricity supply make up the largest share of emissions reductions in our pathway, 60% by 2040. Once the market has locked into a decarbonisation solution, it needs to be delivered. The roll-out rates required for the uptake of electric vehicles (EVs), heat pumps, and renewables are similar to those previously achieved for mass-market roll-outs of mobile phones, refrigerators, and internet connections.”
and at a much lower cost than many have claimed:
“We estimate that the net costs of Net Zero will be around 0.2% of UK GDP per year on average in our pathway, with investment upfront leading to net savings during the Seventh Carbon Budget period. Much of this investment is expected to come from the private sector.”
Much has changed since David Mackay wrote his book. The costs of renewables have dropped, so they are now the cheapest form of energy (and onshore cheaper still). Yet I believe another kind of “No” has developed in the dialogue around renewables infrastructure.
There has emerged a false dichotomy between green energy infrastructure and nature. The case often presented is that to protect nature we have to limit infrastructure to only those places which no one cares about, like brownfield sites, which of course would completely undermine any attempt to reach the levels of onshore wind and solar that are needed to supplement offshore development. Meanwhile, there are many things harming nature which are much worse, including farming systems, tidy gardens, and climate change itself.
Take the rewiring of our electricity grid that is needed for an electrified economy. The case is made for burying cables rather than erecting pylons, because burial is assumed to be environmentally less harmful, despite the enormous increase in capital costs (and hence delays) that would result. In fact, the quite different, ultra-expensive cables required must be buried in wide trenches, and this can have impacts on flora and fauna, such as harm to tree roots and subsoil ecology, that exceed those arising from pylons.
Isn’t the honest truth that people simply don’t like their view being changed by the addition of renewables to the landscape and some use the nature card to avoid being labelled NIMBYs? I fear so.
Rodborough Common, 19th July 2025, by Richard Erskine
Conversely, we can fail to act and our grandchildren will see a landscape changed forever by our inaction. The Met Office’s most recent State of the Climate report6 states that under the intermediate pathway scenario (RCP4.5) “years 2022, 2023, and 2024 would likely be considered average by the 2050s and cool by around 2100”. Is that preferable to some wind turbines today offering local energy security and resilience, helping the local community do its part in decarbonising our economy?
The good news is that because of the enormous efficiencies of electrification and the end of burning fossil fuels, the primary energy required from renewables – about 800 TWh per year – would be about one third of the primary energy hitherto required from fossil fuels. Even if we almost double this – to allow for new demands like synthetic meats, AI, minerals recycling, etc – to about 1500 TWh, an Oxford University study7 shows wind and solar can power the UK. As Hannah Ritchie summarises the findings8:
“They think there is a large potential for offshore wind. This would be spread over 10% of the UK’s exclusive economic zone. Onshore wind could be used on 5% of British lands, and combined with farmland. 2% of British land would be used for solar PV, and could also be combined with farmland using a technique called ‘agrivoltaics’. Rooftop solar doesn’t add much – the output is quite small, even if 8% of British rooftops are covered. Definitely still a good option for individuals, but maybe not for the nation as a whole.”
For those who say “let others do it, because we are special”: don’t be surprised if everyone claims the same. It is analogous to a parent who says let other children take the vaccine (while their child benefits from community immunity so they can avoid the very small risk of side effects of inoculation). If everyone made that choice, everyone would be at risk.
Have we, in short, become too selfish to take the steps to act with the urgency needed to actually take declarations of a climate emergency seriously; to go beyond laudable actions like recycling to really substantive endeavours?
We need to make the difficult decisions needed but work hard to take people with us, rather than stoke fears as some political parties choose to. The political debate has created some surprising bedfellows amongst those opposing onshore renewables projects.
UK news coverage just triggered me so please excuse me but really …
Good news: the coverage of heatwaves is drawing the link with climate change on BBC and C4.
Bad news: there seems to be a lot of surprise at this! The dry conditions and repeated heatwaves are causing head scratching over questions like ‘who knew?’ and ‘does this herald worsening heat extremes?’.
Well hello people, this has all been completely obvious to scientists studying climate since at least the 1970s, but society has gone along with denial (yep, we’re all in denial, to some degree).
People talk about the elephant in the room – the thing no one has mentioned but really should not have been ignored. Well, here we have the scientists in the room, including the newsroom, now regularly demonstrating the long-predicted link between man-made global warming and extreme weather events and episodes.
The Met Office produces frequent decadal forecasts that few read, and then people get surprised when we have another 100-year heatwave or 100-year flood (following the last one 5 years ago; remember 40°C in the UK in 2022).
When the odds keep changing, the phrase “100 year event” (which we heard from ‘the orange one’ in relation to the deadly Texas floods) is meaningless and misleading, but unsurprising from someone well into his mission to dismantle the USA’s climate science capacity, weather forecasting, and ability to adapt and respond to extreme weather events driven by man-made climate change.
Switch off if you want to, but the simple truth is that every tonne of carbon dioxide we emit cumulatively turns up the climate one-way ratchet and increases the risk of extreme weather events (at both ends of the hydrological cycle, because warmer air holds more water).
More emissions. The dice gets loaded a bit more. The odds get changed a bit more. Repeat.
At this rate, by 2100, my great grandchildren will yearn for the (relatively) cool summers of the 2020s.
And because CO₂ is a long lived greenhouse gas, don’t expect the atmospheric concentration of it to fall anytime soon. Ratchets turn in one direction. Give it hundreds to many thousands of years before long-term carbon cycles begin to reduce atmospheric concentrations to comfortable levels for humanity, but by then on a changed planet.
Prevention is better than cure with a vengeance in this case.
Worried about heatwaves? You should be but please, don’t be surprised.
“We estimate that the net costs of Net Zero will be around 0.2% of UK GDP per year on average in our pathway, with investment upfront leading to net savings during the Seventh Carbon Budget period. Much of this investment is expected to come from the private sector.”
And 0.2% of roughly £3 trillion of GDP is just £6 billion a year (and most coming from industry), less than what the UK spends on fizzy drinks. Even the Government’s spending watchdog agrees. And what a fabulous investment with huge ROI (Return On Inhabitability). The costs of inaction make the costs of action look small by comparison.
Reject the populist science rejectionists who think denial wins votes.
I’ll always vote on behalf of those who come after us who I hope will be wiser, less selfish and less ignorant than our generation have been, yet will feel the full force of our failure to take urgent action when we should have.
Yet, it is not too late for us to reduce harms. The harm-free-option ship has sailed, but every tonne avoided makes a difference, and reduces the level and frequency of extremes to come.
I was excited to get my hands on Jean-Baptiste Fressoz’s latest book More and More and More – An All-Consuming History of Energy [1]. He offers up a very lively critique of the notion of historic energy transitions – from wood, to coal, to oil and gas.
His methodology aims to show how material flows are intimately linked to energy production in often surprising ways over time. For example we needed wood as pit props to mine coal, and in surprising quantities. Most of the book is devoted to examples of the symbiosis that has existed between the successive materials required to meet our energy needs. He mocks the idea of energy transitions with numerous well researched anecdotes, awash with surprising numbers. It is an entertaining read I would recommend to anyone.
However, I was expecting the book would close with some prescriptions that would show how the “amputation” the blurb called for could be achieved, but in the end he tells us he offers no solutions, or “green utopias”, as he discussed in an interview [2].
In the finale, he presents the newest energy transition – towards a world powered by renewables – as just the latest incarnation of a delusional concept, but largely abandons his methodology of using numbers to prove his case. I wonder why?
He does not deny the reality of a need to reduce carbon emissions, or the science of climate change, but it is clear he sees humanity’s insatiable appetite for energy as the central issue that must be addressed. He could have written a different book if that was his objective.
There are fundamental flaws in Fressoz’s scepticism of the renewables transition.
Solar abundance
The first of these is that the new source of energy that supplies our energy in a renewables future is our sun. Energy from the sun is a quite different category to that we extract from the ground.
The most pessimistic projection is that humanity, or what we may become, will have hundreds of millions of years left of usable energy from the sun [3]. No digging or extraction required. I’d call it functionally infinite on any meaningful timescale.
Not only that, but the sheer power of the sun’s energy is awesome, which we capture as wind, through photovoltaics, and the ambient energy harvested by heat pumps. As Frank Niele observed 20 years ago [4]:
“The planet’s global intercept of solar radiation amounts to roughly 170,000 TeraWatt [TW] ( 1 TW = 1000 GW). … [man’s] energy flow is about 14 TW, of which fossil fuels constitute approximately 80 percent. Future projects indicate a possible tripling of the total energy demand by 2050, would correspond to an anthropogenic energy flow of around 40 TW. Of course, based on Earth’s solar energy budget such a figure hardly catches the eye …”
It is clearly a category error to compare renewables with fossil fuels.
False equivalence
Ah, but what about the lithium and all those (scare story alert) “rare earths” needed to build the renewables infrastructure? This is the second flaw in the Fressoz thesis. The case of wood consumption for mining staying high even after the ‘transition’ to coal is an example of an essential material relationship between the kilowatt-hours of energy produced and the kilograms of material consumed. This link does not exist with renewables to any meaningful degree.
It has nevertheless become a popular belief amongst those questioning the feasibility of renewables. For example, Justin Webb on BBC Radio 4 [5] posed this question:
“Is it also the case of us of us thinking whether we can find some other way of powering ourselves in the future … [we are] just going from taking one out of the ground – oil – into taking another thing or another set of things just isn’t the answer, isn’t the long-term answer for the planet.”
This is another category error that unfortunately Fressoz seems happy to go along with. The quantities of minerals required are minuscule compared with the huge tonnage of fossil fuels that has powered our carbon economy, as CarbonBrief illustrated as follows, as part of a debunking of 21 myths about Electric Vehicles [6]:
Credit: CarbonBrief
This false equivalence between minerals extraction and fossil fuels extraction is now widely shared by those who prefer memes to numbers.
A detailed published analysis of the demands for minerals required to build out renewables infrastructure by mid century shows we have enough to do this, without assuming high levels of recycling [7]:
“Our estimates of future power sector generation material requirements across a wide range of climate-energy scenarios highlight the need for greatly expanded production of certain commodities. However, we find that geological reserves should suffice to meet anticipated needs, and we also project climate impacts associated with the extraction and processing of these commodities to be marginal.”
Yet many commentators claim we are in danger of running out of ‘rare earths’ (which they conflate with minerals in general).
Beyond that, it is true that for many minerals it is cheaper to mine them rather than recycle them, but Fressoz claims (p.218) “recycling will be difficult if not impossible”. There is no scientific basis for that claim. By 2050, one can expect that better design, improved technologies, economic incentives, and global coordination will become widely effective in tilting the balance to recycling rather than fresh extraction (and energy inputs to do this will not be an issue, as noted earlier).
And once you have built a wind farm, it will continue to provide energy from the wind (which is itself powered by the sun) for a few decades, without further material extraction or inputs; and the faster this is done, the cheaper it gets, saving trillions of dollars [8].
A renewables circular economy is perfectly feasible, following the initial build out of the new infrastructure by mid century, with abundant energy from the sun powering the recycling needed to maintain and refresh that infrastructure.
Intermittency and grid stability
It is sad that Fressoz decides to play the it-doesn’t-always-shine card when he writes (p. 212):
“At the 2023 COP, the Chinese envoy explained that it was ‘unrealistic’ to completely eliminate fossil fuels which are used to maintain grid stability”.
… as though that settled the argument. They may have said this for UNFCCC (UN Framework Convention on Climate Change) negotiating reasons, but it is frankly pretty depressing that Fressoz shared this quote as though it reflected current informed opinion on power systems.
Firstly, even fossil fuelled generation in the early 20th Century needed flywheels to level out energy supply, and in so doing, maintain grid frequency. Such devices can live on in a renewables dominated grid. More likely is the emergence of ‘grid forming inverter’ technology that can replace inertial forms of frequency response such as flywheels and turbines.
Secondly, there are several other ways in which a grid that is 100% based on renewables can remain stable, including what is called ‘flexibility’ (including demand shifting), and distributed energy storage.
The UK is rolling out a lot of battery storage, and these have the benefit of being able to be both large and small to support the network at local, regional and national levels. Battery Energy Storage System (BESS) technology is already making an impact in the UK, Australia and elsewhere [9] demonstrating the resilience that can be achieved in a well designed and well managed grid:
“Recently, a major interconnector trip sent the UK’s grid frequency plummeting. At around 8:47am on a morning in early October [2024], the NSL [North Sea Link] interconnector linking the UK and Norway, suddenly and with no warning, halted … with immediate and potentially disastrous impact on the UK’s electricity grid … battery energy storage systems (BESS) answered the call. Across NESO’s network [National Systems Energy Operator], 1.5GW of BESS assets came online to inject power into the system, bringing frequency to strong levels within two minutes.”
Far from renewables infrastructure causing a blackout, it prevented it. Other countries can learn from this (side eye to Spain!).
A near 100% renewables grid is well within the reach of countries like Australia, and others are not far behind [10].
As the infrastructure scales up, additional storage will be added, to deal with rare extended periods of poor sunlight and low wind. The Royal Society has provided recommendations [11] on how to handle such extreme episodes.
The Primary Energy Fallacy & Electrification
While Fressoz does talk about the efficiency arising from new forms of production and consumption, he does not really choose to provide any numbers (in stark contrast to the slew of numbers he uses when talking about wood, coal, oil, etc.).
He then makes the point (p. 214):
“In any case, electricity production accounts for only 40 per cent of emissions, and 40 per cent of this electricity is already decarbonised thanks to renewables and nuclear power.”
He channels arguments that readers of Vaclav Smil will be familiar with. Telling us how hard it will be to decarbonise steel, fertiliser production, flying, etc.; no solutions, sorry.
Even S-curves (that show how old technology is replaced by new) are disallowed in Fressoz’s narrative, because they are too optimistic, apparently, even though there is empirical evidence for their existence [12].
Just a ‘too hard’ message.
What he fails to mention is that the energy losses from using fossil fuels are so large that, in electrifying the economy, we will need only about one third of the primary energy hitherto needed (using renewables and nuclear). So, in the UK, if we needed 2,400 TWh (terawatt-hours) of primary energy from fossil fuels, in an electrified economy powered by renewables we would need only 800 TWh to do the same tasks.
The efficiencies come not just from power production, but also from end-use efficiencies, notably transportation and heating. By moving to electric vehicles (trains, buses, cars) and heat pumps, we require only one third of the energy that has hitherto been used (from extracted coal, oil and gas). This is massive and transformational, not some minor efficiency improvement that can be shrugged off, as Fressoz does.
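The ‘one third’ claim is just arithmetic, and can be checked in a few lines. A minimal sketch, using the round illustrative figures from this essay (not official statistics):

```python
# Round, illustrative figures from the essay, not official statistics
fossil_primary_twh = 2400      # UK primary energy currently from fossil fuels

# Roughly two thirds of that is lost as waste heat in engines, boilers
# and power stations before it delivers any useful service.
useful_twh = fossil_primary_twh / 3

# Renewable electricity reaches end uses (EVs, heat pumps) with far
# smaller losses, so the generation needed is close to the useful energy.
renewable_twh = useful_twh

print(f"{fossil_primary_twh} TWh of fossil primary energy -> "
      f"{renewable_twh:.0f} TWh of renewable generation for the same services")
```

The point is that it is the useful energy delivered, not the primary energy burned, that has to be matched.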
Green production of steel, cement and fertiliser is possible and in some cases already underway, although currently more expensive. Progress is being made, while flying is more difficult to crack. Together these sectors account for about a quarter of global emissions. Yet road transport and heating together also represent about a quarter of global emissions [13], and are easy to decarbonise, so I guess they don’t fit into the book’s narrative.
The surprise for many, who are effectively in thrall to the primary energy fallacy, is that we can raise up the development of those in need while not necessarily increasing the total energy footprint of humanity. We can do more and more, with less!
Who is deluded?
In his essay The Delusion of “No Energy Transition”: And How Renewables Can End Endless Energy Extraction, Nafeez M Ahmed offers an eloquent critique of Fressoz’s book [14].
A key observation Ahmed makes is that Fressoz’s use of aggregate numbers masks regional variations in a misleading way:
“Because he fails to acknowledge the implications of the fact that this growth is not uniform across the globe at all, but is concentrated in specific regions. The aggregate figures thus mask the real absolute declines in wood fuel use in some regions as compared to the rise in others. Which means that oil and wood fuel growth are not symbiotically entwined at all.”
Ahmed goes on to present the arguments about the different nature of the move to renewables, electrification of end-use and so on, in an eloquent and persuasive way. I strongly recommend it.
Fressoz is right to claim that many have been seduced by a simplistic story about past transitions. His book is very entertaining in puncturing these delusions, but he overplays his hand. Ahmed argues convincingly that Fressoz has failed to demonstrate that his methods and arguments apply to the current transition.
Fressoz’s attempt to conjure up a new wave of symbiosis fails because he misunderstands and misrepresents the fundamentally different nature of renewables.
Is there a case for degrowth?
Of course, we do live in a world of over consumption and massive disparities in wealth (and over consumption does not seem to be a guarantee of happiness).
The famous Oxfam paper on Extreme Carbon Inequality from 2015 [15] showed how the top 10% of the world (in terms of income) were responsible for 50% of emissions, and the bottom 50% were responsible for 10% of emissions. An obscene asymmetry. As Kate Raworth argues in Doughnut Economics, we need to lift up those in need, while reducing the overconsumption of some that threaten planetary boundaries.
Yet we do not help those in poor countries by getting them hooked on fossil fuels. Indeed, renewables offer the opportunity to avoid the path taken by the so called ‘developed world’, and go straight to community-based renewable energy. This can be done – at least initially – without necessarily needing to build out a sophisticated grid: solar, wind, storage and electrified transport, heating and cooking is a transformative combination in any situation. We can increase the energy footprint of the poorest (providing them with the development they need), while reducing their carbon footprint.
Yet many want to play the zero sum game. True, there is a carbon budget (to remain below some notional global target rise in mean surface temperature, we cannot burn more than a certain quantity of carbon; that quantity is the budget). We should share this dwindling budget fairly, but honestly, will we?
The game is nullified if people simply stop burning the stuff! The sun’s energy is functionally infinite (in any meaningful timeframe), so why not reframe the challenge? How about the poorest not waiting for, or relying on, the ‘haves’ suddenly getting a conscience and meeting their latest COP (Conference Of the Parties) promises? Countries like Kenya are already taking the lead [16].
Energy Independence and Resilience within our grasp
There are of course multiple interlocking crises (climate, nature, migration, water, and more). They are hard enough to deal with without claiming that energy should join them.
The land use needed for our energy needs is small compared to what is needed for agriculture and nature, so again, renewable energy is not part of another fictitious zero sum game involving land use.
A paper from the Smith School in Oxford [17] has found that wind and solar power could significantly exceed Britain’s energy needs. They found that even if one almost doubled the standard estimates of the energy needs (to cater for new demands such as circular economy, AI and synthetic meat in 2050), there were no issues with the area of land (or sea) required:
Solar PV: 4% of British rooftops
Solar PV: 1% of British land*
Onshore wind: 2.5% of British land
Floating offshore wind: 4% of the UK’s exclusive economic zone.
… and bearing in mind that 30% of land is currently used for grazing.
The scare stories about prime arable land being covered in a sea of solar panels are politically motivated nonsense.
I gave a talk Greening Our Energy: How Soon, on how to understand how the UK has made the remarkable transition from a fossil fuel dominated energy sector to our current increasingly decarbonised grid, and how the journey will look going forward (and in a way that is accessible to lay people) [18].
In a world of petrostates and wars involving petrostates, there have indeed been repeated energy crises, and they will get worse while we remain addicted to fossil fuels.
Transitioning to a green energy future is the way out. It is already under way, and we have the solutions. We just need to scale them up, and ignore the shills and naysayers.
Let’s not say or imply that solving the many injustices in the world is a pre-condition to addressing the energy transition. This is the false dilemma that is often presented in one form or another, often from surprising quarters, including ostensibly green ones. It is a prescription for delay or inaction.
Achieving green energy independence and resilience might actually undermine the roots of many of those power structures that drive injustices, because energy underpins so much of what communities need: education, health, food, and more.
John Lennon seems to say it right in his song “Power to the People”.
My short review of ‘The Many Lives of James Lovelock: Science Secrets and Gaia Theory’, Jonathan Watts
If you have been variously inspired, confused and infuriated over the years about James Lovelock, then this wonderful biography is a revelation. It’s a book that is impossible to put down. It explores the deep roots of Lovelock’s brilliant but often idiosyncratic character.
It doesn’t try to offer trite answers to this complex character, but does reveal surprising insights you won’t find anywhere else. It is so revealing that Lovelock’s undoubted brilliance in matters of science, was not matched by an equally advanced emotional intelligence.
Instead, we see an emotional vulnerability that was exploited by dark forces to co-opt him to an industry narrative on several occasions. Ultimately, he acknowledged this. I wonder if it was in part due to his fiercely declared independence, and his determination not to be seen as a leader of a green movement he saw as too susceptible to woolly thinking?
In my mind, his reaction to the green movement was rather in keeping with Le Chatelier’s Principle: a system will react to any constraint so as to oppose the constraint. He might have appreciated that chemistry metaphor! He seemed happier to express contrarian opinions, almost because it ruffled feathers. Unfortunately this then served the needs of those arch Machiavellian manipulators – notably Rothschild and Lawson – who played him, time and again. His need to please in such cases seemed to override his critical faculties in political matters, which he was so ill equipped to deal with.
The lasting feeling I had on finishing the book was one of poignancy. Lovelock achieved so much, and recognition aplenty, but he was never quite rewarded with the recognition of mainstream science he seemed to both recoil from but also craved.
At first I wondered how a biography whose chapters were titled by the key people in his life could work, but it worked brilliantly. The themes – a love of nature, invention, multidisciplinary problem solving, bombs and more – run through the book like a piece of Brighton rock, as does the evolution of the Gaia vision, from a formative idea to a fully fledged form that finally achieved scientific respectability; and continues to resist being pinned down.
I thoroughly recommend this biographical masterpiece.
A group of 6 ‘heat pump curious’ visitors, organised through our local climate group, cycled from Woodchester to Nailsworth to visit our heat pump and get their questions answered. My wife took some videos of me extemporising. It was a cold day (about 6C outside).
The house is over 200 years old, Grade 2 Listed, and with a floor area of about 250 square metres. Instead of having one large heat pump, Cotswold Energy Group, who installed the system 3 years ago, provided two smaller units. This had the benefit that for much of the year only one is running, as was the case on this visit (if, as one of the visitors pointed out, it had been -6C, then both would have been in operation).
Explaining the heat pump in plain English
So here is the first of the short videos – a plain English short talk (11 mins) explaining how a heat pump works and answering their questions.
You will notice that at one point I had to crouch down to see whether one or both units were running – rather demonstrating the low level of noise they produce. At another point in the video, some of the visitors had to shuffle sideways as they were experiencing the cold air from the heat pump (which extracts heat from the air, as explained in the video). I also mention a figure of 20 litres of water a minute as the rate of flow through the radiators. This was illustrative only and not a fixed number, as it depends on a number of factors, and may have been more than that on this occasion.
Explaining why underfloor heating is not required, using a simple model
In this video (2 mins) I explained how radiators can deliver the same heat as underfloor heating, by using a simple paper model for explanation.
Everyone commented on how cosy the room was. We looked at a thermometer that showed it was at 21C. I then got someone to put their hand on the radiator and asked if they thought it was on; they weren’t sure. I used a thermometer gun to see how hot it was – it was only 30C. Then I got them to turn their hand palm upwards and found the temperature was 28C, so of course the radiator didn’t feel “hot”. But it doesn’t need to be 60C or 70C to heat the room, just greater than the target temperature (21C), and sufficiently higher than that to deliver heat at the rate that balances the rate at which heat is lost from the walls etc.
The flow temperature was a bit higher than this (as this was the surface temperature of just one radiator), but it illustrates the point. ‘Low and slow’ heating works, and delivers greater comfort.
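For the curious, the ‘low and slow’ point can be sketched numerically. Radiator outputs are conventionally rated at a 50C temperature difference and scale roughly as (dT/50) to the power 1.3; the exponent is a common approximation and the radiator rating below is hypothetical:

```python
def radiator_output(rated_watts, room_temp_c, radiator_temp_c):
    """Approximate radiator heat output. Radiators are conventionally
    rated at a 50C difference; output scales roughly as (dT/50)**1.3."""
    delta_t = radiator_temp_c - room_temp_c
    return rated_watts * (delta_t / 50) ** 1.3

rated = 2000  # watts at the standard 50C difference (hypothetical radiator)
hot = radiator_output(rated, room_temp_c=21, radiator_temp_c=70)   # boiler-style
warm = radiator_output(rated, room_temp_c=21, radiator_temp_c=30)  # heat-pump-style

print(f"70C radiator: {hot:.0f} W;  30C radiator: {warm:.0f} W")
# A cooler radiator delivers less power, so heat pump systems compensate
# with larger radiators and longer run times: 'low and slow'.
```

A radiator at 30C still heats a 21C room; it just does so more gently, which is why heat pump installations often use larger radiators running for longer.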
Explaining why the bills don’t need to rise going from an old boiler to a heat pump
In this video, I used simple maths to show why a heat pump shouldn’t raise electricity bills even in an old house (if properly installed), and even with our high UK electricity prices.
I slightly rushed the last part on the relative costs explanation. In the 3 bed semi example, I needed 4,000 kWh of electricity, say at 28p/kWh, totalling £1,120/yr to run the heat pump. Assuming a 70% efficient gas boiler, we would need to consume 17,143 kWh of gas to deliver 12,000 kWh of heat (as 70% of 17,143 equals 12,000). So I’d pay 17,143 kWh x 7p/kWh = £1,200 with the old gas boiler, a little bit more than with the heat pump.
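For anyone who wants to redo the arithmetic, here is the comparison as a minimal Python sketch. The prices, boiler efficiency and heat demand are just the illustrative figures from the example above, not fixed tariffs:

```python
# Illustrative 3-bed semi example: heat pump vs old gas boiler running costs.
HEAT_DEMAND_KWH = 12_000   # annual heat needed by the house
SCOP = 3.0                 # heat pump seasonal efficiency (12,000 / 4,000)
ELEC_PRICE = 0.28          # £/kWh electricity
GAS_PRICE = 0.07           # £/kWh gas
BOILER_EFF = 0.70          # efficiency of the old gas boiler

# Heat pump: electricity in = heat out / SCOP
elec_used = HEAT_DEMAND_KWH / SCOP            # 4,000 kWh
heat_pump_cost = elec_used * ELEC_PRICE       # £1,120

# Gas boiler: gas in = heat out / efficiency
gas_used = HEAT_DEMAND_KWH / BOILER_EFF       # ≈ 17,143 kWh of gas
boiler_cost = gas_used * GAS_PRICE            # ≈ £1,200

print(f"Heat pump: £{heat_pump_cost:.0f}/yr, old boiler: £{boiler_cost:.0f}/yr")
```

The key point is visible in the two divisions: the heat pump divides the heat demand by 3, the boiler divides it by 0.7, so even at four times the unit price electricity comes out slightly cheaper.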
As the ratio of the electricity unit price to the gas unit price comes down, as it assuredly will, the economic advantage of moving to a heat pump will only grow (quite apart from all the other advantages: massive carbon emission reductions; a more comfortable home; independence from petro-states for energy needs, as the grid itself is increasingly decarbonised).
Postscript
The organiser of the visit, Sylvia, sent me a lovely message afterwards:
“Hi Richard, just wanted to thank you for a really interesting tour yesterday… It’s so generous of you to share your home and your experience like that, we were all impressed at how well it was working in a house like yours. Shows how much bad info there is around! You did a good job of explaining some difficult concepts too! I think we all came away inspired… Even though we may not be able to make the change at once… So many thanks from us all!”
I really enjoyed the experience too, with so many great questions.
A question came up about microbore, and I gave a reasonable answer, I felt, but Heat Geek provides an expert explanation of the issues and potential solutions here > https://www.heatgeek.com/what-to-do-with-microbore-pipework-on-heat-pumps/ . Whether you are a heat-pump-curious homeowner or, especially, a heating engineer, Heat Geek will have answers to most of your questions, and also provides training and support for those in the industry wishing to move from gas boilers to heat pumps.
It’s worth noting that NESTA provides a ‘Visit A Heat Pump’ scheme that connects those like me willing to host house visits with those who would like to hear from someone who has a heat pump. I have hosted visits using this scheme and will do more, but it’s also nice to use local networks to organise visits, e.g. through climate groups, churches, Rotary, or whoever.
Technical Note
For a deep dive on how radiators deliver their heat, a scientific explanation is provided in Using Radiators with Heat Pumps by Michael de Podesta.
I’ve been listening to coverage of Holocaust Day on BBC Radio and elsewhere. A lot of the coverage rightly centres around the stories of brave survivors who somehow lived to tell their stories of life and death at Auschwitz-Birkenau.
I heard no mention of the camps whose only function was murder soon after the arrival of the trains: the Extermination or Death Camps built for the Nazis’ Aktion Reinhard plan – Belzec, Sobibor and Treblinka.
Why is this? I want to try to address this question.
In Concentration Camps like Buchenwald and numerous others, dehumanisation and ‘death by work’ was the goal. The Nazis wanted a financial return to run alongside their goal of genocide. This meant inhumane treatment and lodging but, as a result, there were buildings and other physical records of life and death at the camps.
For those of us trying to make sense of this still recent horror, it also meant that the few that did survive could offer some kind of hope. An emotional release from the darkness. In the words of Primo Levi “To survive is to defy those who would wish to see you erased from existence.”
The greatest focus is naturally on Auschwitz-Birkenau, which was numerically the most deadly of the camps, but it is also unique in having been both a Concentration Camp and an Extermination Camp. This creates an ambiguity about how to talk about this particular camp. It can be difficult to navigate (and explains why some commentators refer to it as a Concentration Camp and fail to mention the Extermination part). It can also enable Holocaust deniers to create their own wicked narratives.
As Channel 4 News said tonight, the story of the Holocaust is not a story of those who lived, because the norm was that most died: the mass shootings all over Europe that preceded the camps (often eagerly supported by local antisemitic neighbours), and the build-up to the industrialisation of genocide. In all, the Holocaust created six million stories of lives brutally taken.
Laurence Rees remarked on the lack of attention to the purely Extermination Camps in his seminal book on the Holocaust (Auschwitz: The Nazis & The ‘Final Solution’, BBC Books, 2005):
“Visitors to the sites of Belzec, Sobibor and Treblinka (of whom there are far, far fewer than travel to Auschwitz) are shocked by how tiny these killing camps were. A total of around 1.7 million people were murdered in these three camps – 600,000 more than the murder toll of Auschwitz – and yet all three could fit into the area of Auschwitz-Birkenau with room to spare. In a murder process that is an affront to human dignity at almost every level, one of the greatest affronts – and this may seem illogical unless you have actually been there – is that so many people were killed in such a small area.”
I think that news outlets, and most of us actually, find it simply unbearable, beyond our comprehension, to even think about the monstrosity of the industrial murder of so many. Belzec, Sobibor and Treblinka seem to leave a blank space in our historical remembrances, because there are apparently no stories of survival amongst the horror to tell. Yet even that is not quite true.
One of the most remarkable stories of survival was the ‘Treblinka Revolt’ by Samuel Willenberg and many others, on the 2nd August 1943. These were victims who actively rebelled, with great purpose and planning, against their enforced passivity.
Why is this story not told each year?
Even though their numbers were small when tallied against the huge numbers who were murdered, we see in this story a willingness to stand up and be counted. A lesson to all of us who have infinitely more agency to confront hate and division.
Ultimately, the Nazis were confronted and defeated in their war against humanity, and there lies hope too. The good guys won.
Those who seek to divide in the name of an ideology, to achieve absolute autocratic power, will always dream of some distorted vision of a homeland: one that is ‘cleansed’ and made uniform in many ways, even in terms of artistic expression. These people hate diversity.
Yet we know that nature is most successful at its most diverse. Monocultures wither and die. Human society is no different. David McWilliams’s recent brilliant book ‘Money – A Story of Humanity’ gives numerous examples of how cultural plurality provides the spur for wealth and happiness. The cultural monoculture that Hitler dreamed of led to destruction not hope and happiness.
Is hope possible in the shadow of the Holocaust?
Assuredly it is, because humanity has shown that open societies with cultural diversity are the most successful, providing the basis for wealth and happiness. We must continually work to make this a reality, in our own time, faced with the latest incarnations of monoculturalists and autocrats. History teaches us that open, multicultural societies will always prevail in the end, and are worth defending.
‘Climate Models Can’t Explain What’s Happening to Earth: Global warming is moving faster than the best models can keep a handle on’ is the headline of an article in The Atlantic by Zoë Schlanger [1].
The content of the article does not justify the clickbait headline, which should instead read
‘Climate Models Haven’t Yet Explained the Anomalous Global Mean Surface Temperature of 2023’.
Gavin Schmidt authored an earlier comment piece in Nature [2] with a similarly hyped-up title (“can’t” is not the same as “haven’t yet”). He states very clearly in a discussion with Andy Revkin [3] that he fully expects the anomaly to be explained in due course through retrospective modelling using additional data. It’s worth noting that Zeke Hausfather (who also appears in Revkin’s discussion) said in a Carbon Brief article [4] that 2023 “is broadly in line with projections from the latest generation of climate models” and that there is “a risk of conflating shorter-term climate variability with longer-term changes – a pitfall that the climate science community has encountered before”.
It is not surprising there are anomalous changes in a single year. After all, climate change was historically considered by climate science as a discernible change in averaged weather over a 30 year period, precisely to eliminate inter-annual variability! Now, we have been pumping man-made carbon emissions into the atmosphere at such an unprecedented rate we don’t have to wait 30 years to see the signal.
If you look at the historical record of global mean surface temperature, it goes up and down for a lot of reasons. A lot of it has to do with the heat churning through the oceans, sometimes burping some heat out, sometimes swallowing some, but not creating additional heat. So the trend line is clearly rising and the models are excellent in modelling the trend line. The variations are superimposed on a rising trend. Nothing to see here, at this level of discussion.
The climate scientists are also, usually, pretty good at anticipating the ups and downs that come from El Niño, La Niña, volcanic eruptions, etc. (Gavin Schmidt and others do annual ‘forecasts’ of the expected variability based on this knowledge). That is what triggered the concern at not seeing 2023 coming, but why expect to get it right 100% of the time?
Don’t confuse this area of investigation with extreme weather attribution, which addresses regional (i.e. sub-global) and time-limited (less than a year) extreme events. Weather is not climate, but climate influences weather. So it is possible, using a combination of historic weather data and climate models, to put a number on the probability of an extreme event and compare it with how probable it was in the past. So, 100-year events can become 10-year events, for example. This is what the World Weather Attribution service provides. The rarer the event, the greater the uncertainties (because there is less historic data to work with), but it is clear that in many cases extreme weather events are becoming more frequent in our warming world, which is no surprise at all, based purely on statistical reasoning (The Royal Statistical Society explain here).
So back to The Atlantic piece.
The issue, I feel, is that journalists and lay people can’t abide uncertainty. What are the scientists not telling us? In general people want certainty, and often they will choose based mostly on their own values and biases rather than expert judgement. In the case of the 2023 anomaly, the choice seems to be between “it’s certainly much worse than the modellers can model”, “it’s certainly catastrophic”, “it’s certainly ok, nothing to see here”, or something else. All without defining “it” or providing any margin of error on “certainly”. Scientists, by contrast, have to navigate uncertainty every day.
The fact is that we know a lot but not everything. There is a spectrum between complete certainty and complete ignorance. On this spectrum, we know:
a lot ‘that is established beyond any doubt’ (e.g. increasing carbon dioxide emissions will increase global mean surface temperatures);
other things that ‘are established outcomes, but currently with uncertainties as to how much and how fast’ (e.g. sea-level rise as a result of global warming and melting of ice sheets, which will continue long after we get to net zero, before it reaches some yet-to-be-determined new equilibrium level);
and others that ‘currently, have huge uncertainties attached to them’ (e.g. the net amount of carbon in the biosphere that will be released into the atmosphere through a combination of a warming planet, agriculture and other changes – we don’t even know for sure if it’s net positive or negative by 2050 at this stage given the uncertainties in negative and positive contributions).
So we can explain a lot about what’s happening to Earth, we just have to accept that there are areas which have significant uncertainties attached to them currently, and in some cases maybe forever. Not knowing some things is not the same as knowing nothing, and not the same as not being able to refine our approaches either to reduce the levels of uncertainty, or to find ways to address those uncertainties (e.g. through adaptation) to mitigate their impacts. Don’t put it all on climate models to do all the lifting here.
The current climate projections are much more precise than, say, projections of stock market prices in 5 or 10 years, but we don’t turn the latter into an angst-ridden debate about the unpredictability of the markets. We consider the risks and take action. On climate, we have enough data to make decisions in many areas (e.g. when it would be prudent to build a new, larger Thames Barrier), by using a hybrid form of decision making within which the climate models are just one input. Even at the prosaic level of our dwellings, we manage risk. I didn’t wait for certainty as to when the old gas boiler would pack up before we installed a super-efficient heat pump – no, we did it prudently well beforehand – to avoid the risk of being forced into a bad decision (getting a new gas boiler). We managed the risks.
Climate models have been evolving to include more aspects of the Earth System, how these are coupled together, and to enhance the granularity of the modelling (see Resources), but there is no suggestion that some missing process is required to explain the 2023 uptick; more probably there is missing data, which is not the same thing. There is a side commentary in [4] involving input from Professor Tim Palmer calling for ‘exa-scale’ computing, but Gavin Schmidt pushes back on the cost-effectiveness of such a path; there are many questions we must address, and can, with current models.
There are always uncertainties arising from a whole range of factors (both model-generated ones, and socio-economic inputs, e.g. how fast we will stop burning fossil fuels in our homes and cars; that’s a model input, not a model design issue). There is possibly nothing to see here (in the 2023 anomaly), but it could be something significant. Either way, it doesn’t justify the hyperbole of The Atlantic’s headline.
If we globally are waiting for ‘certainty’ before we are prepared to act with urgency, we are completely misunderstanding how we should be managing the risks of man-made global warming.
We certainly should not, at this stage at least, be regarding what happened in 2023 as an extra spur to action. Don’t blame climate models for not having raised a red flag before or urgently enough – which is the subtext of the angst over 2023.
The climate scientists will investigate and no doubt tell us why 2023 was anomalous – merely statistical variability or something else – in due course. It is not really a topic where the public has even the slightest ability to contribute meaningfully to resolving the question. It might be better if instead The Atlantic was publishing pieces addressing the issue of what questions climate models should be addressing (e.g. contrasting the building of sea walls, managed retreat and other responses to sea level rise), where everyone can and should have a voice (as Erica Thompson discusses in her book [5]).
Climate scientists have been issuing the warning memo for decades, at least since the 1979 Charney Report, with broadly the same message. We read the memo, but then failed to act with anything like the urgency and agency required. Don’t blame them or their models for the lack of action. Ok, so the advance of models has allowed more diverse questions to be addressed (e.g. trends in flooding risks), but the core message remains essentially the same.
And please, don’t use 2023 as another pearl clutching moment for another ‘debate’ about how terrible things are, and how we need more research to enable us to take action; but then turn our heads away again. Until the next headline, of course.
(c) Richard W. Erskine, 2025
REFERENCES
[1] ‘Climate Models Can’t Explain What’s Happening to Earth: Global warming is moving faster than the best models can keep a handle on’, Zoë Schlanger, 6th January 2025, The Atlantic.
[3] Andy Revkin speaks with longtime NASA climate scientist Gavin Schmidt about his Nature commentary on what missing factors may be behind 2023’s shocking ocean and atmosphere temperature spikes, YouTube, https://www.youtube.com/live/AYknM2qtRp4?si=fsq0y-XkYG58ITw5
[5] ‘Escape from Model Land: How mathematical models can lead us astray and what we can do about it’, Erica Thompson, 2022, Basic Books.
I wrote a while ago about the project to replace a 25 year old creaking gas boiler with an Air Source Heat Pump. Today we had our annual service, provided by Cotswold Energy Group, the original installer. All clear for another year.
The main advice we follow is not to fiddle with the as-installed setup at all – we let it do its thing! We don’t even adjust the controls (TRVs) on radiators because the system was well ‘balanced’ as part of the commissioning of the system. The only thing I look at periodically is the performance data via my phone or PC. If there was some malfunction it would no doubt show up in a drop in weekly performance data. Mostly we forget the system is there.
So I thought I’d just provide a summary of the 2024 performance and running costs.
Summary
A recap. Our house is a large semi-detached dwelling with a total floor area of 251 m² over three floors. It has solid walls [1], and mostly single-glazed sash windows. Only the loft insulation and brushes on the sash windows are additional retrofit ‘fabric’ measures [2].
The total heat delivered to the house over the last 12 months (directly metered via gauges on the flow and return pipes) was 29,236 kWh (kilowatt-hours), and that was achieved with the input of 7,942 kWh of electricity (again, using dedicated metering). So the annual performance (the so-called Seasonal Coefficient Of Performance, Seasonal COP or SCOP) is found by dividing the first number by the second, giving a SCOP of 3.68 for 2024. That can be thought of as an efficiency (output divided by input) of 368%. This apparently magical feat (obtaining an efficiency of greater than 100%) is achieved because the heat pump harvests energy from the ambient environment (in our case, the air), and concentrates it to raise its temperature.
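The SCOP calculation is just the ratio of the two metered annual totals; as a quick check in Python, using the figures quoted above:

```python
# SCOP = heat delivered / electricity consumed, from the two metered totals
heat_delivered_kwh = 29_236   # metered heat into the house
electricity_in_kwh = 7_942    # metered electricity into the heat pump

scop = heat_delivered_kwh / electricity_in_kwh
print(f"SCOP = {scop:.2f} (an 'efficiency' of {scop:.0%})")
```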
Looking at the data on a monthly basis, I found that the worst month was January, with a COP of 3.06. There will have been days when it was worse than this, but even on a daily basis it rarely dropped below 2.5 – just a handful of days in the year.
If we’d stuck with our old boiler which optimistically ran at 72% efficiency [3], then the primary energy required (in form of gas) would be equivalent to 40,606 kWh (that is 29,236 divided by 0.72).
The result is that we are saving about £480 a year by ditching the old boiler, and we have also achieved a more comfortable, evenly heated home (rather than the roller-coaster heating we had with the gas boiler).
With the old system, hot water to our shower came via a gravity fed system and needed a little pump to improve the pressure (noisy, and pressure not that great). With the heat pump and new water tank we now get our hot water under mains pressure. This was one of the most surprising benefits of our move to a heat pump system.
Running cost comparisons
Taking the unit prices for gas and electricity that applied for us for most of 2024 (5.9p/kWh and 22.7p/kWh, respectively), and the standing charges (28.21p/day and 58.63p/day, respectively), the cost of heating (mainly space heating, but some water heating too) was £2,017 in 2024.
Had we stayed with our old gas boiler, it would have been about £2,500 to do the same job. Probably more because the system was creaking and unlikely to have performed according to the published performance figures [3].
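As a check on these figures, here is the whole comparison as a short Python sketch, assuming a full year at the quoted unit prices and standing charges (actual bills will differ slightly, as tariffs changed during the year):

```python
# 2024 running-cost comparison: heat pump vs the old 72% efficient gas boiler.
heat_kwh = 29_236                # metered heat delivered to the house
elec_kwh = 7_942                 # metered electricity used by the heat pump
boiler_eff = 0.72
gas_kwh = heat_kwh / boiler_eff  # ≈ 40,606 kWh of gas for the same heat

ELEC_UNIT, GAS_UNIT = 0.227, 0.059       # £/kWh unit prices
ELEC_STAND, GAS_STAND = 0.5863, 0.2821   # £/day standing charges

# Each scenario pays its fuel's unit cost plus that fuel's standing charge.
heat_pump_cost = elec_kwh * ELEC_UNIT + 365 * ELEC_STAND   # ≈ £2,017
boiler_cost = gas_kwh * GAS_UNIT + 365 * GAS_STAND         # ≈ £2,500

print(f"Heat pump ≈ £{heat_pump_cost:.0f}, gas boiler ≈ £{boiler_cost:.0f}, "
      f"saving ≈ £{boiler_cost - heat_pump_cost:.0f}")
```

The saving comes out at a little over £480 a year, consistent with the summary above.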
Conclusion
Yes, you can heat any old building with a heat pump without having to make any significant or disruptive changes to the insulation.
Of course, where you can add insulation to a house heated by a gas boiler you can reduce the rate of heat loss and therefore the heating bills. The same is true of heating with a heat pump. But you will find that as you try to reduce the heat loss further and further, the costs will escalate, as I discussed in Insulate Britain: Yes, But by How Much?.
No, it wasn’t difficult to install (whatever ‘noises off’ you may hear from the perennially sceptical ‘You & Yours’ [4], and other naysayers), if you engage professionals with the experience, as you would do for any important job.
We remain very happy with our Air Source Heat Pump and our suppliers. We have a more comfortable house, that is cheaper to run than the boiler it replaced (even given the unjustifiable ratio of electricity/gas unit costs), is very reliable, and we have better showers.
No fiddling, or ‘intelligent home’ tech, required. Keeping it simple.
What’s not to like? You won’t regret it.
(c) Richard W. Erskine, 2025
NOTES
[1] The 200 year old walls are termed ‘solid’, but are actually two courses of Cotswold stone with in-filled rubble, providing an element of air gap. The overall wall thickness at ground level is about 600mm. This kind of wall tends to perform better than is often assumed.
[2] I get a little frustrated with the question “is your home insulated?” If one lived in an imaginary house with no roof or walls then the answer would be no! The fact is that any structure that is enclosed provides insulation. The question is really shorthand for “has any insulation been added to the fabric of the building above and beyond the original construction?”. Most people now have some form of loft insulation that wasn’t original, but it is worth ensuring you have it up to the recommended depth (going much beyond that is not really needed, as there is a law of diminishing returns). Draught-proofing is a really good idea, as it reduces the air turnover in the house, improves comfort levels near windows and doors, and is relatively inexpensive. Extraction fans in kitchens and bathrooms are also important, both in reducing the risk of mould and because moist air needs more energy to warm it!
[3] You can find out the estimated efficiency of your old boiler using the Product Characteristics Database search function. Our old boiler was a Glow-worm Hideaway 120B. I’ve taken the slightly higher figure presented for winter of 72% (rounded), even though this is likely to be optimistic for a 25 year old boiler.
[4] ‘You & Yours’ is BBC Radio 4’s consumer affairs programme https://www.bbc.co.uk/programmes/b006qps9 – Today (6th Jan 2025), during a segment about heat pumps, we heard about the forthcoming new homes standard, and research being done with Barretts and Salford University. I was astonished to hear that with their new build standards they were seeing a performance (they were referring to the SCOP) of 3 for the air-source heat pumps used to heat the new homes. That’s right, worse than our 200 year old house! For new homes, I would suggest a SCOP of 4 is an absolute minimum target, and I’m sure that those clever chaps at Heat Geek would be aiming for 5.
The researchers also “discovered” that the efficiency of heat pumps improves if they are kept on rather than used like gas boilers (put on in the morning for a few hours and then again in the afternoon). Who knew? Anyone with any knowledge or expertise in heat pumps, that’s who. In our house, the thermostat in the living room is set to 21C from 0630-2230 and set back to 18C from 2230-0630. The heat pump works only as hard as it needs to (based on the external temperature) to achieve this goal, and does so by changing the ‘flow temperature’ as needed. In our system the maximum design flow temperature of 50C is reached only on the very coldest days (perhaps a few days a year). Over the last month it has averaged about 35C and only once gone above 40C (a few days ago it was 42C).
It then discussed the use of radiant heating for those small dwellings “unsuitable” for a heat pump! I know that Nathan Gambling of BetaTalk would probably be jumping up and down at this point! The air-to-air kind of Air-Source Heat Pump can be quite a compact system, and can be fitted to any dwelling – for a small flat, at a lower cost than a gas boiler. Direct electric heating may have a niche role in super-efficient Passivhaus homes, we’ll see, but it is not true to claim that heat pumps cannot be used in small dwellings.
So I haven’t a clue about the quality of the research referred to, but based on this admittedly brief segment, it did raise some concerns as to the research brief. And it is clear that the producers and lead presenter on You & Yours are still unable to accept that heat pumps are the primary game in town, so they will continue to find ways to sow doubts.
I was flabbergasted this morning (3rd January 2025) to hear a segment on BBC Radio 4 Today, where there was a discussion prompted by widespread calls from X users, and Elon Musk himself, for a National Inquiry into rape gangs in Oldham. We are used to the Daily Mail often seeming to set the agenda for news commentary, but X, the platform that all right-minded people are fleeing from?
This prompted me to write this piece about a concern I have had about BBC news coverage for some time.
If you watch or listen to BBC news coverage and political commentary, on TV or Radio, and compare it with, say, Channel 4 News, you will for the most part detect a distinct difference in tone and content.
On Channel 4 News, if there is a news flare up about some talking point that is being pushed by the right wing media, Elon Musk’s X, or wherever, there will be a strong push back, but not so much on the BBC.
We have seen this with the riots in the summer and now with calls for a Public Inquiry into sex gangs in Oldham. In this case it is not about the merits or demerits of a national inquiry, but the way this story has been handled, even though it is clear that malign actors are using the terrible experiences of young girls in Oldham to pursue a far-right agenda. Instead of picking up the framing “Why is there not a National Inquiry?” (with the dog-whistle of implied cover-up), the BBC journalists could reframe it as “We’ve had multiple reports on grooming gangs; let’s explore what success there has been, if any, in implementing the many recommendations”. But no, the BBC News coverage seems to cut-and-paste the headlines screaming from the right wing papers whose primary interest seems to be undermining the Government.
How is that “balance”?
Impartiality is not simply taking opinion at face value, but should involve challenging the underlying assumptions and framing that is being promoted. Sharp interrogation of the assumptions and framing are the norm on Channel 4. On the BBC too often it comes across as an acceptance of the framing, followed by a bland commentary and exchange of opinions. This can have the appearance of a robust discussion but is anything but if it fails to challenge underlying assumptions and motivations. It comes across as ‘he said, she said’.
The BBC will counter by saying they are different because they have a mandate to ensure impartiality of its coverage, but how do they interpret this duty?
We saw how on climate change reporting over a decade ago, that ‘false balance’ was practised, as Professor Steve Jones pointed out in his July 2011 report on science reporting at the BBC. He noted:
“in their desire to give an objective account of what appears to be an emerging controversy … [they] face the danger of being trapped into false balance; into giving equal coverage to the views of a determined but deluded minority”. This problem of false balance was particularly pronounced when it came to climate change, because ‘denialists’ use rhetoric ‘to give the appearance of debate’,
and as a result the BBC pursued an “over-rigid” insistence on due impartiality and risked giving “undue attention to marginal opinion” on scientific questions. A commitment to accuracy cannot be overridden by the claim of impartiality.
Three years later, the Science and Technology Select Committee conducted a year long inquiry on the BBC’s coverage of climate science and found that:
“BBC News teams continue to make mistakes in their coverage of climate science by giving opinions and scientific fact the same weight”.
We saw how the coverage of Brexit was too often a parade of opinions, avoiding any real substantive argument or evidence base. Claims that there would be no harm to our economy were handled very much in this fashion. Only well after the referendum, when the Conservative Government was embroiled in increasingly convoluted attempts to avoid the inevitable, did a real interrogation occur of the flawed claims (such as the oft-repeated claim that WTO rules could be used to side step EU rules).
There was no shortage of experts challenging the pro-Brexit claims about trade, such as Bristol University Law School, but these never got a proper airing on the BBC, which was once again desperate (do you spot the pattern?) to appear balanced, but achieving exactly the opposite. It became a competition of slogans, and the devil, as we all know, always has the better tunes – and the better slogans (‘Take Back Control’ won the day).
After their success in getting the UK to leave the EU – our nearest and largest trading bloc – and all the harm that has ensued (as warned of by experts), the same right-wing actors have now gained confidence. They have completely dropped the pretence: they are now nakedly espousing a far-right agenda. They want to use populist attacks on the Government to undermine our democracy. Now Trump and Musk – whom no one in the UK has voted for – are putting their considerable weight behind Farage’s Reform Party.
Of course, any comparison with 1920s Germany is regarded as alarmist and scaremongering.
“But that could never happen in the UK?”. Do we really believe that we are more cultured and sophisticated, and less susceptible to demagogues, than the Germany that sleepwalked into authoritarian rule?
In their rise to power, the Nazis made the same use of disinformation, attacks on institutions and ‘othering’ of minorities (the Jews). Remember that the Nazis used The Protocols of the Elders of Zion (1903) in their campaign of antisemitism, even while they knew it was a forgery. The meme has never really left us, even after the horrors of the Holocaust.
This toxic conspiracy theory has transformed into a modern form: a conspiracy of a world order that controls everything and is led by Jews. It is a core belief amongst the far right conspiracy theorists who Trump has empowered, and who lurk in the wings, aiming to undermine every Western democracy, including ours.
It is no surprise then that Nigel Farage was called out in the Guardian (2019) for criticising George Soros, an emblematic placeholder in this imagined world Government:
Farage said Soros sought “to undermine democracy and to fundamentally change the makeup, demographically, of the whole European continent”. The latter claim directly echoes conspiracy theories against Soros made by far-right groups such as Generation Identity.
His tactic of using conspiracy claims was evident again in comments he made following the murder of three girls in Southport in the summer of 2024 (as reported in The Independent, 19th August 2024). Half of voters held him directly responsible for the riots, after he accused the police of withholding the truth from the public and repeated misinformation which claimed the suspect was under surveillance by security services. He was criticised for “amplifying false information” by spreading a theory first suggested by influencers like Andrew Tate, and then failed wholly to condemn the riots. “I want to be clear: this is not leadership. It is deeply irresponsible and dangerous,” said one critic quoted in the report.
It is no wonder there was much despair at Mishal Husain leaving the BBC, because she seemed to be an outlier – a journalist on BBC Radio 4 Today who departed from the formulaic banality of faux balance, and instead engaged in substantive argument. See how she conducted interviews with Nigel Farage on his claim that no one speaks English in Oldham or on the Reform Party’s policy to freeze non-essential migration. If she can do it, why can’t the others? Victoria Derbyshire on BBC Newsnight is another in an all too small group.
Yet overall the BBC is failing in its duty as our national broadcaster, by enabling marginal opinion and not holding malign actors to account. Its approach to news reporting requires a fundamental review. It could start by endeavouring to emulate Mishal Husain’s methods, by simply not putting up with those who engage in dogwhistle politics. Call it out!
In the light of the UK Government’s new report Clean Power 2030 Action Plan: A new era of clean electricity (13th Dec 2024), and the UK’s commitment to achieve net zero by 2050, this talk looks at the UK energy system: how it has evolved from 2010 to today, and how it will evolve through 2030 towards 2050.
It is aimed at a non-specialist lay audience, and avoids graphs and technical jargon.
It provides a new approach to communicating the complexities of the transition from a fossil fuel dominated energy system to one dominated by renewables.
For those who are well versed in energy matters, it is hoped it will support their efforts in engaging with a wider audience on the path to net zero.
[This NEW VERSION is one that was presented in person to the Cam & Dursley branch of the University of the 3rd Age (U3A) on 28th November 2024. It is a complete reworking of a talk I gave in February 2024 https://essaysconcerning.com/2024/02/22/greening-our-energy-how-soon/ . The recorded version included here is a slightly amended version of the U3A talk, including a little more on the Government’s Clean Power in 2030 plans, for example].
History taught in my young days often gave us a cartoon version of the past. A great example is that of King Canute who showed to his flattering courtiers that he could not turn back the tide; that his secular powers were no match for the almighty. It was a teaching moment by a wise King.
But of course this was mutated into a comical converse version: it became the ‘poor old King Canute tried to turn back the tide, ha ha’ version of history.
King Donald, as he no doubt sees himself, also surrounded by flattering courtiers, believes he can turn back the tide of the green transition, but he can’t and he must not.
“Clean energy is like a giant boulder that’s already reached its tipping point and is now rolling downhill toward a greener future. It’s got millions of hands on it, from individuals to some of the biggest countries, cities, and companies in the world. It could still be slowed by actions of governments and corporations—delays that will have serious consequences for people and planet alike—but it can’t be stopped. Gravity, history, and progress are on our side.“
The cost of renewables has fallen exponentially, and while there is a long way to go, the boulder has passed a tipping point – we are on the rising trend of a typical S-curve of transition:
“The S-curve is a well-established phenomenon where a successful new technology reaches a certain catalytic tipping point (typically 5-10% market share), and then rapidly reaches a high market share (i.e. 50%+) within just a couple more years once past this tipping point.”
So, like the tide the green transition cannot be turned back.
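The S-curve dynamic quoted above can be sketched with a simple logistic function. This is a minimal sketch with illustrative parameters (a 50%-share midpoint year and a growth rate chosen purely for demonstration – not real market data):

```python
import math

def s_curve(year, midpoint_year, growth_rate):
    """Logistic S-curve: market share (0..1) as a function of time."""
    return 1.0 / (1.0 + math.exp(-growth_rate * (year - midpoint_year)))

# Illustrative assumptions: 50% market share reached in 2030, rate 0.5/year.
MIDPOINT, RATE = 2030, 0.5

for year in range(2020, 2041, 5):
    share = s_curve(year, MIDPOINT, RATE)
    print(f"{year}: {share:.0%}")
```

With these assumed parameters, market share moves from under 10% to over 90% in roughly a decade, which is the rapid post-tipping-point growth the quoted passage describes.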
I wonder what future historians will make of King Donald?
Schoolchildren will no doubt laugh at his scientific illiteracy and attempts to hold back the tide, but in this case they will reflect an accurate interpretation of history.
Unlike the wise King Canute, the foolish King Donald truly believes he can hold back the green tide, but while he can throw spanners in the works – and S-curves always have ups and downs along the way – he cannot hold back the tide.
Yet that is no reason for us to stand back; all of us can give that boulder a helping hand, to speed it on its way.
This post is a section / extract from my book Trusted Knowledge in a digital and fragmented world of work, which is available as a hardback, paperback or e-book on Amazon. Several of the previous sections in the book provide the underlying know-how on how to deal with difficult areas such as access to sensitive information. I believe that the book has relevance to all sectors, but it is especially timely given the UK Government’s call for a national conversation in its aims towards Creating a new 10-Year Health Plan.

Here follows the section Healthcare’s Knowledge Architecture, including (with her permission) the treatment pathway of my wife’s successful treatment for breast cancer from 25 years ago at Cheltenham General Hospital, starting with a description of key principles ...
I want to focus on the question of knowledge as it is shared within a healthcare system, as an exemplar of the opportunities to improve information and knowledge management. If we look at the regular clinical experience of patients within the UK’s National Health Service (NHS), one might expect that someone taken ill while on holiday could walk into any hospital at the other end of the country, and the clinicians there would be able to immediately access the information needed to assess them. This is not the case, despite many years of trying to fulfil this outcome, and billions spent on IT, including the disastrous failure of the National Programme for IT (NPfIT).
Fragmentation of data and systems exists in many countries for a variety of reasons. In the USA, it is down to the privatised, metric-driven, insurance-funded fiefdoms which dominate the sector. Some countries, such as Finland, have managed to crack the problem, but this is the exception, not the rule.
Healthcare is a sector that is a great exemplar for both the challenges and opportunities of improved information and knowledge management, and I wanted to use it as a vehicle for illustrating several of the key principles that have been central to this book.
IT delivery across healthcare has been marked by a highly fragmented approach. Many healthcare organisations find that they have accumulated several hundred systems, creating a large number of data and document silos. Often, because the funds are not there to do a proper fix, another ‘bolt-on’ remedy will be implemented.
While this has caused operational issues, more fundamentally it has hampered the ability of clinicians to have a unified view of a patient or patient pathway, and this can have a significant impact on the quality of patient care.
Even when a basic unified medical record has been achieved, much of the associated data has remained inaccessible, and there is still an over-reliance on paper or on relatively primitive digital approaches to ‘unstructured’ data, which has remained stubbornly siloed. In many practices it is not a lack of IT solutions that is the issue, but a fragmented approach that has led to a lack of joined-up data and information, let alone knowledge.
In the section The Data Landscape, I discussed how transactional systems have tended to get the most attention and funding in organisations, whereas the more ‘knowledge oriented’ processes and systems are often neglected. This is also true in healthcare, but fragmentation has been common for all types of systems, particularly in cash-strapped hospitals; they have fallen back on a cottage industry approach to IT systems that has further fragmented the delivery of solutions.
This is not to advocate a top-down approach to systems. In the NHS this was tried and singularly failed because it imposed a rigid approach in areas where there should arguably be more freedom (such as in the procurement of software).
It could instead have had a more fundamental and achievable goal, to create standards (such as an information architecture) that facilitated data and information sharing, without being prescriptive on solutions delivery. What would a process look like that delivered such standardization to the NHS?
I want here to sketch out some aspects of the methodology outlined in this book, focusing the exposition on the ‘patient pathway’. The analysis starts with the identification of Essential Business Entities (EBEs): those entities that would continue to exist however management decided to reorganise, and that persist across several generations of systems. For healthcare, the following is an incomplete list of EBEs:
Admission
Appointment
Clinic
Clinical/Medical Image
Clinical Pathway
Consultant
Consultation
Discharge
Doctor
General Practitioner
Examination
Healthcare Provider
Hospital / Clinic
Investigative Procedure
Laboratory Test
Medical Record
Medical Specialist
Medication
Nurse
Observation
Patient
Payer
Pharmacy
Prescription
Regulator
Referral
Report
Treatment
For some EBEs, such as ‘Regulator’, there is no work related to it from the organisation’s perspective (because it is a pre-existing entity and not something that requires anything from the healthcare provider to create or manage it). For this reason, we are only interested in EBEs that generate work within the scope of the organisation (e.g. within a healthcare provider).
In addition, because we want to analyse patient pathways, the focus is on those EBEs that are most relevant in the current context, such as Referral, Observation and Patient. The Process Architecture below reflects this focus.
Figure – Patient Pathway Process Architecture
The aim of a process architecture is to indicate how each process ‘invokes’ possible subsequent processes; this is not a dataflow, or workflow, but a graph showing the possible sequence of invocations.
On a patient pathway such as cancer treatment and after-care, a particular process such as Make Examination can be invoked many times over several years, as the patient passes through the different interlocking cycles of referral, examination and treatment.
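As a rough sketch of this idea (the process names and invocation links below are illustrative assumptions, not the book’s actual architecture), a process architecture can be represented as a directed graph of possible invocations, with cycles capturing the interlocking loops of referral, examination and treatment:

```python
# Illustrative process architecture: each process maps to the processes it
# may invoke next. This is not a workflow or dataflow -- a process such as
# "Make Examination" can be re-entered many times over several years.
INVOCATIONS = {
    "Make Referral": ["Make Examination"],
    "Make Examination": ["Record Observation", "Plan Treatment"],
    "Record Observation": ["Make Examination", "Plan Treatment"],
    "Plan Treatment": ["Deliver Treatment"],
    "Deliver Treatment": ["Make Examination", "Discharge Patient"],
    "Discharge Patient": [],
}

def reachable(start):
    """All processes that can eventually be invoked from `start`."""
    seen, stack = set(), [start]
    while stack:
        proc = stack.pop()
        for nxt in INVOCATIONS.get(proc, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("Make Referral")))
```

Note that the cycles (examination leading to observation leading back to examination) are what distinguish this invocation graph from a simple linear workflow.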
My wife (whom I thank for giving permission to share her story) was diagnosed with breast cancer in 1999, soon after we had moved house. It was a terrible shock to the whole family. We visited the senior breast cancer consultant at Cheltenham Hospital and recall being the last to see him, late on a Friday afternoon. He sat with my wife and me for over an hour, and never once looked at his watch. He explained the options clearly. He calmed us down. The path ahead would be difficult but we could make it. He gave us the confidence we needed. I had read that the key to successful cancer care was not necessarily the things we imagine – the specific drugs used and other technicalities – but the communications and particularly the teamwork amongst the healthcare professionals.
Luckily for us, Mr. Bristol and the team at Cheltenham were exceptional. I know that there were occasions when how I acted, as part of the extended team, would be critical. For example, when my wife became drowsy and I knew from my briefings from the Breast Cancer Nurse that I needed to be alert to the possibility of low white cell count; and that a possible dash to the hospital and isolation ward might be called for. This happened twice during the chemotherapy, and we are forever grateful for the knowledge that the nurse instilled in us.
The usual process with chemotherapy for breast cancer was to use the milder drug first and only use the harsher drug for subsequent treatment. My wife agreed to take part in a clinical trial for a new protocol where these treatments were reversed; there would be a hard hit first, followed by a longer sequence with the milder drug.
The involvement in the trial meant that my wife was monitored and assessed long after most patients would have been fully discharged by the hospital – even those who had moved on to a post-chemo hormone suppressant treatment, such as Tamoxifen. In my wife’s case, she was only fully discharged after 10 years, and while the trial was completed some time ago, the data from her case continues to inform scientific research on breast cancer, more than 20 years after she was diagnosed.
In the summer of 2019, she received a request to access samples of tissue, held in storage, as part of continuing research; the documentary records themselves will be equally valuable for current and future research.
Figure – The Long Pathway of a Breast Cancer Patient
Protecting personal confidentiality is important, which is why some records need to be transferred in a way that does not identify the patient; this is preferable to simply deleting records after arbitrary retention periods, since deletion would limit the ability to carry out longitudinal studies and to assess the efficacy of treatments over long periods.
The duration of this pathway is very long compared to the waves of change that occur in IT. This emphasises the need for an approach to information management and the handling of records that is resilient to these changes, to minimise or even eliminate the need to transcribe and convert data and documents between systems on ever shorter cycles.
At the heart of the approach to handling this change is the ‘Electronic Health Record’ (EHR) – usually conceived of as the complete history of an individual’s clinical encounters: examinations, diagnoses, treatments – so perhaps better thought of as a dossier of health records. It is essentially the digital equivalent of the manila file that a General Practitioner would previously retain in your local surgery (at least in the UK), including correspondence with specialists and hospitals, referred to during treatments that could not be dealt with locally. Now, however, the EHR is seen as including literally everything – for example, the digital X-ray files created during an examination.
The idea of a single system being responsible for all aspects of a person’s clinical pathway is not realistic: there are specialist, dedicated systems for MRI (Magnetic Resonance Imaging) and for each discipline. What is possible is that each such system can share the records it generates on a secure network, and a central repository can add each record into a patient ‘dossier’ without interfering with the source system.
Figure – Conceptual Information Model
The dossier itself could be stored, and if necessary moved, in a non-proprietary form, such as XML using a standard structure applicable to health records. This can be modelled conceptually as in a Conceptual Information Model, as depicted above.
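To make this concrete, here is a minimal sketch of assembling such a dossier as non-proprietary XML, using Python’s standard library. The element and attribute names are illustrative assumptions, not a real health-record standard (such as HL7 FHIR or openEHR):

```python
import xml.etree.ElementTree as ET

def add_record(dossier, record_type, date, source_system, summary):
    """Append one record from a source system to the patient dossier,
    without interfering with the system that created it."""
    rec = ET.SubElement(dossier, "record",
                        type=record_type, date=date, source=source_system)
    rec.text = summary
    return rec

# Illustrative dossier keyed by a pseudonymous ID, not the patient's name,
# so records can be shared for research without identifying the patient.
dossier = ET.Element("dossier", patient_id="PSEUDO-12345")
add_record(dossier, "Referral", "1999-03-01", "GP-System", "Referred to breast clinic")
add_record(dossier, "Examination", "1999-03-10", "PACS", "Mammogram taken")
add_record(dossier, "Treatment", "1999-04-02", "Oncology-System", "Chemotherapy cycle 1")

print(ET.tostring(dossier, encoding="unicode"))
```

In practice the structure would follow an agreed standard applicable to health records – exactly the kind of non-prescriptive standardisation argued for earlier, where the standard governs sharing but not the choice of source systems.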
An individual’s health record will grow and grow, and in addition to the pathway discussed above will include interventions small and large – the cut on the knee, the treatment for a skin condition, and so on and so forth – from birth to death. So how would a health professional interact with this information?
It must be possible to filter and segment access according to the role of the health professional. A technician who is undertaking an X-ray does not need full access to the whole health record, whereas a general practitioner does. If it is a team meeting to discuss next steps for a cancer patient, then they all need to see the diagnosis, treatments and observations made by all professionals who have been part of the treatment pathway. They might interact with the information through a timeline view of the information, that could reveal something like the long pathway shown earlier, but allow them to zoom in and out of the timeline, and move back and forth in time.
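The role-based filtering and timeline view described here could be sketched as follows; the roles and their record-type permissions are illustrative assumptions only:

```python
# Which record types each role may see -- illustrative assumptions only.
ROLE_ACCESS = {
    "radiographer": {"Examination"},                    # narrow, task-specific view
    "gp": {"Referral", "Examination", "Diagnosis",
           "Treatment", "Observation"},                 # full view of the record
    "mdt": {"Diagnosis", "Treatment", "Observation"},   # multi-disciplinary team meeting
}

def timeline_view(records, role):
    """Return the records a role may see, in date order (a timeline view)."""
    allowed = ROLE_ACCESS[role]
    visible = [r for r in records if r["type"] in allowed]
    return sorted(visible, key=lambda r: r["date"])

records = [
    {"type": "Referral", "date": "1999-03-01"},
    {"type": "Examination", "date": "1999-03-10"},
    {"type": "Diagnosis", "date": "1999-03-15"},
    {"type": "Treatment", "date": "1999-04-02"},
]

print([r["type"] for r in timeline_view(records, "radiographer")])  # ['Examination']
print([r["type"] for r in timeline_view(records, "mdt")])  # ['Diagnosis', 'Treatment']
```

The same filtered, date-ordered list is what a zoomable timeline interface would render, so the access model and the presentation model can share one underlying query.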
To make this as much a knowledge platform as a data and information one, they need the ability to add notes and annotations to this timeline, to make sense of decisions made, and perhaps course changes made on the pathway. The pathway would reference the current protocols and standards being used, but also, in the case of an innovative trial, the basis for the decisions being made.
Quite separate from the particularities of one patient, work is done to manage and maintain those protocols and standards. There will be the development of new protocols based on evidence from published research as well as on the local knowledge of the team, or health trust, or at a national level. It is a continual process of review and challenge. In the UK, we have the National Institute for Health and Care Excellence (NICE), whose job is to evaluate the efficacy of drugs and treatments, based on the shared national experience. NICE will also assess the relative benefits of one treatment compared to another in terms of health outcomes as a function of costs. This sounds as if it could be a difficult role, and it is, but the NHS does not have an infinite budget. If one treatment will extend life by 3 years and costs £50 a week, while another will extend it by 5 years but costs £1,000 a week, there is a utilitarian argument that, for 10,000 people needing treatment, the first treatment is actually more cost-effective and, if budgets are capped, will deliver more years of life extension.
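The utilitarian arithmetic in this example can be made explicit. A minimal sketch, assuming (purely for illustration) that the cost of each treatment is incurred over the years of life gained, and that the capped budget is just enough to give all 10,000 patients the cheaper treatment:

```python
WEEKS_PER_YEAR = 52
PATIENTS = 10_000

def life_years_under_cap(years_gained, cost_per_week, budget):
    """Total life-years delivered when a capped budget limits how many
    patients can be treated (illustrative utilitarian arithmetic)."""
    cost_per_patient = years_gained * WEEKS_PER_YEAR * cost_per_week
    treatable = min(PATIENTS, budget // cost_per_patient)
    return treatable * years_gained, cost_per_patient

# Treatment A: +3 years at £50/week; Treatment B: +5 years at £1,000/week.
budget = 3 * WEEKS_PER_YEAR * 50 * PATIENTS   # cap: enough to give everyone A
ly_a, cost_a = life_years_under_cap(3, 50, budget)
ly_b, cost_b = life_years_under_cap(5, 1_000, budget)
print(f"Budget £{budget:,}: A delivers {ly_a:,} life-years, B delivers {ly_b:,}")
```

Under these assumptions the cheaper treatment delivers 30,000 life-years within the £78m cap, while the same budget funds only 300 courses of the dearer treatment, delivering 1,500 life-years.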
A body like NICE is powerful, because it is often difficult for one professional, or even a healthcare unit, to have the breadth of experience required to evaluate options, even with access to the published literature. Local knowledge sharing is crucial, but it should not be an excuse for an insular approach. The knowledge must flow laterally, between professionals, and vertically, to inform regional and national good practices.
Specific disciplines will have their own knowledge repositories that provide the commentary, stories and narrative that are essential in making sense of the hard numbers, or received wisdom. The radiologists who discuss their methods and approaches, and how they collaborate with other specialists, will contribute to the performance of the diverse teams, such as in cancer care.
Genomics data linked to health records for ‘big data analytics’ – and machine learning to find patterns in that data – will make increasingly large impacts on the choice of treatment protocols. However, as I have argued earlier, this must be seen as a tool that supports the collective knowledge of teams and disciplines. Humans can contextualise knowledge arrived at from personal experience, drug trials or big data. Only humans can summarise that knowledge in a way that combines science, ethics and the wishes of individual patients and families.
Ultimately, health professionals must make choices on treatment pathways, and it is they who must collectively curate a body of knowledge that makes sense of data and information and how to apply that knowledge in a specific context.
A fully developed architecture of knowledge – that respects the principles and practices outlined in this book – is therefore an essential feature of any healthcare system, where knowledge sharing and learning are valued.
Trusted knowledge is crucial to any organisation – working in healthcare or any other sector we might wish to explore – and I hope that this book will make a significant contribution to its advancement.
(In your own country’s Amazon marketplace, search on ‘Trusted Knowledge in a digital and fragmented world of work’ by Dr Richard William Erskine.)
I spent 30 years providing strategic advice and practical implementation in the field of enterprise information and knowledge management, working for leading global consultancies, alongside many colleagues and clients to whom I am indebted.
The work took me to many countries and to almost as many sectors, enabling me to see many of the common issues that arise in organisations of all shapes and sizes. Since my retirement in 2016 I have spent time reflecting on my experiences. I have now written a book Trusted Knowledge in a digital and fragmented world of work that provides a synthesis of the ideas and methods developed and deployed over the years.
The term ‘trusted knowledge’ is not intended to imply some immutable truth on a topic, even within an organisational context, but knowledge whose provenance is clear and can be challenged and refined over time, to enrich and improve an evolving body of knowledge.
The book is concerned with ‘communities of practice’, typically within some organisational or collaborative setting. It addresses the fundamental question: How is knowledge captured, shared and acted upon by practitioners in a field of work?
The reality is that few organisations have achieved mastery in answering this question. Too often silos are the reality, and lessons are neither captured nor learned. Information does not flow seamlessly through the value chain. Wheels are reinvented. Information that a practitioner needs is often too hard to find, whereas confidential information is carelessly released.
The book sets out the principles, practices and capabilities needed for enterprises to enhance the value of an evolving body of knowledge in what is an increasingly digital and fragmented world of work. This is in part about putting in place a consistent information management platform, but that is only the start. It is also about creating a culture where there is respect for the curation of knowledge, and its take up, which becomes embedded into a ‘learning organisation’.
To enable past colleagues and clients internationally with whom I have worked to have easy access to the book, it is published as an eBook and as a print-on-demand book on Amazon KDP. Others are encouraged to take a look and, if they like what they see, to pass the word on.
The book starts by covering the current state of affairs as it exists in many organisations, then sets out the foundations required of a sound approach to information management. For example: finding the right balance between accessibility and confidentiality; ensuring there is a shared language in use across the value chain; enabling effective reuse of intellectual capital – to name just a few. The book then brings the elements together by setting out an approach to creating an architecture of knowledge, and illustrates this with reference to the healthcare sector.
The book includes many anecdotes from client experiences (suitably anonymised), and nearly 100 figures I have used during my consulting career to help in articulating ideas and recommendations to assist clients.
In the Afterword, I conclude the book with the following reflections:
There is a lot of debate about the failure to improve productivity, in the UK at least. As we now live in a knowledge economy, I am convinced that adopting the approaches set out in this book would have a huge impact on personal and organisational productivity. Done well, it could also have a transformative effect on trust in the organisation: trust in the curation of knowledge, and in how it is used by the enterprise and its partners.

We have to stop imagining that the management and take-up of knowledge can be fixed by yet another silver bullet (the latest being AI). It will require sustained effort from management and a diversity of disciplines, with technology playing its part, but not dominating the conversation. There will be lots of achievements to celebrate and gain benefit from along the way; this is not about a big bang transformation. It will be a journey of continuing improvement, with numerous opportunities for innovation along the way.

I feel privileged if you have accepted me, through this book, as a guide and mentor on that journey.
(c) Richard W Erskine, 2024
The two pages from the book listing the chapters/ sections are reproduced below to give you more of a teaser for what is covered.
Today, World Environment Day, the UN Secretary-General António Guterres made a special address on climate action “A Moment of Truth” in New York. In a speech that covered the impacts already being felt from the delays in taking action, and the injustices this gives rise to, he turned his ire on fossil fuel companies and their enablers (my emphasis):
“Fourth and finally, we must directly confront those in the fossil fuel industry who have shown relentless zeal for obstructing progress – over decades. Billions of dollars have been thrown at distorting the truth, deceiving the public, and sowing doubt. I thank the academics and the activists, the journalists and the whistleblowers, who have exposed those tactics – often at great personal and professional risk. I call on leaders in the fossil fuel industry to understand that if you are not in the fast lane to clean energy transformation, you are driving your business into a dead end – and taking us all with you. Last year, the oil and gas industry invested a measly 2.5 percent of its total capital spending on clean energy.”
He then went on to say:
“Many in the fossil fuel industry have shamelessly greenwashed, even as they have sought to delay climate action – with lobbying, legal threats, and massive ad campaigns. They have been aided and abetted by advertising and PR companies – Mad Men – remember the TV series – fuelling the madness. I call on these companies to stop acting as enablers to planetary destruction. Stop taking on new fossil fuel clients, from today, and set out plans to drop your existing ones. Fossil fuels are not only poisoning our planet – they’re toxic for your brand. Your sector is full of creative minds who are already mobilising around this cause. They are gravitating towards companies that are fighting for our planet – not trashing it. I also call on countries to act. Many governments restrict or prohibit advertising for products that harm human health – like tobacco. Some are now doing the same with fossil fuels. I urge every country to ban advertising from fossil fuel companies. And I urge news media and tech companies to stop taking fossil fuel advertising.”
The on-going activities of organisations, individuals and PR companies funded by fossil fuel interests did not end in the mid-1990s (even <shocked emoji> in the UK), and have continued in many ways unabated, as Desmog has documented on an almost daily basis https://www.desmog.com. However, now the emphasis is on trying to undermine climate solutions, so as to justify carrying on using fossil fuels, either in electricity generation or in end-uses such as transport and heating. But as the alternatives are now so good, the PR and greenwashing has to be world-class to try to undermine them.
So it was astounding to hear Nick Butler – a Visiting Professor at King’s College – being interviewed on BBC Radio 4’s PM today (5th June 2024) by Evan Davis, being highly critical of the Secretary General’s speech. When asked about fossil fuel companies obstructing public discourse with their lobbying, public affairs, and so on, he said:
“… I think that was the case in the past but from the middle of the 1990s that has changed, certainly for the European companies, certainly BP and Shell, are going in a different direction …” <my jaw drops emoji>
Well, being an ex-BP employee, he would say that, wouldn’t he? He is just one example of what might be called an apologist for climate greenwashing.
And it is incredibly disingenuous to say that adverts for oil and gas don’t appear on TV anymore in the UK. No, but adverts and PR for petrol-powered SUVs, or hydrogen boilers, or … the list goes on. And to say that it’s all our fault for making the wrong choices, as Nick Butler suggested, is really the equivalent of victim blaming. I can’t take an EV bus if there are no EV buses (or indeed no bus service worth talking about), because car manufacturers and fossil fuel interests have been in cahoots to promote gas guzzlers (and are now whining because China actually invested in an EV supply chain and market).
The truth is that between 2010 and 2018, Shell dedicated just 1% of its long-term investments to renewable energy, while paying creative agencies to target influencers to improve the brand’s image, as ClientEarth’s exposé ‘The Greenwashing Files’ reveals. BP and the rest are no different.
You see, they have moved on from the mid-1990s. Then the focus was on full-frontal climate science denial, through a myriad of think tanks and influencers writing for the Daily Telegraph, the Wall Street Journal, and wherever. Now they are more subtle, more devious. “Oh yes, we love renewables”, they will say, but “when the wind doesn’t blow or the sun doesn’t shine, our gas will be needed to generate your electricity”. Gas, I should stress, which they want to grow as a proportion of their business, not phase out at all. It’s almost as if they are trying to gaslight renewables.
We have an example in the UK of fossil fuel interests – the gas network – producing hit pieces on heat pumps, and claiming that green hydrogen is better, even though all the science shows this is not the case (and in any case, it’s a ruse by them to carry on extracting natural gas to turn into hydrogen, which will never be green, because they will never be able to afford to bury the carbon dioxide produced in the process). Yet even the Bosch executive vice-president Stefan Thiel now accepts that hydrogen is a lost cause for heating homes. The industry’s disinformation campaign on just this one attack line has come at a cost: delays in decarbonising UK home heating.
And the greenwashing has been getting worse as the fossil fuel companies try desperately not to be in possession of stranded fossil fuel assets. But they, and their PR / Advertising agencies, are now feeling the heat as one Desmog story Litigation Over Misleading Climate Claims Has ‘Exploded’ Over the Past Few Years reveals:
“Companies are increasingly facing legal action over their false or misleading climate communications, according to a new report examining trends in global climate litigation. That report, released late last week, highlighted a surge in litigation around climate-related greenwashing — what researchers have termed “climate-washing” — over the past few years.”
And to take Shell as an exemplar again, far from “going in a different direction”, as Nick Butler claimed, they are actually reducing investments in renewables because these do not “align” with their strategy to maximise extraction of methane (aka “natural” gas – see what they did there, long ago). They have been pulled up several times for misleading greenwashing advertisements.
As recently as 2022, Shell had some of its adverts banned by the Advertising Standards Authority (ASA) for misleading claims about how clean its overall energy production is, as the BBC reported here.
One can forgive Evan Davis for not being as well briefed as he could be on the history and on-going tactics of the fossil fuel companies to delay the green transition through well funded PR, advertising and influencer campaigns, but it would not be a bad idea for BBC PM to do a follow-up with someone who is well informed.
For example, how about inviting Joana Setzer (Associate Professorial Research Fellow at the Grantham Research Institute on Climate Change and the Environment), and co-author of the report Global trends in climate change litigation: 2023 snapshot, as we know how much the BBC loves a bit of balance.
I was prompted to write this essay after listening to Justin Webb interviewing Ernest Scheyder (author of The War Below: Lithium, Copper, and the Global Battle to Power Our Lives) on BBC Radio 4 Today on 3rd April 2024. I was impressed by the author’s arguments, stressing the need to make informed choices in the way we mine for minerals. I was, however, rather depressed by Justin Webb repeating talking points used by those trying to halt or delay the transition to clean energy. One thing Webb said rather illustrates my point:
“Is it also the case of us thinking whether we can find some other way of powering ourselves in the future that doesn’t involve doing this, because I wonder if that’s what some people at least listening to this are thinking, just going from taking one out of the ground – oil – into taking another thing or another set of things just isn’t the answer, isn’t the long-term answer for the planet.”
The false equivalence between the extraction of fossil fuels and the extraction of minerals used in renewable technologies is so great (the scales differ by a factor of between 100 and 1,000) that philosophers might call it a ‘category error’. I’ll get into the details below, but first I want to address the general issue of harms.
A reduction of harms
Imagine it is the 19th Century and it is proposed that workmen use poles with brushes to sweep chimneys in order to replace children going up chimneys. This is motivated by a need for a reduction in harms to children.
What would you think if someone said that chimney sweeps will harm birds nesting in chimneys and so we shouldn’t rush to replace children? A ridiculous argument, you may think, because it highlights the lesser harm without mentioning the greater harm that is being eliminated.
But that is effectively how many argue against renewable technologies aimed at displacing fossil fuels.
I call it the ‘Fallacy of Perfection’: the idea that any new solution should be developed to a point where it has no discernible short-comings before it can be scaled up to replace the old ways of doing things.
Perhaps the most popular and persistent of the myths relate to the mining of minerals needed for EVs and other renewable technologies. Like a meme that now floods social media, we hear that EVs are not green because of this or that, the implication being that we must find an alternative, or do nothing (which would please the fossil fuel companies – the planet, not so much). The naysayers are delaying getting to net zero, which is time-critical; it’s almost as if they do not take seriously the increasing impacts of man-made global warming!
The Carbon Brief Factsheet included the following graph:
The harms done by fossil fuel extraction and use is the main cause of the climate and ecological crisis we face.
EVs by contrast are like the birds’ nests being disrupted by a chimney sweep. There are issues to be resolved – and they can be, relatively easily – but using these issues as a reason to slow the displacement of fossil fuel use is a dangerous argument that gives succour to those in climate denial.
The impacts from global warming get worse in proportion to the cumulative emissions of greenhouse gases, most crucially carbon emissions from burning fossil fuels. Delaying the point at which we stop burning fossil fuels will only increase the harms that global warming is already causing. These will get worse with each year we keep emitting on the scale we are at present.
In this world, nothing comes with zero impact, and yes, mining for minerals needed for renewables comes with impacts, but we can choose to mitigate those impacts. But let’s get one thing clear, there is no shortage of the minerals we need to get to net zero. We do need to make choices on where we mine, and also the controls we put in place to minimise impacts, both ecological and social, as Ernest Scheyder makes clear. But we do not have the option not to mine at all, if we are serious about mitigating global warming!
But claiming EVs are uniquely problematic ignores the reality of the immediate impacts – such as from the huge spills of oil (Deepwater Horizon disaster for example) or the water pollution from tar sands, and much more – let alone the longer term ones.
People will need to travel in 2050, and whether by bike, train, tram, bus or car, motorised transport is going to be mostly electric (not hydrogen fuel cell vehicles). So we need to use our ingenuity to electrify transport, and do it in the fairest way possible.
So let’s not use the fallacy of perfection as a reason for not rapidly decarbonising transport, which the World Bank has called the ‘low hanging fruit’ of decarbonisation.
Immediate impacts of fossil fuel mining
Fossil fuel extraction has immediate impacts that far outweigh those from mineral extraction, in part because of their scale: the devastation caused by the Deepwater Horizon, the pollution of the Niger delta, and the water problems caused by Canadian tar sands mining have all damaged people’s habitats and livelihoods, and the wider ecology.
Long-term impacts of mining fossil fuels
Fossil fuels are extracted and burned once, but the carbon dioxide they release continues to cause warming of the planet for centuries. To power a fossil fuel economy you MUST keep extracting, and do so until you have exhausted all of that ancient carbon. You cannot reuse the coal, oil or gas once it is burned.
Long-term benefits of minerals for renewables
By contrast, minerals for renewable technologies are just the opposite. They are mined once, but are used continuously, enabling three things:
Firstly, they enable us to use the energy of the sun to generate electricity to travel, heat our homes and much more.
Secondly, these technologies ensure we avoid the emissions we would otherwise make, and do this not once, but for the lifetime of the EV, heat pump or other end-use.
Thirdly, we can then recycle the minerals. So, we have to keep extracting minerals till we have displaced all of the fossil fuel end-use, but once we have (and when recycling is more cost-effective or regulated to be so) we won’t need further mining. We get to a circular economy, because we’ll have enough in the system to reach a steady state of circularity.
We won’t run out of minerals
There is no shortage of the minerals we need to reach a global 2050 ‘net zero’ target. A detailed full life-cycle analysis of demand for minerals shows we can decarbonise our energy production and end-use without optimistic assumptions or modal changes in, for example, transport.
Yes, we have become too dependent on China, but the Earth’s crust provides more than enough.
We can clean up the supply chains
Yes, there are some sources that have a poor environmental and ethical record. The solution is not to abandon the push to electrify transport. The solution is to clean up the supply chains. This can be done in a few ways. Governments can legislate to require better management and monitoring of supply chains; consumers can choose EVs from manufacturers showing commitment to cleaning up the supply chain; and manufacturers may simply make the necessary moves themselves. Tesla has done this (see their Impact Report), showing a commitment to ensuring child and forced labour are not involved in the materials they import.
Final thoughts
One has to wonder what are the underlying motivations, beliefs or biases that allow people to so easily pick up and repeat the myths and poor arguments that surround minerals and renewable technologies such as EVs.
Obviously, the professional climate change deniers do it (whether they believe it or not) because they get well paid to write their odious pieces for The Telegraph, Daily Mail and Wall Street Journal.
What is more puzzling is how often these memes are popular with those who would describe themselves as ‘green’. This is a conundrum that really needs a separate essay, but I think that at its root is a belief that ‘natural solutions’ and changes in society can deliver a greener future free from fossil fuels, with only minimal need to rely on that horrible technological stuff.
This is a fantasy, even while natural solutions do have an important role to play, particularly in restoring nature.
Sometimes this belief is defended using some dodgy discredited ‘science’ about the potential impact of regenerative farming in terms of improved soil carbon sequestration (something I have touched on before in Fantasy Maths and the National Farmers Union).
However, in most cases I think it is a lack of appreciation of the urgency of stopping burning fossil fuels, and of the need to electrify much if not most of our energy end use as soon as possible, powered by renewable electricity generation.
All of us who strive to be green really do have to learn to love the technology, even while we insist on it being deployed in ways that do not perpetuate current injustices, and metaphorically and literally redistribute power.
In a talk I addressed the topic ‘Greening Our Energy: How Soon?’, using recent research [1] to show that the UK could be self-sufficient in energy using wind and solar alone, along with significant levels of long-term energy storage to ensure energy security. The talk also discussed how electrification of much of our energy use reduces the overall energy demand, something that Mackay and others have talked about for years.
A question raised by an audience member was ‘How much energy could a community generate itself?’. This essay aims to answer this question, using my home town of Nailsworth as an example. As I said in the talk, the focus is on wind and solar.
When considering the total carbon emissions we are responsible for (so-called ‘consumption emissions’) studies [2] include literally everything. Including imported goods and produce. However, in terms of future UK generation, it is better to consider just the energy produced and consumed in the UK (the so-called ‘terrestrial emissions’).
We can narrow the scope further by considering those forms of energy consumption that are truly local and therefore best considered as being potentially met in whole or in part by community energy.
The two big ones are:
electrified private cars and public transport (we’ll call these simply ‘transport’).
heat pumps used to heat our homes and offices.
In terms of carbon emissions, these two represent 60% of Nailsworth’s terrestrial emissions, and 40% of our consumption emissions, so highly significant, however they are viewed [2].
Credit: IMPACT: Community Carbon Footprint tool, Centre for Sustainable Energy (CSE)
According to Mackay [3], these would require energy consumption of 18 kWh and 12 kWh per person per day, respectively, in this electrified future world. The total, including all energy needed, would be 68 kWh/p.d, and this is the figure used in the Oxford study referred to in my talk. So these two uses would account for 44% of the total energy consumption.
The total of 30 kWh per person per day for transport and heating implies an average delivered power supply from community energy of 30/24 = 1.25 kW per person in Winter. In Summer we still need hot water, but the great majority of heating is space heating, so we’d need only about 12/24 = 0.5 kW per person in Summer.
Nailsworth has a population of around 5,500, so let’s assume a future population of 6,000, which would imply a power supply required (for transport and heating) of 1.25 kW x 6,000 = 7,500 kW = 7.5MW in Winter, and 0.5 kW x 6,000 = 3,000 kW = 3MW in Summer.
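These figures are easy to check with a few lines of arithmetic. A minimal sketch (Python is my choice here, purely for illustration), using only the assumptions above – 30 kWh per person per day in Winter, 12 kWh in Summer, and a future population of 6,000:

```python
# Back-of-envelope check of the community demand figures.
# Assumptions (all from the text): 30 kWh per person per day for
# transport + heating in Winter, 12 kWh in Summer, population 6,000.
HOURS_PER_DAY = 24

def average_power_kw(kwh_per_person_day, population):
    """Average delivered power (kW) implied by a daily per-person energy use."""
    return kwh_per_person_day / HOURS_PER_DAY * population

winter_mw = average_power_kw(30, 6_000) / 1_000   # 7.5 MW in Winter
summer_mw = average_power_kw(12, 6_000) / 1_000   # 3.0 MW in Summer
print(winter_mw, summer_mw)
```

The same function can be re-run for any population or per-person figure, which makes it easy to test how sensitive the sizing is to those assumptions.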
Now the capacity factors for wind and solar in England [4] are, on average, about 40% (wind) and 3% (solar) in winter, and 20% and 20% in summer.
The winter solar generation depends a great deal on the orientation of the panels – much more so than in summer. I have taken a relatively pessimistic figure, assuming on average East/West orientation, which still provides some energy in Winter but I have based estimates assuming wind alone meets the required demand in winter.
So let’s start with winter, where we will discount solar [5]. Applying the capacity factor of 40% (in this case, dividing by 0.4), the 7.5 MW of delivered power would require 7.5MW/0.4 = 18.75MW of wind capacity. Let’s round that up to 20MW.
For onshore wind turbines, we cannot use the largest ones available and are potentially restricted to, say, 5MW turbines. Just four of these would meet the power requirement of 20MW. Currently we have one 500kW wind turbine high above Nailsworth, owned by Ecotricity. With that precedent established, changing public attitudes, and both Stroud District Council and Nailsworth Town Council having declared a climate emergency, one would hope this could be implemented, especially as a community energy scheme.
Now, should we increase the capacity to deal with peaks in demand or lulls in wind? No, in my view. Community energy will be connected to the grid. When Nympsfield above Nailsworth is having a lull, other community sites around the country, and indeed large resources such as North Sea wind farms, will be able to take up the strain.
A national energy storage strategy would deal with more extreme lulls that cover most of the country, as discussed in the talk.
Moving now to Summer, the four wind turbines proposed would deliver (now multiplying the wind capacity by the summer capacity factor), 20MWx0.2 = 4MW, so we’d need solar to deliver the remaining requirement of 7.5-4 = 3.5MW. Using the capacity factor for solar in Summer (at 20%, twice as good as the average for the year, 10%), that gives us a required solar PV capacity of 3.5MW/0.2 = 17.5MW.
The average domestic solar PV installation in the UK has been 3.5kW, but with improved panels let’s round this to 4 kW. Assuming that the average home has 3 occupants, we anticipate 2,000 dwellings. They could provide a capacity of 2,000 x 4kW = 8MW, or about 45% of the solar capacity required. Yes, I know many live in flats, but the goal here is to look at broad brush feasibility.
Ground mounted solar would then need to deliver 9.5MW. It’s been estimated that “Approximately 25 acres of land is required for every 5 megawatts (MW) of installation while 6 to 8 acres will be needed for a 1MW farm” [6]. So let’s assume 1MW parcels at an average of 7 acres each. We’d need 9.5 x 7 acres, or about 70 acres.
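The whole sizing chain above can be reproduced in a short script. This is just a restatement of the essay’s own arithmetic, not new data; the capacity factors and the choice to size solar against the 7.5 MW figure follow the text:

```python
# Reproduces the sizing arithmetic from the text. Assumptions (all from
# the essay): 7.5 MW winter demand met by wind alone, wind capacity
# factors 40% (winter) and 20% (summer), solar 20% in summer,
# 2,000 homes with 4 kW of rooftop solar each, ~7 acres per MW of
# ground-mounted solar.
winter_demand_mw = 7.5

wind_capacity_mw = winter_demand_mw / 0.4     # 18.75 MW, rounded up below
wind_capacity_mw = 20.0                       # round up to 4 x 5 MW turbines

summer_wind_mw = wind_capacity_mw * 0.2       # 4 MW delivered in summer
solar_needed_mw = winter_demand_mw - summer_wind_mw   # 3.5 MW shortfall
solar_capacity_mw = solar_needed_mw / 0.2     # 17.5 MW of solar PV capacity

rooftop_mw = 2_000 * 4 / 1_000                # 8 MW from domestic rooftops
ground_mw = solar_capacity_mw - rooftop_mw    # 9.5 MW ground mounted
acres = ground_mw * 7                         # ~66.5 acres, i.e. about 70
print(wind_capacity_mw, solar_capacity_mw, ground_mw, acres)
```

Changing any one assumption (say, a 30% winter wind capacity factor) propagates through the rest of the chain automatically, which is the main reason to write it down like this.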
To give a sense of scale, Minchinhampton Common is 182 hectares or 450 acres, so we’d require the equivalent of 15% of its land area. This is not a proposal to use this common, I should stress, just to give a sense of scale and feasibility. Nevertheless, shade (for our grazers and humans alike) will come at a premium by 2050 [7], so who knows?
This feels like a doable number.
To the extent that domestic solar cannot be fully deployed, ground mounted solar could be increased, or solar on commercial or civic buildings could take up the strain. I haven’t included these, but they could make a substantial contribution (actually, they are already making one), albeit not necessarily classed as ‘community energy’.
The question naturally arises as to whether Nailsworth could use small hydro power from its streams, or even a mini Dinorwig for energy storage, harking back to the Mill Ponds used during the 19th and 20th Centuries, when they provided some energy resilience to the wool mills of the town. It could of course play a role, and even at a scale which is less significant numerically [8], it could help in enabling local energy resilience [9]. There is strength in diversity, as nature teaches us.
Research on renewables offers up some pleasant surprises in how different forms of it can complement and support each other [10]. All of this is detail to explore of course.
My main goal in this essay was to establish if Community-based renewables – and specifically wind and solar – could compete in relevance with the large national assets such as North Sea wind, and thus provide a strong case for Community Energy schemes.
The answer is a definite yes.
Community Energy could provide a significant percentage (over 40%) of the terrestrial energy demand of a town like Nailsworth, throughout the year. This would shift the control of energy, to a significant extent, away from large commercial assets, and could have untold benefits for local communities [11]. Nationally, such diversified and highly dispersed resources would enhance energy security for the whole country.
3.1) Note that 68 kWh/p.d for a 60m population, say, in 2050 would amount to a UK energy demand per year of 68 kWh/p.d x 60m p x 365 d/y = 1,489 TWh/y – the total energy requirement that the Oxford Study shows can be achieved with wind and solar (actually, they show we could quite feasibly do double that without excessive use of land or sea area).
3.2) Note that (18+12)/68 = 0.44, or 44%.
But be careful not to assume that this means 44% of our consumption emissions are eliminated by transport and heating, as it depends on the carbon intensity of different processes. It could be more or less. Actually, due to relative efficiencies, moving to the electrification of heating in particular, and also transport, makes a very good contribution to displacing carbon-creating energy usage. As a percentage of our terrestrial emissions, transport and heating amount to about 60%.
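As a sanity check on the footnote arithmetic, here is a quick sketch re-running the numbers quoted above:

```python
# 68 kWh per person per day, population 60 million, 365 days a year:
national_twh = 68 * 60e6 * 365 / 1e9   # kWh/year -> TWh/year, ~1,489 TWh
# Transport (18) plus heating (12) as a share of the 68 kWh/p.d total:
share = (18 + 12) / 68                 # ~0.44, i.e. 44%
print(round(national_twh), round(share, 2))
```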
[7] So who knows what solutions will be needed to provide shelter from the heat?
[8] I’m emotionally attracted to the gravitational storage / micro hydro idea. After all, the Mill Ponds around Nailsworth kept the mills running when the streams ran slack. It’s part of our history. But then again, Dunkirk Mill needed only about 16kW to run, a thousandth of what we are now considering, and even 20 of these would not come close to matching the vastly greater energy footprint of modern society. The Centre for Alternative Technology’s Zero Carbon Britain report includes an estimate of 8 TWh of generation from hydro (including large and micro) for the UK, so about 1% of the total.
[9] Assuming 20 reservoirs at 100m above their twins on the valley floor, each holding 10,000 cubic metres of water, and a round trip efficiency of 75%, one could store about 40 MWh of energy, a not inconsiderable amount. If each reservoir used a 100 kW turbine (not the largest micro turbine, but illustrative) then they would generate in total 2 MW, or nearly 30% of the Nailsworth average power demand, although at full power the reservoirs would be exhausted in 20 hours. If larger turbines were used, the duration at full power would decline in proportion (e.g. if 500 kW, then in 4 hours).
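The ~40 MWh figure follows from the gravitational potential energy formula E = mgh. A quick check, under the same stated assumptions (20 reservoirs, 10,000 m³ each, 100 m head, 75% round-trip efficiency):

```python
# Rough check of the reservoir storage estimate.
G = 9.81      # gravitational acceleration, m/s^2
RHO = 1_000   # density of water, kg/m^3

def reservoir_mwh(volume_m3, head_m, efficiency):
    """Recoverable energy (MWh) from one raised reservoir: E = m*g*h * eff."""
    joules = RHO * volume_m3 * G * head_m * efficiency
    return joules / 3.6e9  # joules -> MWh

total_mwh = 20 * reservoir_mwh(10_000, 100, 0.75)   # ~41 MWh in total
hours_at_full_power = total_mwh / 2.0               # 20 x 100 kW = 2 MW -> ~20 h
print(round(total_mwh), round(hours_at_full_power))
```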
For storage, micro hydro would have to compete with (or maybe collaborate with!) domestic or small scale batteries. For example, if each household had a battery with 100kWh storage, then 2,000 of these would total 200 MWh, equivalent to 200MWh/7.5MW = 26.7h, so about 1 day’s worth of storage. That again is pretty significant local resilience to augment the massive national (30 day) storage capacity discussed in the essay.
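The battery comparison in the same style (note the 100 kWh per household is the essay’s illustrative figure, far larger than today’s typical domestic battery):

```python
# 2,000 households x 100 kWh batteries vs the 7.5 MW average winter demand.
storage_mwh = 2_000 * 100 / 1_000   # 200 MWh of aggregate storage
duration_h = storage_mwh / 7.5      # ~26.7 hours, about a day's worth
print(round(duration_h, 1))
```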
[10] While either micro hydro or batteries may have limited capacity, they could make an extremely significant contribution to balancing the local grid over a day or so, and that could in its turn relieve pinch points in the distribution grid when there are short term mis-matches between supply and demand. Indeed, I wrote a piece – Small Is Beautiful – local renewables and storage can catalyse the greening of grid – based on some modelling in the USA that showed that even small amounts of local solar could have a disproportionately large impact in enabling increasing grid-scale wind resources. Similar modelling of a diverse array of renewable assets could reveal other pleasant surprises.
[11] A Community Energy scheme could, if set up right, ensure that it incorporates energy security for all as a founding principle, using profits to help fund the retrofitting (insulation, solar, heat pumps, etc.) of poorly built or maintained accommodation and social housing, for example.
This article includes a video and almost verbatim text of a 40+ minute public talk at Sawyer Hall, Nailsworth, on Thursday 22nd February given by Dr Richard Erskine, Education Lead for Nailsworth Climate Action Network. The numbered paragraphs refer to the slides, but only some of the illustrations from the slides are included in the text below. It’s better to play the video to see the slides alongside hearing the words. At the end of this piece are Acknowledgements, References, and some Questions and Answers from the event.
Update 13 March 2024: The report from the House of Lords on Long-duration Energy Storage: Get On With It underlines the importance of storage in getting us to a fully clean energy system, dominated by renewables, as the talk sets out.
Update 16 April 2024: A great new blog, Is a 100% Renewable Energy Grid Possible? by scientist Michael de Podesta, also comes to the conclusion that 100% renewables can be achieved.
A recorded online version of the talk can be found here:
The almost verbatim script follows:
1 Greening Our Energy: How Soon
Thank you for coming to this presentation. I really want to help you to understand the potential for solar and wind to get us off oil and gas, and fossil fuels in general. You’ll no doubt know – if you’ve looked in the paper or on social media – that there are lots of opinions flying around about what’s possible. I think it’s very easy to feel bamboozled by some of the big numbers that are thrown around and the statements made.
Can you put your hands up if you feel a little bit bamboozled sometimes by what you read? I’ll put my hand up as well.
2 Net Zero by 2050?
Well, that’s understandable, but the interesting thing is that when we look at a recent survey [1], eight out of 10 Britons were concerned about climate change, and over half of them (52%) thought that the net zero target for 2050 should be brought forward.
3 Questions we aim to address
I want to try to address three questions in this talk:
Firstly, could all of our future energy demand be met by wind and solar? Now, that’s not to discount other forms of low carbon energy, but if we can show that it can be done with wind and solar, then of course any other forms of low or zero carbon energy that are available will simply make that goal easier to meet, not harder.
Specifically, nuclear has met about 20% of our generation needs over recent decades, and it is likely with new projects to continue to do so, and given the climate crisis it would be foolish to stop this, but I want to focus here on wind and solar.
Secondly, what are the opportunities and hurdles on this journey, and
Thirdly, how soon can we do it?
4 My Journey
We are all on a journey.
My journey on these questions started with David Mackay’s famous book ‘Sustainable Energy – without the hot air’ [2], which was very influential. It was forensic in working out the carbon intensity of everything we own or do, and in looking at the possible ways to get off fossil fuels.
I later attended the course Zero Carbon Britain (ZCB) at the Centre for Alternative Technology (CAT) [3] in Machynlleth, and this inspired me to believe that we could actually do it.
5 New look at Mackay’s UK numbers
The David Mackay analysis was brilliant but it suggested that we would struggle to get to 100% renewables because the one thing Brits are all good at is saying: “NO!”. No to this solar farm; no to those new pylons; and so on. He was accused of being pro nuclear because he felt we unavoidably would need plenty of it. He replied that he was only in favour of maths.
However, solar and wind costs have fallen dramatically. British opinion is now firmly in favour of renewables. This recent paper [4] – I’ll refer to it as the ‘Oxford paper’ – has revisited Mackay’s numbers and I will share their findings.
6 … and not forgetting a global view
And let’s not forget the paper [5] that made a big splash because it showed that increasing innovation could result in trillions of dollars of cost savings if the world pushed hard for renewables.
7 An over abundance
People talk about fusion power which is famously always 50 years away. But we have a fantastic fusion reactor up in the sky – it’s called the Sun! The Sun deposits 170,000 Terawatts of energy on the Earth, which is about 10,000 times more than humanity needs currently. And – wait for it – Shell pointed this out in 2005! [6]
Is anyone really saying that humanity doesn’t have the ingenuity to harvest just one 10,000th of the energy we get from the Sun? Are they really saying that we cannot use this massive over abundance of energy from the Sun?
8 How is UK doing so far?
Before looking into the near future, let’s do a check on where we are. We keep being told we have been leaders. So are we?
Yes and no.
We were certainly leaders when we passed the Climate Change Act in 2008, and when the 2050 Net Zero target was incorporated in 2019. Now look at the numbers.
9 Fossil fuel (FF) UK in 2017
The picture below from the Centre for Alternative Technology (CAT) Zero Carbon Britain (ZCB) report [3] shows that primary energy in 2017 was 80% fossil fuel [I chose CAT’s graphics in part because they are better for clear and uncluttered communication of the information than many other sources – in the talk I overlay some of their slides with key messages in text boxes].
Primary energy is the inherent energy in a lump of coal for example. But when you burn fossil fuels you lose energy in heat that doesn’t do useful work and so you lose 25% of that.
The other key point is that only 20% of delivered energy was electrified in 2017.
10 Decline of coal in electricity
This Our World In Data graphic shows what has happened in the last 40 years. Coal in electricity generation has dropped from 60% to almost zero [7] in that period. This was a great achievement.
11 The dash for gas
But again, just looking at the electricity generation, gas generation has displaced coal [8]. A cleaner form of energy, but still a fossil fuel putting carbon dioxide into the atmosphere.
12 Renewables growing fast
But renewables, due to their plummeting costs, have been growing really fast over recent decades, nearing 40% of annual electricity generation. Half of this comes from wind. [9]
13 Good news, bad news 1/2
Good news, Bad news.
Whether it’s the Climate Change Committee, the National Audit Office or lawyers: they’re telling the government to pull its finger out if we’re going to get to a clean grid by the middle of the next decade.
Climate Change Committee’s 2022 Progress Report showed that only 8 of 50 key indicators were on track. A pretty dismal performance by the Government.
As Victoria Seabrook of Sky News reported [10], The National Audit Office has also been damning:
“The longer it takes before government finalises its delivery plan, the greater the risk that it won’t achieve that ambition to decarbonize power by 2035, or that doing so will cost consumers more” Simon Bittlestone, National Audit Office, Director of value for money studies.
Energy bills may rise again without government plan to deliver 2035 clean power target, NAO warns – A missing plan to decarbonise Britain’s electricity network is costing households, the report warned. The NAO audit prompted calls for government to lift a de facto ban on onshore wind.
Government lawyers are warning of a risk of litigation against the Government for its laggardly performance.
14 Good news, bad news 2/2
The good news is we’ve been cleaning up the electricity that is currently in use, but we’ve been slow to electrify the large chunks of the economy that are currently not electrified. Transport and heating are the two big ones.
15 Demystifying energy
I want to spend some time demystifying energy a bit.
16 What is energy?
The great American physicist Richard Feynman said that “Energy is a very subtle concept. It is very, very difficult to get right.” [11]
There’s energy in a food bar, in a battery, and in some petrol. They seem totally different – like chalk and cheese!
For our purposes, we can say energy is any source of usable power. It comes in different forms, chemical, solar, nuclear, electrical, etc.
It can be transformed from one form to another, usually with some loss of energy in the process.
This is quite important.
17 Power vs Energy
I want to use an analogy of pouring water. This 40 watt lightbulb uses nearly a hundred times less power than the kettle. So think of the rate of water flow as equivalent to the power being used. The energy consumed is then analogous to the total amount of water poured over a certain period of time. Run the 40 W lightbulb for 24 hours and you will have used about one kilowatt hour (kWh) of energy. The 3 kW kettle draws much more power – analogous to pouring the water much faster – so will get to 1 kWh of energy used much sooner (in just 20 minutes).
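The analogy in numbers – a minimal sketch of energy = power × time, using the bulb and kettle above:

```python
# How long each appliance takes to use 1 kWh (1,000 Wh) of energy.
def hours_to_use_1_kwh(power_watts):
    return 1_000 / power_watts

bulb_hours = hours_to_use_1_kwh(40)       # 25 h: the 40 W bulb takes about a day
kettle_hours = hours_to_use_1_kwh(3_000)  # ~0.33 h: the 3 kW kettle takes 20 minutes
print(bulb_hours, kettle_hours * 60)
```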
18 Power & Energy on UK scale
If we scale power and energy up to our UK population of 60 million or so, everything is multiplied by tens of millions.
So it’s typical for the UK currently to draw 40 million kW of power, which we call 40 gigawatts (40 GW) for short. Over a year – which is 24 x 365 hours – this amounts to 350,000 gigawatt hours, which we can call 350 terawatt hours (350 TWh) for short.
Country sized power tends to be in tens of GW, whereas country sized energy per year is in hundreds of TWh.
So does that mean we simply install 40 GW of wind, or a bit more to deal with less windy days or days with more demand? Not quite!
19 ‘Capacity factors’
The nameplate power rating of a wind turbine does not reflect the fact it is not always windy. The capacity factor is an adjustment that takes account of this.
For wind in the UK, the capacity factor is 40%.
For solar, given our highish latitude, the capacity factor is 10%.
So 1 GW of offshore wind delivers 3.5 TWh of energy per year.
Whereas 1 GW of solar delivers 0.9 TWh of energy per year.
So why not ignore solar for the UK?
Because in winter, wind is high and solar is low, whereas in summer, wind is low and solar is high.
Most of the time, they compensate for each other in a very effective way.
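The conversion from nameplate capacity to annual energy can be sketched in a few lines, using the capacity factors quoted above (wind 40%, solar 10% for the UK):

```python
# Annual energy (TWh) delivered by a given nameplate capacity (GW).
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(capacity_gw, capacity_factor):
    """Capacity (GW) x capacity factor x hours in a year, converted to TWh."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1_000

print(annual_twh(1, 0.40))   # 1 GW of wind  -> ~3.5 TWh/year
print(annual_twh(1, 0.10))   # 1 GW of solar -> ~0.9 TWh/year
print(annual_twh(40, 1.0))   # 40 GW drawn continuously -> ~350 TWh/year
```

The last line also recovers the earlier figure: a steady 40 GW national draw amounts to about 350 TWh over a year.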
20 Demystifying efficiency
Efficiency is a really key topic when we look at how we use energy. If there are two ways of getting the same result but one uses twice as much energy, it means you’ll need twice as many resources to achieve the result.
21 What is energy efficiency?
The result we want with a lightbulb is to light up a small room. Suppose we need 400 Lumens of light to do that. Then a 40W Incandescent light bulb would do that job. But it comes with large energy losses. Only 10% or less of the electricity put in is transformed into light. So we say it has an efficiency of 10% or less. [12]
22 Less energy loss improves efficiency
With a 6W LED lightbulb we can still get the result we want – 400 Lumens of light output – but with much less loss of energy through heat loss. So the efficiency increases to typically 60%.
23 Electrification revolution
The electrification revolution is key to achieving greater efficiency, because in those areas of energy use where we burn fossil fuels, there are often huge inefficiencies.
24 Faraday invented ability to turn motion into electricity
There is an apocryphal story that Michael Faraday was asked by a politician “what use is electricity?” and he replied “What use is a baby?”. That baby has been through its childhood and is now ready to enter adulthood.
The electrification revolution is really not new. Michael Faraday showed how to turn motion into electricity, and the reverse of this, to turn electricity into motion. This is what an electric drill does.
Over the two centuries since Faraday and others made their discoveries various forms of energy use have been electrified. Candles were replaced by light bulbs; Mills moved from water power to electric power; Electric washing machines and other household devices replaced muscle power – arduous manual work.
But some aspects of our lives ended up being powered by burning fossil fuels, and we turn to these now.
25 Petrol/ Diesel Cars
An internal combustion engine burns petrol/ diesel to create power at the wheel, but it loses at least 70% of the primary energy in the fuel, so a petrol/diesel car only has an efficiency of 12-30% overall [13].
26 Electric Vehicle (EV) cars
An Electric Vehicle has much lower losses to create the same forward motion, losing about 20% of the energy stored in the battery. So overall, an EV has an efficiency of 77% [13]. EVs with regenerative braking can achieve even higher efficiency.
Great Britain currently uses the equivalent of 445 TWh from petrol and diesel road vehicles. As we’ve just calculated, if this was electric we’d need just 118 TWh. Almost 4 times less [14].
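As a rough illustration of where that ratio comes from, here is a sketch with round numbers of my own choosing – an ICE efficiency of 20% (within the 12-30% range above) and the 77% EV efficiency – so it lands near, though not exactly on, the 118 TWh figure quoted:

```python
# Same useful energy at the wheels, very different input energy.
ICE_EFFICIENCY = 0.20   # illustrative, within the 12-30% range quoted
EV_EFFICIENCY = 0.77    # overall EV efficiency from the text

petrol_diesel_twh = 445                           # current GB road fuel use (from text)
useful_twh = petrol_diesel_twh * ICE_EFFICIENCY   # energy actually moving vehicles
ev_electricity_twh = useful_twh / EV_EFFICIENCY   # electricity needed for the same motion
print(round(ev_electricity_twh))                  # ~116 TWh, close to the 118 quoted
```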
27 Gas boiler for heating
A Gas Boiler is designed to create heat, but there are still heat losses that do not go towards heating rooms or water for taps. A modern condensing boiler can have an efficiency of almost 90% (although they are often set up poorly and so do not achieve this level).
So for every 1 kWh of primary energy in the gas that is put in, 0.9 kWh of heat is delivered to the house.
28 Heat pump for heating
Before I talk to this slide, who has a heat pump?
OK, just a few.
Hmmm, who has a fridge? Looks like most or all of you. But did you know that a fridge uses a heat pump?
A heat pump is a device, invented in the 19th century, for moving heat from one place to another. A fridge moves heat from inside the fridge to outside it.
For heat pumps used for heating, an air-source heat pump harvests the ambient energy in the air outside the house, concentrates it, then moves it inside the house for space heating or water heating.
Then 1 kWh of electricity is augmented by 2 kWh of thermal energy from the environment, resulting in 3 kWh of delivered heat inside the house. This is an effective efficiency – or Coefficient Of Performance (COP) – of 300% [15].
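A minimal sketch comparing heat delivered per kWh of input, using the 90% boiler efficiency and the COP of 3 from the talk:

```python
# Heat delivered per kWh of input energy: condensing boiler vs heat pump.
BOILER_EFFICIENCY = 0.9   # modern condensing gas boiler (from the talk)
HEAT_PUMP_COP = 3.0       # 1 kWh electricity + 2 kWh ambient heat = 3 kWh [15]

def heat_delivered(input_kwh: float, factor: float) -> float:
    """Heat (kWh) delivered into the house for a given energy input."""
    return input_kwh * factor

gas_heat = heat_delivered(1.0, BOILER_EFFICIENCY)  # 0.9 kWh per kWh of gas
hp_heat = heat_delivered(1.0, HEAT_PUMP_COP)       # 3.0 kWh per kWh of electricity
print(f"Heat pump delivers {hp_heat / gas_heat:.2f}x the heat per kWh of input")
```

Note this compares delivered heat per unit of *final* energy in; the full primary-energy comparison depends on how the electricity is generated.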
29 Heat pumps usable in any home
And contrary to received opinion, heat pumps can heat any house that a gas boiler can heat [16]. The key ingredients are a proper house survey; appropriate sizing of the system elements; and properly trained technicians installing the system.
(Note: I wrote a blog that attracted a lot of attention, ‘Insulate Britain: Yes but by how much?’, that provides a riposte to the idea that ‘deep retrofit’ is needed before one gets a heat pump.)
30 Electrification is future-proofing
As David Mackay observed, electrification is future-proofing. The end users of electricity, be they light bulbs, heat pumps, cars or industry, really don’t care where the electricity comes from.
And if new forms of energy prove to be advantageous in the future, we can simply plug them into the grid.
31 How much energy will UK use in the future?
So how much will the UK use in the future?
32 Electrification reduces demand
The CAT ZCB report estimated that in a UK that has stopped burning fossil fuels, where most energy use is electrified, energy demand would be reduced by 60% to around 700 TWh.
They included quite ambitious goals for improved public transport, but others such as the Oxford study [4] referred to earlier have come to a similar estimate without assuming major behavioural change.
33 What about new demands in 2050?
The Oxford Study conservatively doubled this figure (to 1400 TWh) to allow for new or novel demands such as generative AI [17], synthetic meats, direct carbon capture, etc.
The Royal Society [18] estimates we’d need 100 TWh of hydrogen storage.
So in total, a generous 1500 TWh is the estimated 2050 figure to meet the mainly electrified demand.
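The arithmetic behind the 1500 TWh figure is simple to lay out:

```python
# The 2050 demand estimate assembled step by step (figures from the talk).
electrified_demand_twh = 700    # CAT ZCB estimate after electrification
novel_demand_twh = 700          # Oxford study's conservative doubling [17]
hydrogen_storage_twh = 100      # Royal Society hydrogen storage estimate [18]

total_twh = electrified_demand_twh + novel_demand_twh + hydrogen_storage_twh
print(f"Estimated 2050 demand: {total_twh} TWh")
```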
34 Is it feasible with wind and solar alone?
But is this feasible with wind and solar alone?
35 UK has best wind resources in Europe
The UK has the best wind resources in Europe [3], so we are very lucky in that regard.
It’s interesting to note that at the start of the Industrial Revolution, Britain had as much energy reserves in the form of coal, as Saudi Arabia was discovered to have in the form of oil in the 20th century [2].
Now, the UK could use its wind resources to power a new Green Industrial Revolution. How lucky we are if we are wise enough to grasp the opportunity.
36 Hornsea wind farm phases
The Hornsea wind farms [19] phases 1, 2 and 3 in the North Sea will deliver 129 TWh per year, and wind farms such as these can be constructed pretty quickly. We just need to accelerate the planning processes for additional wind farms.
Floating wind resources in the deeper waters further north will benefit from even stronger wind.
37 Plenty of space to spare
Overall, having assessed the feasible use of land and sea area, the Oxford study concluded that they could even double the 1500 TWh energy supply.
With 1500 TWh, the land and sea areas required to meet the demand are modest. As a comparator, golf courses take up 0.5% of UK land.
And we won’t run out of minerals either, as a comprehensive study has demonstrated [20].
38 Infrastructure & End-use growing – in parallel
There is a perverse argument used to question the rise of EVs – not enough charging points – or heat pumps – not enough installers. As with every transition, growth in a new end-use is accompanied by growth in its twin, the new infrastructure: like twins running a marathon together.
39 What about variability in wind and solar?
The question naturally arises as to the variability of wind and solar. The extreme scenario is an anti-cyclone stuck over Britain for 2 weeks with poor wind and solar power generation.
[Note added 14-5-24: An analysis of the general requirements for storage (both in terms of energy stored and power capacity) is available at Storage Lab [27].]
40 Mismatch!
Even in less extreme, or quite normal situations, the supply might be much more than needed sometimes and less than what is needed at other times.
How would we deal with this mismatch?
There are many ways to try to address this. Over a day’s cycle, we can shift demand using a smart grid and smart tariffs. We can get more or less electricity from our European neighbours. There is scope for large degrees of flexibility in the system, to flatten the peak demand.
41 Over-build option
But over slightly longer periods, like a week, we need to do more. One way is simply to build more capacity than we need – this is termed ‘over-build’ – which obviously comes at a cost; but the costs of both solar and wind have plummeted, so this is clearly an option.
42 Energy storage option
Another option is to store excess energy – when the wind is blowing hard – and pay it back to the grid when there is a shortfall in supply. This also comes at a cost: building the storage systems and the means for regenerating the power. The choice between over-build and storage depends in part on their relative unit costs.
For the extreme case of a persistent anticyclone, over-build alone cannot fix the problem – more turbines on a windless day won’t cut it. So storage has to be at least part of the solution.
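The over-build versus storage trade-off can be sketched as a toy cost model. All unit costs below are invented round numbers purely to illustrate the comparison; they are not from the talk or its references:

```python
# Toy trade-off between over-build and storage for covering a supply shortfall.
# All unit costs are hypothetical, chosen only to show the shape of the choice.
shortfall_twh = 10.0          # energy gap to cover over some period

overbuild_cost_per_twh = 50   # extra generation capacity (hypothetical units)
storage_cost_per_twh = 80     # storage build + regeneration (hypothetical units)
storage_round_trip = 0.5      # fraction of stored energy recovered (assumed)

overbuild_cost = shortfall_twh * overbuild_cost_per_twh
# With round-trip losses, more energy must be stored than is needed back
storage_cost = (shortfall_twh / storage_round_trip) * storage_cost_per_twh

cheaper = "over-build" if overbuild_cost < storage_cost else "storage"
print(f"Over-build: {overbuild_cost:.0f}, storage: {storage_cost:.0f} -> {cheaper}")

# Caveat: in a prolonged lull, over-built turbines produce nothing,
# so this cost comparison only applies when there is *some* generation.
```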
43 High wind & solar challenge
But interestingly, studies have shown [21] that the need for massive storage only becomes significantly pressing when the fraction of energy from wind and solar exceeds 80% of the total energy mix (which is what this graphic by Ken Caldeira is showing).
44 Batteries will play a role for shorter term
One of the beauties of renewables is that they can exist at multiple scales – for a homeowner, for a community, for a region and for a country. Storage too can exist at these different scales, as many households that use batteries alongside their rooftop solar PV systems can attest.
Large battery units are already playing a role in helping to ease pressure points on electricity grids.
45 Dinorwig hydro energy storage
At a larger capacity, the Dinorwig hydro plant provides large-scale energy storage, able to respond very fast to peaks in demand or losses in supply [22].
It’s a strategic asset for the UK but, again, it would not be enough to deal with long-term or inter-seasonal storage needs.
46 Hydrogen long-term storage
A recent Royal Society report on long-term storage has concluded that hydrogen will play a key role. Hydrogen can be created using electricity when there is an excess of wind or solar, stored, and then used to generate electricity via a fuel cell, putting energy back on the grid when we need it.
In East Yorkshire alone, there are 3000 potential salt cavern locations totalling 366 TWh of stored energy [18].
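A rough sense of the economics comes from the round-trip efficiency of power-to-hydrogen-to-power. The efficiency figures below are my illustrative assumptions, not figures from the Royal Society report:

```python
# Rough round trip for power-to-hydrogen-to-power storage.
ELECTROLYSIS_EFF = 0.7   # electricity -> hydrogen (assumed, for illustration)
FUEL_CELL_EFF = 0.5      # hydrogen -> electricity (assumed, for illustration)

def electricity_recovered(surplus_twh: float) -> float:
    """Electricity returned to the grid after a hydrogen-store round trip."""
    return surplus_twh * ELECTROLYSIS_EFF * FUEL_CELL_EFF

surplus = 100.0  # TWh of excess wind/solar sent to storage
round_trip = ELECTROLYSIS_EFF * FUEL_CELL_EFF
print(f"Recovered: {electricity_recovered(surplus):.0f} TWh "
      f"({round_trip:.0%} round trip)")
```

The low round trip is why hydrogen makes sense for long-duration storage of otherwise-curtailed surplus, not for everyday use.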
(Editorial Note: I should stress that hydrogen in this context is for long term storage, NOT for heating. As the Climate Change Committee has projected, heating will be mostly met by domestic heat pumps and district heating – and the district heating itself will often be community-scale heat pumps. The reason for this is the vastly greater energy efficiency of using heat pumps as compared to burning hydrogen in our homes. But this is another talk!)
47 Modelling is key to ensure feasibility
Modelling of the whole system is key, including real-world weather and demand data to test the feasibility of potential solutions, over individual months …
48 Ensuring balance during extremes
… but we must also model system behaviour over decades.
The Oxford study looked at weather data over a 40-year period to seek out worst-case lulls.
It is always very odd that newspaper articles or social media posts raise the issue of lulls as though it is a gotcha discovery. Unsurprisingly, scientists and engineers are not stupid and have of course included the issue of lulls in their projections.
49 Revisiting the questions 1/3
So let’s revisit the questions I raised. Yes, we could meet future demand using just wind and solar. There is an over-abundance of renewable energy and it is an effectively limitless resource.
And the Climate Change Committee broadly agrees, based on their recent report ‘Delivering a reliable decarbonised power system’ – although some of the details differ, and at least in the medium term they anticipate reliance on gas turbines with CCS as backup.
There are many permutations, as to the detailed plans for the transition, but the end-goal feasibility question is settled.
50 Revisiting the questions 2/3
The opportunities are legion:
to stop damaging the planet;
to have clean air in our homes and towns;
to stop being reliant on petro-states and volatile international energy markets;
and to create a new, vibrant economy based on green energy.
The hurdles are also there:
regulations and an ossified planning regime that has slowed deployment of onshore wind, solar and grid connections.
We also need electricity market reforms.
But the biggest hurdle of all has been the lack of long-term thinking and political leadership at all levels of government.
51 Revisiting the questions 3/3
I think that ‘How soon’ is a poorly defined question:
How soon to displace the current gas generating capacity?
Or how soon to electrify the 80% of demand that is not yet electrified?
Those are two different targets.
The key ‘How soon’ is really: how soon will we have a government committed to a fully fledged plan to mobilise the economy – including the talents, skills, regulations and incentives needed – to start us on an accelerated path to net zero?
The Oxford paper’s recommendations are:
Remove barriers to new solar and wind energy capacity.
Continue to incentivise accelerated solar and wind energy investment.
Invest in storage solutions, grid upgrades and, where necessary, grid services.
52 Fast transformations not new
These photos of a New York street show the change from horse-drawn carriages to petrol cars in just 13 years, from 1900 to 1913.
Transitions can be very fast, if the will is there.
We just need to stop the mixed signals to the public and to industry, and push on hard.
53 Final reflections – Embrace optimism
One of the lessons that’s been important for me to learn is that it’s possible to believe both that things are deeply worrying and that some positive changes are in train, thanks to the work of many people.
A sustainable future is possible if we make progressive choices, for people and planet. It’s ok to be optimistic about the future, while recognising the challenges we face. Resigning oneself to catastrophe is a recipe for inaction and despair, and I for one reject that choice.
I’d recommend Hannah Ritchie’s recent book ‘Not the End of the World’ for anyone wanting a boost of positive thinking on the choices and opportunities we have to build a sustainable future for people and planet [23].
54 Final reflections – System change more than mere substitution
System change, not mere substitution: 30 million EVs is not the answer to 30 million petrol and diesel cars (and how many do we actually need?).
We need less clogged-up, people-friendly, walkable towns and cities.
Electrification of improved bus, tram and rail services is also key, alongside EV cars.
55 Final reflections – We need head, hand and heart
I’d like to close by returning to Machynlleth, and the Centre of Alternative Technology, where my journey began.
While there studying energy futures, we also found time to spend with nature.
Here are two fellow students Sarah and Rosie who placed their hands on a tree for me.
A green energy transition is essential to save the planet, and create a new thriving economy and society which enjoys abundant energy enabling education, health and agricultural benefits in impoverished communities [24].
But it’s not inevitable that head, hand and heart will work together to create a fairer world.
We must therefore strive to put communities at the heart of everything we do, to decentralise power as far as possible, and not to perpetuate current injustices.
56 Thank you
Now please, can we have questions?
Please keep questions short as I will repeat each question to ensure everyone can hear the question and my response.
After Q&A, we can break up and move around, get a cuppa, and mingle. NailsworthCAN would very much like to share what we have been doing and to hear from you. We are keen to continue to tap into the talents and ideas of the community.
Richard Erskine, 2024
……………………………………………………………………………………………….
ACKNOWLEDGEMENTS
The talk includes insights from many people: Ken Caldeira [21], Richard Hellen [25], David Mackay [2], Hannah Ritchie [23] and Rupert Way [26] to name just a few.
And from many institutions: The Centre for Alternative Technology [3], Our World In Data [7-9], Oxford University (including the Smith School of Energy and the Environment), The Royal Society [18], The Schumacher Institute and the UK’s Committee on Climate Change, to name just a few.
How these insights and some materials and data have been used here – including any errors or omissions – are the sole responsibility of Dr Richard Erskine.
The figures used from reports are overlaid in the presentation with annotations using large text to highlight the key messages. Anyone wanting to see the original figures and data is directed via links to the sources.
Niele, Frank (2005), Energy: Engine of Evolution, Shell Global Solutions, 2005
In the text Frank Niele mentions a solar intercept of 170,000 TeraWatt (TW = 1000 GW). This is not the practical maximum for solar power we could harness (and Niele is not saying that, but some people might misread it that way). Due to a number of factors (we would only want to use a small area of land for solar, the efficiency of PVs, etc.) the practical limit is very much less. BUT, even allowing for this, the amount of energy is so massive that we are still left with an enormous potential, that far exceeds the 40 TW requirement. Humanity will need (in his 2050 projection) ’only’ about 1 million square km (or 0.67% of the Earth’s land area). So, in practical terms, there is no ‘functional limit’ in respect of the energy that humanity needs. The calculation backing this up is in Note 16 of my essay Demystifying Global Warming and Its Implications.
Feynman (1969), From an address “What is Science?”, presented at the fifteenth annual meeting of the National Science Teachers Association, in New York City (1966), published in The Physics Teacher, volume 7, issue 6 (1969), p. 313-320
“From 2023 to 2030, we are looking at about an 80% increase in US data center power demand, going from about 19 GW to about 35 GW,” Stephen Oliver, vice president of corporate marketing and investor relations at Navitas Semiconductor, said in an interview. Since total US demand is expected to rise to about 482 GW in 2027 (let’s assume 500 GW by 2030), the 35 GW for data centres is about 7% of the total – significant but hardly existentially large.
Rupert Way was co-author of both the key paper [4] above, and the 2022 paper [5] – which had considerable worldwide coverage – that showed the world could save trillions of dollars if it moved rapidly to scale up renewable technologies such as wind, solar and electrolysers.
The answers below are broadly as given on the night, with a little embellishment in a few cases. Some references have been added to help solidify the points made.
Will there be room for nature in this move to renewables, recognising that we face both an ecological crisis and a climate crisis?
Yes. As I said, the ground mounted solar included in the Oxford paper would require 1% of UK land, but pasture takes up 30% and ground mounted solar can co-exist with grazing sheep for example.
There is a ‘Fallacy of Perfection’ that requires new solutions to be perfect while ignoring the harms of the status quo. Extraction for coal alone in 2021 amounted to around 7,500 million tonnes, whereas “Estimates for the maximum amount of materials we’ll need annually to build low-emissions energy infrastructure top out at about 200 million metric tons, including all the cement, aluminum, steel, and even glass that needs to be produced.” And once built, this level falls away, whereas with fossil fuels we keep on having to extract them. On land use too, renewables are better than fossil fuels if we look at the full life-cycle (extraction through to operation).
But it is also true that in UK we are not always very good at consulting on projects. We do a cursory consultation, then spend a lot of the budget, then start to raise questions on the requirements while construction is in full flight (HS2 was a case in point). Good project practice is to do a thorough consultation that truly listens to and engages with the public and articulates the impacts, costs and benefits of a new project and the status quo, then pilot and prototype to test out proposals, before then proceeding. Politicians are too often led by industrial partners wanting to push ahead without delay. We can build fast, but we do need to build the right assets in the right places for the right reasons.
Too often, there seems to be a belief that nature-based solutions are in conflict with technological ones, but the truth is we need both. For example, nature based approaches to flood alleviation (like SUDS) are needed, but in many cases, engineered ones (like the Thames Barrage) are needed as well. But on decarbonising our energy, technologies like wind, solar and electrolysers are essential, and as we have seen, they leave the great majority of available land area for nature to thrive, if we choose to use it to address the ecological crisis; it’s not renewables stopping us doing it!
With the greater degree of flexible working, particularly following COVID, and also streaming of top shows … will that help to flatten the peaks in demand?
Great thought! That sounds very plausible and I’m tempted to look into the data to see if this is indeed true. The general message is that there are lots of additional ways in which demand can be nudged to help lower peak demand.
Isn’t it a worry that so much comes from China – batteries and the minerals used in them and elsewhere? … use lots of dirty energy …
Yes and no. It’s those twins again. A lot of claims are made about a minerals crisis, but the Seaver Wang paper from last year did a thorough study of this question and concluded that we have more than enough minerals to decarbonise the world’s economies. However, we do need to diversify our supply of minerals and not be over-reliant on China, that is true. We have to manage political risk. As an example of diversifying sources, lithium is now being mined in Cornwall, and Canada can open up its reserves of minerals.
How large a role could community energy play in the energy transition?
That’s an important question. I made the point that renewables have the benefit of being possible at all scales. The more we can have renewables at local scales, the more resilient we are, and the less the risk of power being solely in the hands of centralised conglomerates. Some of the largest wind farms are owned by private companies that aren’t British. So I’d like to see a lot of community energy. How much of a town’s energy could be produced locally will vary a lot according to the location, and may also vary through the seasons; it’s not clear we’re yet in a position to put a number on it or decide what is optimal. However, a town will still need to be connected to the national grid, because it isn’t always windy or sunny in a specific locality. Some assets, like Dinorwig or future hydrogen storage facilities, are national assets for everyone’s benefit. So we need to think of community energy as part of a whole system – giving and taking energy at different times.
You mentioned that there is a majority of people wanting the UK to be more ambitious, so why are some politicians thinking there are votes in delaying action?
What a great question. Hitherto there has been cross-party support: in Parliament there was almost unanimity in the votes for the 2008 Climate Change Act and the 2019 ‘net zero by 2050’ amendment. Unfortunately it seems to have become rather polarised – some trying to claim that there is a conflict between solving current financial issues and investing in the future. But as the 2022 Oxford paper by Rupert Way and others showed, we can actually save lots of money by accelerating the pace of transition to a green future.
Despite claims by the Government that the UK is a leader, we saw in the talk heavy criticisms from the Climate Change Committee and the National Audit Office on the lack of progress in many areas; the UK cannot rest on the laurels of displacing coal. So currently the UK has definitely lost its position of leadership. The country can earn back a position of leadership if politicians grasp the opportunity and stop using the climate as a political football. We need to get back to there being a cross party consensus at least amongst the major parties that will last till 2050, which is 5 Parliaments away.
Could we learn from what Nigeria is doing? They have micro grids and will later bring these together.
Different countries have started from different places. The UK has had large centralised generating capacity and will now need to loosen things up a bit to accommodate a network of resources at different scales. Nigeria is in a sense doing the opposite – having lots of local capacity before bringing it all together. I’m sure we could learn from each other.
Have you noticed a change in attitudes towards the siting of renewables in the last six months? I’m finding many acquaintances have.
Yes, and we are seeing communities embracing solar and wind for their mutual benefits. As the Channel 4 series The Great Climate Fight showed, it is often regressive Government rules and directives that block communities from building what they want (such as a wind turbine on the edge of a village), with just a small minority vetoing progress.
Are there issues with using hydrogen that I’ve heard about?
We have to distinguish various uses of hydrogen. Michael Liebreich has a ‘hydrogen ladder’ showing where hydrogen sensibly can or should be used and where it shouldn’t. It cannot compete with electrification for cars and heating homes, and is now being relegated into relatively few areas. Energy storage is one of those. Another is fertiliser production. And there are also applications in industry. You mentioned Bath University research, so I’ll need to talk with you after to determine what issues you are referring to.
What about heat storage playing a role?
Heat storage is a great idea and there are certainly cases for using it. I am not clear it can displace the need for long-term storage with hydrogen, but I understand it could play an important role. It’s worth stressing that there is also a lot of waste heat around that could be exploited (such as from industry). Warm waste water in our sewers could, when combined with a heat pump, supply heat, and of course there are many existing installations of water-source heat pumps that can heat large buildings; Stroud District Council’s offices in Ebley are a case in point.
[since the talk, the following example from Princeton University has come to my attention. They will be using large heat pumps to extract heat from buildings in summer to keep them cool and store this underground, then use the heat pumps again in winter to use the buried heat to heat buildings in winter. They will create a huge thermal reservoir to achieve this outcome. https://www.princeton.edu/news/2021/11/09/going-deep-princeton-lays-foundation-net-zero-campus]
How are we going to convince people that we need a revolution in energy, especially when there has been conflict over the siting of some renewables, such as the Arlingham solar array? There is a suggestion the UK should build 6-8 GW of solar by 2030, but we need to take people with us.
We have to consult and engage hearts and minds. I don’t think it is simply a case of bribing people with lower bills. The use of Citizens’ Assemblies and other forms of engagement with the community will be key. People need to understand the benefits. A local village hall with rooftop solar, a heat pump, EV charging and a battery can become a place that brings local benefits and also helps to engage hearts and minds.
I love the BBC series ‘In Our Time’ (IOT), conceived by Melvyn Bragg (MB) and hosted by him for over 25 years. The more than 1000 episodes have covered innumerable topics in the arts, history, science, philosophy, politics and much more. Typically three professors, leading experts in a field, are invited to explore the knowledge and scholarship on the topic of the week. Delightful surprises have been its hallmark, covering topics as diverse as ‘Tea’, ‘The Neutron’, ‘The Iliad’ and so much more.
The lives and work of scientists have been covered many times: Robert Hooke, Dorothy Hodgkin and Paul Dirac being a few examples. You might think that the most pressing topic of our age – man-made climate change – would get quite a bit of attention, but it doesn’t. It’s not as if it’s too contemporary for IOT’s tastes, unsuitable for the historical lens that IOT likes to employ: the science of climate change dates back at least 200 years.
The lives of five scientists come to mind that could help explore the huge subject of climate change: John Tyndall, Svante Arrhenius, Guy Callendar, Wally Broecker and Michael Mann are just a small sample. None of them has been covered by IOT. Here’s why each would be a great candidate for an episode:
John Tyndall is regarded as one of the greatest experimentalists of the 19th century, and a great populariser of science. His apparatus – that in the years 1859-1861 demonstrated that carbon dioxide and other gases were heat trapping, but that oxygen and nitrogen were not – can still be seen at The Royal Institution, where he did his experiments. An episode could cover Tyndall or simply be on ‘Greenhouse Gases’ and include a survey of work up to Manabe & Wetherald’s seminal 1967 paper.
Svante Arrhenius, a Nobel Prize-winning scientist, published the first calculation on how much the world would warm if the concentration of carbon dioxide (CO₂) in the atmosphere doubled – in 1896. Again an episode could cover Arrhenius exclusively or deal with the question of ‘Earth Climate Sensitivity’.
Guy Callendar published a paper in 1938 that was the first to demonstrate empirically the correlation between rising levels of CO₂ in the atmosphere (attributable to human activities) and rising global mean surface temperature. Some have even suggested that instead of referring to ‘The Greenhouse Effect’ we should use the term ‘The Callendar Effect’.
Wally Broecker was a famous oceanographer who coined the term ‘The Great Ocean Conveyor’, which moves heat around the oceans of the world, and whose understanding is crucial to climate science. He also coined the term ‘Global Warming’. Broecker said that following the publication of Manabe and Wetherald’s seminal 1967 paper, man-made climate change stopped being mere cocktail conversation amongst scientists and became something increasingly concerning.
Michael Mann et al published the famous ‘Hockey Stick’ paper in 1999 which gathered all the disparate data to demonstrate unequivocally that the world was warming. So powerful in fact that the fossil-fuel funded forces of denial started a vicious campaign to try to discredit Mann. They failed, as the findings have been supported by independent research since.
Needless to say, there are a wealth of women scientists whose work might be considered too recent for IOT, but is often of crucial importance. For example, Friederike Otto’s work on extreme weather attribution has been revolutionary, because now we have the ability to put a number on how much more likely a specific extreme weather event has become as a result of man-made global warming. This can be done in a matter of days rather than the year or more that used to be required for this kind of attribution study (see the World Weather Attribution site for more details). The topic of ‘Extreme weather events’ is assuredly in our time, and increasingly so!
Well, no, because this episode was exceptional in more ways than its rarity.
In every other episode of In Our Time, MB approaches the conversation much like you’d expect of a curious student, trying to learn from the expert professors who he robustly challenges, but respects. The debated points would be ones where experts have engaged in debating a point in the published literature, so disagreements are possible; say, to what extent Rosalind Franklin’s work was key to discovering the structure of DNA. What is not generally entertained on IOT are outlier comments from those who are not experts in the field.
So, the IOT Climate Change episode in 2000 was quite different. Outrageously different. MB approached the conversation not as a curious student, but sounding more like an opinionated journalist with an angle doing an interview, and boy, did he have an angle!
He had a completely different tone from normal, not one of respectful enquiry. He reprised talking points that are rife within climate science denial circles, and even cited Matt Ridley (“no slouch”), a well-known propagandist – a free-market fundamentalist like his father – who engages in constant attacks on climate science and the climate solutions he wishes to undermine.
Leo Hickman noted on Twitter (3-1-2015) “Little known fact: Bragg witnessed GWPF’s Companies House docs for Lord Lawson”, so one is bound to speculate whether it was no accident that MB was channeling the GWPF (Global Warming Policy Foundation) non-science.
It’s easier to see what I mean by listening to the episode, but I will use some snippets from the transcript here to illustrate (MB quotes in italics):
“With me to discuss what could be called “The new climate of fear” at the beginning of a new century is …”, from the off, it was clear that MB was not interested in obvious questions like “how have we come to an understanding of man-made global warming?”. He clearly wanted to frame it in a way that minimised any discussion of the underlying science. He wanted it to be a ‘both sides’ apparent exchange of newspaper comment pages opinion.
After George Monbiot’s first contributions, MB chips in “Now this is very much a received view, and you’ve been one of the people that have made it received by banging on, very effectively in the Guardian and in other places, I’m going to challenge this in a minute or two, but I just want to emphasise to the listeners, how apocalyptic your views are, …” – trying to undermine his guest with a charge of alarmism shocked me 24 years ago and shocks me still. The reason it is ‘received’ Melvyn is because of decades of research, thousands of scientific papers, and resulting IPCC (Intergovernmental Panel on Climate Change) reports, not Monbiot’s writings, however lucid they may be.
MB later pushes harder “Right now, you two have spent….devoted your lives to this subject and I haven’t, but nevertheless, I’ve looked at…tried to find some evidence which contradicts this block view, which seems you’ve got your evidence, but there’s other points of view , and ….’cause I’m worried about the evidence that you can know so much about what’s going to happen in 100 years time, and I’m worried about the lack of robustness …” – but he never asks ‘please help me understand the evidence’. Instead he shares what he has read who knows where – in The Spectator perhaps. This might seem normal on a social media comments thread but is pretty unedifying on the normally rather good In Our Time.
MB says something that is straight from the climate science denial factory at GWPF: “Mmmm, but you…well er…I’m still worried about the evidence for this, the evidence that you….what evidence can you tell us Professor Houghton, that in the next century….’cause all this is to do with man-made pollution isn’t it? That the worry is that this is the Greenhouse Effect, it’s all to do with us emitting too much CO₂, and that sort of thing, can you give us your evidence, for the…why the accumulation of this is going to have such a devastating effect? Because people use extra CO₂ as fertiliser don’t they? To bring crops on?”
The framing, the tone, the denialist talking points – such as ‘carbon dioxide is good for plants, therefore more of it must be good’, which would fail Philosophy 101 even before we get to the scientific demolition of it – all gave the game away.
All of the talking points he raised have been answered innumerable times; he would have known this had he bothered to do genuine background reading from experts on the subject.
There have been other episodes of IOT that have touched on climate since then, such as the ones on ‘Corals’, ‘Ice Ages’ and others, but clearly both Melvyn Bragg and the production team are staying well clear of man-made climate change after their last diabolical attempt.
What motivates MB’s climate denialism is unclear. It is certainly not independent scholarship. The history of our understanding of climate change has been set out clearly many times, such as in Weart’s book (see Notes). Yet, since he is a Labour Peer, the free market fundamentalism that drove Lord Lawson, and that continues to drive much of the funding for climate denial, is unlikely to be the reason. Maybe in some perverse way it’s his faith that took him there – who knows? The fact is he was very poorly read and badly briefed. It has left a large black hole in an otherwise great series, In Our Time, that is surely crying out to be filled.
No doubt an episode entitled ‘Man-Made Climate Change’, or one based on the life and work of the many scientists who have done so much to reveal our understanding of it, will come back as a topic in due course. There is no shortage of topics linked to it that could also be covered (Fossil fuels, Energy transitions, Extreme weather events, Rossby waves, and many others).
Though I suspect it will not be in Melvyn Bragg’s time.
We’ll have to wait for the sad day when the great man moves on.
(c) Richard Erskine, 2024.
———————— o O o ———————–
Notes
I have not made the essay longer still by including the rebuttals to all the talking points raised by MB, but I don’t need to as others have done a great job addressing commonly shared myths. A good place to go for short non-technical responses is Katharine Hayhoe’s ‘Global Weirding’ series of short videos.
The book by Spencer Weart I mentioned is a great historical survey – starting with scientists like Fourier in the early 19th Century – and is available online: The Discovery of Global Warming.
Of course, the most up to date and rigorous evidence on the causes and impacts of climate change, and on the possible scenarios we may face in the future, is contained in the IPCC (Intergovernmental Panel on Climate Change) reports. The latest full assessment being the 6th Assessment Report.
Getting a reliable sense of what the science is telling us can be hard for non-experts, particularly on shouty social media. I always feel we should go back to the established experts. Some summaries can be useful if they do not try to selectively spin the science in a direction to support a particular framing.
The Intergovernmental Panel on Climate Change (IPCC) is an international body whose work is the product of an international team of scientists from over 60 countries who give their time voluntarily to produce in-depth reports. The Sixth Assessment Report (AR6) is the latest full assessment, and covers different aspects – causes, impacts, adaptation and mitigation – both globally and from a regional perspective. One of the reasons people go to secondary sources is the huge size of the IPCC reports. But the IPCC provides summaries. The AR6 report comes in three parts, with summaries as follows:
Part II: Impacts, Adaptation & Vulnerability Report assesses ecosystems, biodiversity, and human communities at global and regional levels. It also reviews vulnerabilities and the capacities and limits of the natural world and human societies to adapt to climate change. An accessible summary is available as a short video: https://youtu.be/SDRxfuEvqGg A written Summary for Policymakers is available here: https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_SummaryForPolicymakers.pdf
If IOT do decide to do a new episode on Climate Change – or more accurately, man-made climate change – they might do well to first re-read Professor Steve Jones’s 2011 report on coverage of climate change at the BBC, and its tendency to use false balance. The report recommended that BBC coverage “takes into account the non‐contentious nature of some material and the need to avoid giving undue attention to marginal opinion” (download the document then skip to page 14 to get to the report itself, avoiding the self-justification by BBC senior management that prefixes it).
Yes, but it only dealt with man-made climate change in the dying few minutes. Richard Corfield, when not talking over the two women scientists with him, was dismissive of the risks. He used an argument that fails Critical Thinking 101, along with Ethics 101, and more.
His gobsmacking words:
“a ‘Greenhouse Climate’ is the natural condition for the Earth. 85% of Earth history has been ‘Greenhouse’ Ummm, 70 million years ago carbon dioxide levels were 8 times what they are at the moment, which made them 2,400 parts per million. Before that they were 12 times higher. The only certainty is that climate change is a natural part of the Earth and as a species we may have been the result of climate change. We may now be altering it but anyhow we’d have to deal with it, so I think we are going to have to geo-engineer our own climate to deal with it. Nothing wrong with that.”
A logically incoherent argument. And it’s not ‘we may now be altering’, we are altering, please read the IPCC reports Richard.
To conflate tens of millions of years with Homo sapiens’ quarter of a million years of existence; or with the 12,000 years in which civilisation has emerged, in the stable climate we have enjoyed alongside nature since the end of the last ice age; or indeed with the 200 years in which man-made carbon emissions have increased CO2 levels at an unprecedentedly fast rate in geological terms, is crass.
The way to stop additional warming is simply to stop burning fossil fuels as soon as possible.
To simply shrug and say that the climate always changes, so we’d have had to do something anyway at some point, is asinine. It also fails to mention that we’d have had tens of thousands of years to deal with it, not the few decades we now have left to do something, precisely because of naysayers like Melvyn Bragg and Richard Corfield.
No wonder Richard Corfield, so dismissive of climate risk, has been on IOT 8 times.
As I discussed in a previous essay Is 2°C a big deal?, we know that as the world warms the chance of extreme weather events will increase markedly. This essay does not revisit that established insight, but is more of a diversion, exploring simple probabilities.
Attribution studies can now routinely provide estimates of how much more probable a particular event has been made as a result of man-made global warming. The World Weather Attribution organisation provides many examples.
There will be impacts on the environment, society and agriculture. Focusing on the latter, sceptics might say, “ok, the chances are increasing, but if we have a crop failure in one region, one year, we have many regions able to compensate.”
The follow-up question that comes to my mind is: if I accept that point, how often will we then have multiple failures in a given year?
There can be some big surprises when one explores probabilities. Bear with me as I tease out a few insights.
A famous example of surprising odds
Imagine there is a public meeting and people arrive one by one. Assume they have random birthdays and we exclude siblings. The question is: how many people need to arrive before there is a greater than evens chance that two of those present share a birthday?
What number do you expect? Think about it.
To answer this it’s easier to start by determining the chance for each arrival to NOT have the same birthday. The 1st arrival has 365 choices out of 365. The 2nd arrival has 364 choices out of 365 to avoid having the same birthday. The 3rd arrival has 363 choices out of 365 to avoid a clash. And so on.
So the probability of 3 arrivals not having the same birthday is (365/365) x (364/365) x (363/365), which equals 0.9918 (rounded). So the chance that at least two of these three have the same birthday must be 1 minus this, which equals 0.0082 – see Note [1]. This is pretty small; about a 1% chance.
If you keep repeating this process, surprisingly one finds we only need 23 people to arrive for the chance of two matching birthdays to be greater than even (ie. greater than 0.5). See table in Note [2].
As you can see from the table, for 10 arrivals the chance of a match is a little over 1 in 10 (about 0.12), but it then rapidly escalates.
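For anyone who wants to check the arithmetic, the whole scheme above can be reproduced in a few lines of Python (a minimal sketch of the method just described; the function name is mine):

```python
def prob_shared_birthday(k):
    """Probability that at least two of k arrivals share a birthday,
    assuming 365 equally likely birthdays and no siblings."""
    p_no_match = 1.0
    for i in range(k):
        # each new arrival must avoid all i earlier birthdays
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match  # the 'one minus' trick of Note [1]

print(prob_shared_birthday(3))   # about 0.0082, roughly a 1% chance
print(prob_shared_birthday(23))  # just over 0.5: 23 people suffice
```

Running it confirms that 22 arrivals are not quite enough, and 23 tips the odds past evens.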
Calculating the chance of extreme weather events without global warming
By extreme weather events I’m not even talking about the current serious flooding in the UK. I’m talking about an event that would take out the arable sector in a large area.
To make this simple and purely as an illustration, I will take the 1,400 million hectares of arable land globally and break this down into 100 blocks, each of 14 million hectares.
Since the UK has 13 million hectares of arable land, the world figure can be thought of as about 100 UKs (of arable land only).
If the chance of an extreme weather event anywhere across the world between 1900 and 1950 was on average 1 in 1000 per year, that in effect defines what level of event we mean by ‘extreme’ for this illustration.
Then, we need to ask the question: what would have been the chance of 2 extreme events occurring in any one year? What about 3?
Let’s first follow a similar but adapted method as with the birthdays.
The chance of NOT having an extreme weather event in the first block is 1 minus (1/1000), which equals 0.999.
Now, the probabilities for each block are assumed to be independent, so the chance of NOT having an extreme weather event in any one year in all blocks is 0.999 x 0.999 x … x 0.999 (with 100 factors), and this equals 0.90479. So a 90% chance of not having an extreme weather event in any of the 100 blocks.
So the chance of having at least one extreme event in any one year across the 100 blocks would be one minus this figure: 1 – 0.90479 = 0.09521, or approximately 0.1, i.e. 1 in 10, or 10%. This is not insignificant. It means that a 1 in 1000 year event will happen, on average, about once every 10 years somewhere on the planet.
In the next section I’ll express the odds in percentage form, rounded to 2 significant figures.
We have gone from a 1 in 1000 chance of an extreme event in one block in one year, to a 1 in 10 chance of at least one extreme weather event across the 100 blocks. A simpler (approximate) way to see this is 100 x (1/1000) = 1/10.
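Using the illustrative figures above, the ‘one minus’ calculation can be checked directly in Python:

```python
p = 1 / 1000   # chance of an extreme event in one block in one year
n = 100        # number of blocks of arable land

p_none = (1 - p) ** n        # no event anywhere in a year: ~0.90479
p_at_least_one = 1 - p_none  # at least one event somewhere: ~0.09521
print(p_none, p_at_least_one)
```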
Moving to multiple extreme events is not so simple.
The basic idea is to visualise the 100 blocks as containers, and the chance of an extreme event as a ball that can be put into a container to indicate an extreme weather event has happened there.
Then, calculating the odds becomes an exercise in counting all the possible combinations.
If there were 2 events in one year, then they could be in the same block (and there are 100 ways for that to happen), or in different blocks (and the chances of that are a little more complex to calculate). In general, we need to work out the odds of distributing X events amongst 100 containers. We do that using something called a ‘binomial expansion’ – see Note [3] if you want to dive into the details.
We can then look at what happens when the chance of any single event changes, due to global warming, from 1 in 1000 to say 1 in 100.
The chance of extreme weather events with global warming
To explore the impact of global warming on the odds, I have used a progression as follows. The average chance of an extreme weather event in any one year, in any one block, was 1 in 1000, but as the world warms it might become a 1 in 100 year event, or worse a 1 in 50 year event, or worse still a 1 in 25 year event. In Note [4] there are details on calculating the odds for up to 10 events per year across the 100 blocks.
The odds for a single event are already changing. The 40°C weather we had in the UK would have been virtually impossible without man-made global warming. But the purpose of this essay is not to make projections or estimates, but simply to illustrate the surprising change in odds that occurs when multiple events are involved.
Here is a summary of how the odds change in our illustrative example:
We see that in the warmest scenario (1 in 25), an extreme weather event is almost certain to happen every year somewhere in the world (98%), and there is also a high probability (77%) of at least 3 events occurring in a single year across the world.
If we have 2 or 3 blocks in the world suffering from extreme weather events and consequent crop failures, then that starts to have a major impact on food supply, which is potentially catastrophic.
What is worrying is how the odds of multiple events can escalate quite fast.
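The scenario numbers quoted above can be reproduced with the binomial formula described in Note [3]. A minimal Python sketch (the function name is mine):

```python
from math import comb

def prob_at_least(k, n, p):
    """Chance of at least k extreme events across n independent blocks
    in one year, each block having annual event probability p."""
    # 'one minus' the chance of fewer than k events
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

for label, p in [("1 in 1000", 1/1000), ("1 in 100", 1/100),
                 ("1 in 50", 1/50), ("1 in 25", 1/25)]:
    print(f"{label}: at least 1 event {prob_at_least(1, 100, p):.0%}, "
          f"at least 3 events {prob_at_least(3, 100, p):.0%}")
```

For the warmest (1 in 25) scenario this reproduces the 98% and 77% figures quoted above.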
So if you have the feeling that more than one extreme event is occurring every year around the world – more frequently than a few decades ago – you are not wrong.
(c) Richard Erskine, 2024
————————————— o o O o o ——————————————
NOTES
These notes are only included for those that wish to check my workings. Thanks in advance for spotting any errors. If you are not interested in the details, you don’t need to read these notes.
[1] The one minus trick
If you pick a card from a normal deck of cards, the chance of pulling the ace of spades is 1 in 52 – as a probability, 0.01923. But there is a 100% chance (a probability of 1) of pulling some card, so the chance of NOT pulling the ace of spades is 1 – 0.01923 = 0.98077 (which is also what you get from the fraction 51/52).
If the probability of an outcome is difficult to calculate, it can sometimes be easier to calculate the probability of not having the outcome, and then use the ‘one minus …’ trick.
So we want the chance of at least one extreme event across 100 blocks. We could try to calculate the chance for 1 event, the chance for 2, then 3, all the way up to 100. The trick is instead to calculate the probability of there being no event across all 100 blocks. Then by taking the resulting probability from one, we get the probability of at least one event occurring.
[2] A famous example of surprising odds
Table calculating the odds:
The Product is calculated by multiplying the successive A/B values. So for 4 arrivals the Product = 1 x 0.9973 x 0.9945 x 0.9918 = 0.9836, which is the probability that none have the same birthday. So the chance of at least two having the same birthday for 4 arrivals = 1 – 0.9836 = 0.0164
[3] Use of the binomial expansion
Let’s assume that the probability of a loss of crops due to an extreme weather event in any one year for any region (because of many possible direct or indirect effects: extended heat wave; flooding; inability to work outside; migration; war) is p, then:
The chance of there NOT being an extreme event in one specific region in any one year is (1-p)
The chance of there NOT being an extreme event ANYWHERE in the world (for all n blocks) in any one year is (1-p) raised to the power n, which is written (1-p)^n
Therefore, the chance of there being at least one extreme event (ie. 1, or 2, or 3, etc.) anywhere in the world, in any one year is 1-(1-p)^n
The probability of exactly k out of n regions being hit by an extreme weather event in any one year is trickier to calculate but can be done using the binomial expansion:
P(k,n) = (n! / (k!(n-k)!)) * p^k * (1-p)^(n-k)
To create a table it is convenient to use a generator (especially if n gets very large, as some spreadsheets will blow up or truncate numbers in an unhelpful way), so, we start with P(1,n):
P(1,n) = n * p * (1-p)^(n-1)
P(2,n) = ((n * (n-1)) / 2) * p^2 * (1-p)^(n-2)
and in general the next term can be calculated from the previous one:
P(m+1,n) = P(m,n) * ((n-m)/(m+1)) * p / (1-p)
This is the formula used in the Table (see Note [4]) for P(2,100), P(3,100), etc.
eg.
P(2,n) = P(1,n) * ((n-1)/2) * p / (1-p)
The sum of P(i,n) from i = 0 to n must be 1
For n=100, the chance of at least 1 event would be P(1,100) + P(2,100) + … + P(100,100).
The chance of at least 2 events would be P(2,100) + P(3,100) + … + P(100,100).
And so on.
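The recurrence above is easy to turn into a small generator in Python (a sketch of the scheme just described; the names are mine). It avoids the huge factorials in the direct formula, which is exactly the problem with spreadsheets noted earlier:

```python
def binomial_terms(n, p, k_max):
    """Return P(0,n) .. P(k_max,n) using the recurrence
    P(m+1,n) = P(m,n) * ((n-m)/(m+1)) * p/(1-p)."""
    terms = [(1 - p) ** n]  # P(0,n): no events anywhere
    for m in range(k_max):
        terms.append(terms[-1] * (n - m) / (m + 1) * p / (1 - p))
    return terms

terms = binomial_terms(100, 1/1000, 10)
print(1 - terms[0])  # chance of at least one event, ~0.0952
```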
[4] Table of probabilities based on binomial expansion
I’ve drafted a suggested Keir Starmer speech, next time Rishi Sunak or his client media attack Labour for their £28 billion per year green investment promise. Instead of being on the defensive, I suggest attack. Over to Keir …
Yet the Prime Minister is deaf to the experts and deaf to popular opinion. He is now fully captive to the climate action delayists – actually climate change deniers – of the increasingly hard right of his Party.
If you want more extreme floods and more extreme heat waves, getting worse every year and,
if you want crop failures around the world spiking food prices and,
if you want petrostates and wars causing energy insecurity and poverty,
then vote Tory.
If you want instead a Government that is not in denial and truly acknowledges the serious risks we face and,
if you want a Government that will accelerate action on climate change by greening our energy, and protecting the ecosystems on which we depend and,
if you want a path to a sustainable future that is fair to all,
then vote Labour.
There is no shortage of myths and memes that attack EVs and Heat Pumps, particularly in the pages of The Telegraph and other right wing outlets. It’s a curious phenomenon, railing against thermodynamics.
There is of course an inevitable transition to a clean, electrified and decarbonised world. The goal of naysayers is not to stop it happening (they are not that silly), merely to delay the inevitable for as long as possible. That’s what the fossil fuel lobby wants to achieve – wringing out as many dollars as they can before the bubble bursts; before assets are stranded.
There are several myths about EVs and Heat Pumps that are widely shared. These have been refuted many times, such as in these plain English pieces on the Nailsworth Climate Action Network website: myths about EVs and myths about heat pumps.
One I hadn’t seen before popped up on my social media timeline. It suggested that if EVs got caught out in a snowdrift, the batteries would get cold, so couldn’t work, and occupants would freeze, whereas those in petrol/diesel cars would be OK with their idling fossil fuel powered engines.
I can imagine The Telegraph readers – fed on a daily diet of hit jobs on any clean tech – chuckling at the idea of EVs freezing up in the snow.
The truth is quite the opposite. This meme is just another lie powering another social media storm; another myth to add to a growing list. Reuters provides a great factcheck refuting the points being shared widely across social media. Reuters quoted Professor David Howey from the University of Oxford’s Department of Engineering Science:
“Electric vehicles use very little power when stationary … the motor doesn’t consume power at zero speed … only the car electronics and heating/cooling systems use power when the car is stationary, and the amounts are relatively small … [and could run climate settings for] at least a day, probably many days”
Dr Katherine Collett was also quoted, saying of EVs that “Many of them are installed with very efficient heating systems nowadays”
But it gets better, because the “very efficient heating system” being referred to is – hold onto your hats – a heat pump. This means that both the car’s battery and the car’s interior are kept snug by a heat pump; and just as for home heating, that means the electrical supply stretches further. A heat pump can turn one unit of electrical energy (in an EV’s case, from the battery) into a few units of heat energy, as explained here. Where a resistive heater would keep you warm for a certain number of hours [1], with a heat pump you could stay warm for roughly three times as long.
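As a rough back-of-envelope sketch (the kilowatt figures here are my assumptions for illustration, not from the Reuters piece), the arithmetic works like this:

```python
battery_kwh = 60.0   # assumed usable battery energy
heat_kw = 3.0        # assumed heat needed to keep the cabin warm
cop = 3.0            # heat pump: ~3 units of heat per unit of electricity

hours_resistive = battery_kwh / heat_kw          # resistive heater
hours_heat_pump = battery_kwh / (heat_kw / cop)  # heat pump draws 1/3 the power
print(hours_resistive, hours_heat_pump)          # 20 hours vs 60 hours
```

Whatever the real numbers, the ratio is simply the heat pump’s coefficient of performance: the same battery stretches roughly three times further.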
The efficiency of EVs and Heat Pumps, and the future-proofing that electrification enables, means that The Future Is Electric.
This will all probably make The Telegraph readers’ heads explode.
Their bêtes noires – EVs and heat pumps – are now conspiring to keep EV drivers snug in snow drifts long after the petrol heads have started to freeze because their fuel has run out.
Stuck in a storm of disinformation about EVs and heat pumps, this is the perfect cautionary tale on what not to believe, for those who have been misled by a right wing propaganda machine. A machine in part funded by fossil fuel interests and in part motivated by misplaced culture wars ideology.
My advice is, don’t get caught in yet another bullshitstorm of disinformation, get off social media and the papers, and hunt down genuine experts. They’re not exactly hard to find.
(c) Richard W. Erskine, 2023
Notes
[1] In one test of an older Tesla that had a resistive heater (so not a heat pump), at about -10°C, it was found that “No surprise, but the Tesla is vastly more efficient, burning 1.6 kWh per hour versus the Hyundai sucking gas at the rate of 10.3 kWh per hour”, and both the Tesla (2019) and the Hyundai were able to maintain a comfortable internal temperature for nearly 2 days.
Summary: The neoclassical economics – the assumptions, methods and data – used by William Nordhaus and other economists in their models (known as IAMs, Integrated Assessment Models) of how the economy interacts with climate change, is flawed. These models rely on narrowly defined data, projected into the future in simplistic ways, that grossly underestimate the likely impacts of global warming on the economy, and in broader terms. These misrepresentations have acted as a fig leaf that has enabled policy-makers and politicians to avoid taking urgent action to reduce carbon emissions, and have therefore already done incalculable harm. It is not too late to base policies on scientifically grounded estimates of future impacts, and to ensure these deliver a fair transition to a net zero future.
A fundamental question
A fundamental question is: how much warming can we realistically tolerate if we are to avoid serious damage? As we will see, economists have hitherto come up with surprisingly high estimates in answer to this question.
I know I am not alone in being both puzzled and angry at the apparent lack of urgency shown by governments to the growing risks of man-made global warming. For the purposes of this essay, let us be generous and assume that these politicians aren’t from the breed of economic liberals who obdurately denigrate climate science for ideological reasons.
We are nonetheless left with something far more insidious and dangerous: mainstream policy-makers and politicians who walk the corridors of Westminster and other centres of power, yet seem quite happy to see new fossil fuel exploration, and who keep putting off urgent plans to transition to a net zero future. They fail to acknowledge that not all paths to net zero are the same [1].
So what is going on?
Nordhaus’s neoclassical economics
One explanation for this lack of urgency is that economists hold much more sway with policy-makers than scientists, and hitherto economists have been telling a quite different story to the one we hear from scientists.
Scientists will say that an average rise in global mean surface temperatures of 4°C or more over a mere century or so would be catastrophic. Scientists will point out that the PETM (Palaeocene-Eocene Thermal Maximum) 56 million years ago was estimated as being a rise of about 5°C (which occurred over a period of several thousand years) [2].
While we do see temperature ranges between the tropics and polar regions that greatly exceed 5°C, it is a misunderstanding (one that infects the work of some economists) to imagine that this should be a source of comfort. Heating the whole world by 5°C involves an enormous amount of energy that has in the past, and would in the future, knock the climate into a completely new state. It would be an end to the relatively stable climate in which humanity and nature have co-evolved and co-existed – and not over several thousand years, but in a mere century. A blink of the eye.
Yet William Nordhaus, whose pioneering work on IAMs won him the Nobel Prize for Economics in 2018, estimates economic damages by the middle of the next century to be just 2.1% with a warming of 3°C and 8.5% with a warming of 6°C [3]. Bear in mind that an 8.5% drop in GDP is less than twice the drop following the 2008 banking crisis (about 6% for the UK).
So, faced with a global temperature rise that will bring mass extinctions, hugely destructive sea level rise, and more, Nordhaus’s economics just gives a shrug! Nothing to see here. Any wonder then that Rishi Sunak is also giving a shrug, and that the UK Treasury and many other arms of Government seem not only relaxed about climate change but in many cases have actively frustrated, and continue to frustrate, the path to net zero?
I should add that Nordhaus himself adds words of warning [3]:
“Because the studies generally included only a subset of all potential impacts, we added an adjustment of 25 percent of quantified damages for omitted sectors and nonmarket and catastrophic damages …”
So, there’s a whole lot we don’t know, or that is not measured well enough, so let’s add a measly 25% to the narrowly circumscribed and quantifiable impacts. Wow! Why not 250%, or 500%?
Other economists have been deeply critical of this approach and of specific aspects of the modelling. Issues include:
the scope of impacts included is quite narrow. A key reference used by Nordhaus acknowledges “As a final conclusion, we emphasize the limited nature of work on impacts.” but that does not seem to stop them publishing [4]
many often questionable parameters are included. For example, extrapolating from current meagre efforts to date “it is assumed that the rate of decarbonization going forward is −1.5 percent per year” [3], yet detailed analysis of existing technology indicates dramatic worldwide savings totalling trillions of dollars by 2030 are achievable with ambitious decarbonisation policies and plans [5].
using the productivity of different regions with current climatic variability, such as the continental USA, as a way to calibrate forward in time, over many decades, the impact of global warming, signals a complete misunderstanding of climate change and is grossly misleading.
discounting damages is based on current data for goods, so grossly underestimates impacts on future generations (the Stern Review had a lot to say about discount rates).
extrapolations from current data are done using simple smooth functions, whereas we can expect discontinuities and abrupt changes as the world warms, in thousands of systems in a myriad of ways (more to say on this below in thresholds and system impacts)
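To see why the discount rate matters so much, here is an illustrative Python sketch (the damage figure and the two rates are my assumptions, chosen merely to be in the neighbourhood of Nordhaus’s higher rate and the Stern Review’s lower one):

```python
def present_value(damage, rate, years):
    """Present value of a future damage under exponential discounting."""
    return damage / (1 + rate) ** years

damage = 1e12  # assumed $1 trillion of climate damages, 100 years out
pv_high = present_value(damage, 0.03, 100)   # higher discount rate (~3%)
pv_low = present_value(damage, 0.014, 100)   # lower discount rate (~1.4%)
print(pv_high, pv_low, pv_low / pv_high)
```

The same future damage weighs nearly five times more heavily in today’s terms under the lower rate, which is why the choice of discount rate dominates the policy conclusions these models produce.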
The UNFCCC (UN Framework Convention on Climate Change) originally set the target for peak global warming to be 2°C, with a supplementary Paris Accord ambition to keep it to 1.5°C, under pressure from the most vulnerable nations. Yet the current policy commitments (the Nationally Determined Contributions) from the parties to the convention would take us currently to between 2.6°C and 4°C according to the UNEP Emissions Gap Report 2022 [6].
A critique of neoclassical economics
Steve Keen has provided a comprehensive and excoriating critique of the work of Nordhaus and others in his paper The appallingly bad neoclassical economics of climate change [7]. I want to pull out a few key observations (quoted snippets) from Keen’s paper, but please study the paper in full:
Nordhaus excludes 87% of US industry from consideration, on the basis that it takes place ‘in carefully controlled environments that will not be directly affected by climate change’
Nordhaus’s list of industries that he simply assumed would be negligibly impacted by climate change is so broad, and so large, that it is obvious that what he meant by ‘not be directly affected by climate change’ is anything that takes place indoors – or, indeed, underground, since he includes mining as one of the unaffected sectors (more to say on this below in thresholds and system impacts).
If you then assume that this same relationship between GDP and temperature will apply as global temperatures rise with Global Warming, you will conclude that Global Warming will have a trivial impact on global GDP. Your assumption is your conclusion.
Given this extreme divergence of opinion between economists and scientists, one might imagine that Nordhaus’s next survey would examine the reasons for it. In fact, the opposite applied: his methodology excluded non-economists entirely.
There is thus no empirical or scientific justification for choosing a quadratic to represent damages from climate change – the opposite in fact applies. Regardless, this is the function that Nordhaus ultimately adopted.
As with the decision to exclude ∼90% of GDP from damages from climate change, Tol’s assumed equivalence of weather changes across space with climate change over time ignores the role of energy in causing climate change.
What Mohaddes called ‘rare disaster events’ – such as, for example, the complete disappearance of the Arctic Ice sheet during summer – would indeed be rare at our current global temperature. But they become certainties as the temperature rises another 3°C.
The numerical estimates to which they fitted their inappropriate models are, as shown here, utterly unrelated to the phenomenon of global warming. Even an appropriate model of the relationship between climate change and GDP would return garbage predictions if it were calibrated on ‘data’ like this.
Deeply problematic, as Keen points out:
The impact of these economists goes beyond merely advising governments, to actually writing the economic components of the formal reports by the IPCC (‘Intergovernmental Panel On Climate Change’)
and he concludes:
That work this bad has been done, and been taken seriously, is therefore not merely an intellectual travesty like the Sokal hoax. If climate change does lead to the catastrophic outcomes that some scientists now openly contemplate … then these Neoclassical economists will be complicit in causing the greatest crisis, not merely in the history of capitalism, but potentially in the history of life on Earth.
Beyond Nordhaus
It seems that Nordhaus and others have been able to pursue their approach because they had free rein for too long in what seemed to be a relatively niche field (when compared with the majority of climate change related research). It would be easy to shrug concerns off as a squabble amongst academics in an immature field of research.
Wrong! The issues are not of mere academic interest but have real-world consequences for policies and climate actions being undertaken (or rather, not undertaken) by governments and industry. For example, a recent paper by Rennert et al [7] on the social cost (SC) of carbon dioxide (CO₂) notes:
For more than a decade, the US government has used the SC-CO2 to measure the benefits of reducing carbon dioxide emissions in its required regulatory analysis of more than 60 finalized, economically significant regulations, including standards for appliance energy efficiency and vehicle and power plant emissions.
and this paper arrives at a social cost for carbon dioxide that is 3.6 times the value currently used by the US Government (that is, 260% higher), leading to their conclusion that:
Our higher SC-CO2 values, compared with estimates currently used in policy evaluation, substantially increase the estimated benefits of greenhouse gas mitigation and thereby increase the expected net benefits of more stringent climate policies.
In plain English: we urgently need to stop burning fossil fuels.
Other papers are now amending IAMs to be more realistic. One paper titled Persistent inequality in economically optimal climate policies [8] notes:
The re-calibrated models have shown that the Paris agreement targets might be economically optimal under standard benefit-cost analysis.
These researchers are, however, concerned at the narrow global cost-benefit approach. They take a more detailed look at the differences between and within countries when it comes to the fairness of future pathways. They find that the economic response to climate change will vary greatly depending on the level of cooperation between countries. The conclusions are curiously both optimistic and depressing:
Results indicate that without international cooperation, global temperature rises, though less than in commonly-used reference scenarios. Cooperation stabilizes temperature within the Paris goals (1.80°C [1.53°C–2.31°C] in 2100). Nevertheless, economic inequality persists: the ratio between top and bottom income deciles is 117% higher than without climate change impacts, even for economically optimal pathways.
So better modelling indicates we can dial back on the ludicrous Nordhaus ‘optimal’ warming estimates to something closer to the UNFCCC’s 2°C, as a target that policy-makers and politicians should take seriously. The bad news is that the well known deep inequalities that exist in how climate change plays out will not be remedied merely by staying within the 2°C limit.
Other things must happen – in the solutions that are adopted and how these are implemented within countries and across regions – to ensure that inequalities are not perpetuated or even widened.
Thresholds and system impacts – nature
I want to illustrate the idiocy of the approach taken by Nordhaus and others who use IAMs to project implausibly minimal impacts resulting from 3°C to 6°C of global warming.
It speaks not merely to the lack of an appreciation of how systems work in the real world, but also, a complete absence of imagination.
Systems thinking has recently become a buzzword in some UK Government departments, but it is not obviously reaching the parts of the Treasury or Number 10 Downing Street where decisions are made.
We don’t need one of the much talked about major tipping points (e.g. loss of Arctic Sea Ice) to suffer extremely severe consequences from global heating.
When talking to young people about climate change there is a story I like to tell that helps illustrate how a small change in global mean surface temperature can have a big impact, and it concerns the Pied Flycatcher. This is a picture I created when talking to Primary School children (picture me also holding a globe at the same time to show the migratory paths).
This is what I say:
“The Pied Flycatcher flies from Africa to northern Europe just in time to nest so it can feed its hatchlings on the caterpillars of the Winter Moth, which in turn feed on the leaves of oak trees.
But the oak trees have been coming into leaf a few weeks earlier, due to global warming, and the moths have adapted.
But the Pied Flycatcher in Africa is unaware of this, so it arrives at its normal time of the year only to find that caterpillars to feed its hatchlings are scarce.
Fewer hatchlings survive to make the return journey to Africa later in the season, so their numbers decrease.
This has all happened owing to only a small change in the temperature, because life-cycles of the bird and moth that worked together have now been disrupted.”
It is not hard to see that the link (the red arrow) between these separate life-cycles has been broken and has thereby disrupted the system as a whole. There has been a severe decline in their numbers as a result of this ecological dislocation [9], and this (at the time) with less than 1°C of global warming.
In general, nature can adapt to changing climate, but within limits and only at certain ‘speeds’. A species of plant that likes cold conditions might migrate further up a mountain as the climate warms, but eventually it will run out of mountain!
With global warming now being so fast, nature cannot fully adapt or evolve to the changes being wrought.
Thousands of such ecological (and indeed physical and societal) thresholds have been crossed and will be crossed.
Thresholds and system impacts – human society
Let’s move to another example of how climate change is already having an impact – and in this case with industry.
Last summer there was a drought in Europe. Politico reported [10]:
Water levels on the Rhine, Europe’s major inland river connecting mega-ports at Rotterdam and Antwerp to Germany’s industrial heartland and landlocked Switzerland, are precipitously low …
That’s a pressing problem for major industries, but it also puts a damper on EU plans to increase the movement of goods along waterways by 25 percent by 2030 and by 50 percent by 2050 …
So those factories that are ignored by Nordhaus because they are indoors find that their raw materials struggle to get in and their produce struggles to get out when the Rhine dries up. Not exactly a rocket-science insight! Note here the complex picture of potential harmful feedbacks:
global warming causes an extreme weather event (a widespread drought)
the drought causes low waters in the Rhine
the low water in the Rhine adversely impacts the passage of material and thereby the manufacturing sector
mitigation steps would require more land transport, leading to greater net emissions (but could not completely replace the tonnage provided by shipping)
greater net emissions increase the risk of extreme weather events.
It really isn’t hard for even young children to work this out (I’ve had conversations along these lines with 12 year olds) but apparently too hard for some economists.
One economist surveyed by Nordhaus, Larry Summers, replied to one question: ’For my answer, the existence value [of species] is irrelevant – I don’t care about ants except for drugs’. I guess no one has told him that insects pollinate plants and are therefore an essential part of life on Earth, including human life, and hence our economy. It starkly illustrates the recklessly narrow scope of climate impacts considered by some economists.
Conclusion
For too long, policy-makers and politicians in the UK, USA and elsewhere have been able to justify their inaction on climate because economists like William Nordhaus have been telling them that a warming globe will have essentially only marginal impacts on the economy. A 2015 survey of the social cost of carbon used by countries [11] found a number of countries using an average 2014 price of $56/tCO₂, similar to the USA (rising to $115 in 2050). The UK is actively reviewing how it puts a cost on carbon, as in a January 2023 paper by BEIS [12].
Nordhaus would now say that he is calling for early mitigation, at least on a precautionary basis, but that is a bit like calling the fire brigade when the fire is already well established.
It is time for policy and action to be based on science, systems thinking, and a just transition, rather than some approaches and models that are well past their sell-by date.
George Box quipped that ‘All Models are wrong but some are useful’.
In the form that William Nordhaus and others have developed IAMs over the last few decades, a better aphorism to use might be:
‘All models are wrong, and some are dangerously misleading’.
Thankfully, if belatedly, other economists have been teaming with climate scientists, and challenging and improving the models. Models are needed, and can be useful, to guide our thinking. There is still much work to do. We need to be able to ask ‘what if …?’ type questions to see what the future might look like with different assumptions, and drive ambitious policies.
It’s now time for policy-makers and politicians to recognise the need to radically review their policies and actions, based on the best available approaches and models.
Nordhaus, William D., and Andrew Moffat, 2017 A Survey of Global Impacts of Climate Change: Replication, Survey Methods, and a Statistical Analysis, National Bureau of Economic Research (NBER) Working Paper 23646, https://www.nber.org/papers/w23646
Both, C., Visser, M. 2001 Adjustment to climate change is constrained by arrival date in a long-distance migrant bird. Nature 411, 296–298 (2001). DOI 10.1038/35077063. https://www.nature.com/articles/35077063#citeas
Smith, S. and N. Braathen, 2015 Monetary Carbon Values in Policy Appraisal: An Overview of Current Practice and Key Issues, OECD Environment Working Papers, No. 92, OECD Publishing, Paris, https://doi.org/10.1787/5jrs8st3ngvh-en.
Valuation of energy use and greenhouse gas (GHG) emissions: Supplementary guidance to the HM Treasury Green Book on Appraisal and Evaluation in Central Government, January 2023, Department of Business, Energy and Industrial Strategy.
If the experience of a good friend of mine with British Gas is anything to go by, then the answer to this question is a definite no! Two and a half months after a team of subcontractors arrived, and 15 return visits later, my friend still doesn’t have a properly operating system!
Nevertheless, I welcome the fact that British Gas want to offer householders the option to install a heat pump, which was launched in 2022. And the commitments provided are reassuring:
We know changing to a new kind of heating might seem like a big move, so our Warm Home Promise is there to give you total peace of mind.
Our engineers will only install a heat pump if we’re confident it’ll heat your home as well as a traditional boiler.
We’ll design your new heating system to reach the right temperature for your home. And if it doesn’t, we’ll come to put things right – or give you your money back.
Is my friend’s experience a one-off, or evidence of a deeper issue with how the outsourcing is operating? I don’t know, but British Gas need to urgently determine the answer to this question (and also help my friend!).
As a strong advocate of heat pumps and as someone who is frustrated at the disinformation that surrounds them, pushed by fossil fuel interests, my concerns in this case have nothing to do with the technology or its capability to fulfil its promise. It is a question of how best to scale up capacity.
But let’s wind back a bit and consider the broader question of how large companies use sub-contractors, and outsourcing in general.
Why do they do it?
The risks and failures of subcontracting
Companies often resort to outsourcing because they lack either the skills or capacity to deliver a service, particularly when they are new entrants into a market they want to penetrate. The logic is often that they do not have the time to immediately meet the demand, so seek the support of other companies to fulfil this demand.
Large providers with an existing customer base have the power to make an attractive offer, but often jump the gun, and go to market before they truly have the capability to fulfil the latent demand. I’ve seen this many times over the years in different sectors.
In some of the largest IT projects I have witnessed, especially with Government procurement, big companies win the main contract on the basis that they have the financial muscle to lead, but know they do not have the specialist skills, so they sub-contract to medium sized companies with spare capacity.
Often, these medium sized companies also lack the specific skills or capacity for the projects in play, so they too subcontract to those with genuine expertise in the new technologies, even sole traders. Via this process of successive subcontracting, there is a dilution of accountability. The ones who end up on site discover that there has been a mismatch between client expectations and the resources assembled to deliver the project.
Why don’t the main contractors do the obvious thing and train up their staff to deliver the new stuff? A very good question, and one that has always puzzled me.
Sometimes it is due to organisational inertia. Imagine a building company that has spent several decades delivering standard British build homes. There is a whole industry behind the standard model. It takes changes in many aspects of the business operations – supply chains, basic skills, and much more – for them to move to something different, such as European style modular, 2050 ready, house building.
The same is true of a large company that has spent years delivering gas boiler installations, now wanting to start delivering heat pump systems. The pressure to create a sales pipeline will often trump the concerns of the engineers wanting to create a solid new delivery model. Inevitably, companies end up trying to run before they can walk.
Impact of failed subcontracting
My friend is very ‘Green’ in everything she does. She wanted a heat pump to replace her boiler. If she’d asked me, I’d have recommended a few questions to ask any potential supplier (see below).
She opted for British Gas because they were her existing supplier and their website made reassuring claims.
The promise was to arrive late in July this year and finish the installation within 3 or 4 days. It is now mid October (two and a half months later) and, after 15 visits by the sub-contractor and British Gas, the system is still not working.
It is obviously not the fault of the technology (they deployed well established products), but the lack of competence and experience of the staff the subcontractor deployed. Because my friend’s contract is with British Gas, the issue is 100% with them, and they acknowledge that.
Questions for British Gas and other outsourcers
These are my questions for British Gas:
How do they recruit subcontractors?
How do they ensure their subcontractors are competent, both in heat pump installation in general and in the specific product configurations preferred by British Gas?
What project management and oversight do they provide to ensure effective delivery?
If there are issues, how effective is their ability to escalate matters, to ensure timely resolution?
On the basis of my friend’s experience, the answers to all these questions are really disappointing.
So will outsourcing turbo-charge the roll-out of heat pumps, by British Gas and other large companies wanting to get into the market?
My genuine belief is that it will not.
The small and medium-sized enterprises (SMEs) that know how to do it are already maxed out, so the only option that outsourcing can provide is to go with companies that do not know how to do it, but claim they can.
A better model in my view for expanding the capacity of the heat pump delivery market is to replicate existing successful SMEs.
Companies like British Gas cannot build a business by outsourcing to subcontractors who lack competence and experience; instead, they need to properly skill up their own workforce and have systems in place to ensure they achieve effective delivery.
Meanwhile, as we enter winter, my friend is still being let down, not by the technology or its ability to do the job, but by incompetence.
What a delivery process looks like
Delivering a heat pump to replace a boiler is really not that complex, but as with any technology, it requires genuine experience and a proven delivery process, not just classroom training. A mitred butt joint is quite a basic carpentry skill, but you can’t just show an apprentice some PowerPoint slides and expect them to pull it off well and start being a frame maker. Practice makes perfect.
A plumber experienced in fitting gas boilers and radiators will have many transferable skills. However, a sole trader plumber will rarely be able to make the transition to a sole trader heat pump installer. If they’ve never worked with heat pumps, don’t expect that minimal training will ensure successful delivery, especially given that many of the most specialised tasks – the electrical and digital setup of the heat pump – are not plumbing tasks at all.
Installing heat pumps really needs a team with a variety of skills, and I believe this is the only way to properly scale up the installation of heat pumps as I wrote about here.
I would only ever consider a company that is focused on installing heat pumps, and has successful projects it can reference, to install a heat pump in my house. I followed my own advice and it proved to be a good decision.
I want the people who turn up on site to be co-workers, employed by the same company, not an assortment of contractors who are strangers to each other, with no shared company ethos and ways of working.
That is why out-sourcing so often delivers poor results for the customer, be it software, house building or the installation of a heat pump system. It leads to the dilution of accountability, and really no assurance in the quality of practice that actually turns up on the day.
Following this essay I’ve produced A Guideline for Householders Considering a Heat Pump.
I really want British Gas to make a successful switch from installing gas boilers to installing heat pumps, but they really do have to consider how they do this.
If they choose in the short term to outsource work, they will need to ensure that there is true competence and experience in the teams that are deployed in their name.
How they ensure the quality of those teams is up to them, but it is essential they do.
A Guideline for Householders Considering a Heat Pump
Here is my guideline for householders considering signing a contract with an air source heat pump installer.
1. Getting an understanding of what is possible
It’s really important to move beyond the misinformation and received wisdom attached to heat pumps.
The essential point to understand is that if a building can be heated by a gas boiler, it can be heated by a heat pump. But of course, in both cases, the unit needs to be sufficiently powerful and the rest of the plumbing set up and sized correctly too.
It will mean that some things change though. For efficiency reasons it is best to use a ‘flow temperature’ as low as possible (that is the temperature of the water flowing through your radiators). This is actually true whatever the heating source.
A room that needs to be heated to 21°C does not need a flow temperature of 70°C or more. The latest UK building regulations for new builds mean that whatever the heating system used, the flow temperature should not exceed 55°C. So plumbers will need to learn how to implement ‘low temperature’ systems, whether with gas boilers or heat pumps.
Often 35°C is enough, and even in mid winter, 50°C is the maximum required in the most challenging of settings. The 3 key factors that determine the flow temperature are: the external temperature; how well insulated the house is; and the surface area of the ‘emitters’ (radiators or underfloor heating):
The colder it is outside, the harder a heating system has to work to achieve the same result, but a well designed heat pump system will normally ensure that the flow temperature never needs to be more than 50°C, even when it is -5°C outside.
The fabric of the building determines how fast the building loses heat, but it is a myth to say that old buildings cannot be heated by a heat pump or need ‘deep retrofit’ before they can. This case study and others cited in the essay prove otherwise.
The larger the surface area of your radiators, the lower the flow temperature required (whether it be a gas boiler or heat pump). By moving from a single panelled radiator to a double panelled one with fins, the effective surface area is increased greatly, without the wall space of the radiator increasing at all (the height and width of the radiator unchanged even if it gets a bit fatter). The same is true moving from a double panelled to a triple panelled radiator, when this might be needed (which is not as common as is believed).
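The first of these factors is what a heat pump’s weather-compensation control automates: it raises the flow temperature as it gets colder outside. Here is a minimal sketch of a linear compensation curve. The set points are illustrative assumptions only (50°C flow at -5°C outside, as discussed above, and an assumed 30°C floor in mild weather); real controllers have their own, often adjustable, curves:

```python
def flow_temperature(outside_c, design_outside_c=-5.0, design_flow_c=50.0,
                     mild_outside_c=15.0, min_flow_c=30.0):
    """Illustrative linear weather-compensation curve.

    The colder it is outside, the higher the flow temperature,
    capped at the design flow (assumed 50C at -5C outside) and
    floored at an assumed minimum of 30C in mild weather.
    """
    if outside_c <= design_outside_c:
        return design_flow_c
    if outside_c >= mild_outside_c:
        return min_flow_c
    # Linear interpolation between the mild and design set points.
    frac = (mild_outside_c - outside_c) / (mild_outside_c - design_outside_c)
    return min_flow_c + frac * (design_flow_c - min_flow_c)

print(flow_temperature(-5.0))   # 50.0 (design day)
print(flow_temperature(5.0))    # 40.0 (part way along the curve)
print(flow_temperature(15.0))   # 30.0 (mild weather floor)
```

The point of the sketch is simply that the system never works harder than the outside conditions require, which is why running a heat pump at a fixed high flow temperature wastes efficiency.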
The system will need sufficiently sized pipework and flow rates to move the heat needed around the house, but that should be part of the assessment that a supplier makes. In many cases, no changes to pipework are required.
In terms of system operation, householders will not have the fast heating up of the house twice a day as they do with a gas boiler. Instead the heat pump stays on for longer, and the house does not go through big swings in temperature. In many cases, the system is set up to stay on 24/7 but set back a few degrees overnight.
2. Get a proper assessment done
A reputable company providing installation services for a heat pump should do a full room by room assessment of your home and check various things we have already mentioned: the state of the plumbing; existing radiators; space requirements inside and outside for the kit required; etc. A householder will typically have to pay for this report but be offered a refund if they go with this supplier.
Other things the assessment will cover are the electricity and water supply. In some homes that have not been upgraded for many decades, remedial work might be required, for example, to carry out work on the mains supply, but this is often unnecessary. They might recommend that an electrician does a ‘load survey’ before the project proceeds (in my larger than average old house I was worried that our 80A mains fuse might be too low, but it turned out to be fine).
I would recommend getting at least two assessments from different suppliers. If their assessments are not similar (e.g. in terms of the size of heat pump required and costs), then you need to understand why. Also, ask for references in the locality – and take up an offer to visit these. Consider choosing a supplier that is not too far away, as they need to be able to pop back to fix any issues post installation.
A potential installer should produce a professional report (delivered electronically as a PDF) that should be quite detailed, including the following:
the assumed nominal coldest day of the year used as a basis for the design including maximum heat loss calculations (e.g. external temp of -3°C);
confirmation or otherwise that the design uses MCS standards for target room temperatures (21°C, 18°C and 22°C for living spaces, bedrooms and bathrooms, respectively);
an estimate of the heating power requirements (in kW) for each room when heat loss is at a maximum;
the total heating power requirement (in kW) for the house as a whole when heat loss is at a maximum;
the maximum flow temperature for the radiators;
the total expected heat delivered by the system over a year (in kWh);
the estimated Seasonal Coefficient Of Performance (SCOP) for the system over a typical year;
confirmation that the system design and products used will be specified to include metering that will enable – over any period – electrical usage (kWh) and heat delivered (kWh) to be viewed (via an App) (without this, the householder is unable to ascertain how efficiently the system is actually operating);
the capacity of the water tank, based on the usage for the potential occupancy of the house.
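To give a feel for the arithmetic behind the room-by-room heat-loss figures in such a report, here is a minimal sketch of a steady-state fabric heat-loss calculation. The areas and U-values below are invented purely for illustration; a real MCS assessment also accounts for ventilation losses, thermal bridging and more:

```python
def room_heat_loss_kw(elements, inside_c, design_outside_c=-3.0):
    """Steady-state fabric heat loss for one room, in kW.

    elements: list of (area_m2, u_value_W_per_m2K) pairs for the
    walls, windows, ceiling, etc.  Loss = sum(U * A) * delta-T.
    """
    delta_t = inside_c - design_outside_c
    watts = sum(area * u for area, u in elements) * delta_t
    return watts / 1000.0

# Invented fabric elements for a living room: (area m2, U-value W/m2K).
living_room = [
    (12.0, 0.30),  # external walls
    (4.0, 1.40),   # double-glazed windows
    (20.0, 0.18),  # ceiling below an insulated loft
]

# MCS target of 21C for living spaces, and a -3C design day as above.
print(round(room_heat_loss_kw(living_room, inside_c=21.0), 2))  # 0.31
```

Summing the per-room figures gives the whole-house requirement in kW, which is what sizes the heat pump.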
You may say ‘Oh, but I’m happy with 19°C in my living room’ or ‘our children have left home, so can we get away with a smaller hot water tank?’, but remember that the system needs to be fit for the house, and future occupants when and if you move on.
The steps required before the installer can come on site to install the heat pump system will vary a little, depending on the householder’s situation (e.g. if local authority permissions are required). This list is illustrative of the preparatory steps that need to be ticked off:
Local authority approvals ✔︎
Electrical ‘load survey’ ✔︎
Water pressure checks ✔︎
Checks on pipework sizing in the house ✔︎
Loft cleared and improved insulation installed ✔︎
Draughts in windows and doors sorted out ✔︎
EPC certificate ✔︎
Government grant conditions met ✔︎
Check by installers of suitability of locations for external equipment ✔︎
and checks on space inside for internal equipment ✔︎
3. Project execution
There should be someone who is the point of contact for sorting out issues, acting as project manager (PM).
The team should arrive onsite with all the equipment and gear they need that matches the design. That includes the heat pump, new hot water cylinder, any new radiators, pipework, valves, etc. If there is a lot of going back and forth to local suppliers of kit then that suggests a poorly organised team.
A typical installation process would be as follows:
Introduction to installation team, and logistics agreed (e.g. access times)
Old gas boiler and tank removed
Existing pipes and radiators flushed
Heat pump (external) and other kit (internal) moved into position
Plumbing in of kit (heat pump, hot water tank, etc.)
Plumbing in any new radiators specified
Heat pump connected to electrical power
Control system installed (the ‘brains’ of the system)
One wireless thermostat placed in living room, set to 21°C
Various control setups completed
Thermostat setback configured, e.g. by 3°C to 18°C between 10pm and 6am
Weather compensation setup
System put into operation
Radiators ‘balanced’ to ensure optimal heat distribution (after that, householders should avoid fiddling with radiator TRVs)
Metering installed if not inbuilt for flow/return heat and electricity usage
Certificates produced for MCS compliance
The householder should be briefed on the system setup, specifically:
the optimal location for the thermostat (this is typically a single one placed in the living room and set to 21°C; if the system is designed and installed properly, all rooms will achieve their target temperature when the living room reaches its target – no need for separate controls in every room)
the operating regime over a 24-hour cycle, e.g. any setback used overnight
that weather compensation is in operation, as per good practice and to maximise efficiency (so the system only works as hard as it needs to, and when it is warmer outside the flow temperature is automatically lowered).
The householder should be briefed on how they can use a console or App to ascertain:
current state of operation (e.g. flow temperature, tank temperature, mode of operation (space or water))
electricity usage by the heat pump over a given period
heat delivered by system over a given period
the coefficient of performance (COP) over a given period (e.g. day, week, month or year) (which is the ratio of the heat delivered and the electricity usage)
Over a period of a year, the COP is termed the Seasonal COP, and in a properly designed modern system, it should be at least 3. In my old house we achieved 3.3 last year. In a modern well insulated house, even better results (higher SCOPs) are achievable.
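The COP arithmetic is simple enough to sketch. The annual meter readings below are invented, chosen only to reproduce a SCOP of 3.3 like the one quoted above:

```python
def cop(heat_delivered_kwh, electricity_used_kwh):
    """Coefficient of performance: heat out divided by electricity in."""
    return heat_delivered_kwh / electricity_used_kwh

# Invented annual meter readings (kWh), purely for illustration.
annual_heat_kwh = 13_200.0
annual_electricity_kwh = 4_000.0

# Over a full year the same ratio is the Seasonal COP (SCOP).
scop = cop(annual_heat_kwh, annual_electricity_kwh)
print(scop)  # 3.3
```

In other words, for every unit of electricity the heat pump consumed over the year, it delivered 3.3 units of heat to the house.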
In addition, there is just one task, in my experience, that a householder needs to be briefed on: checking the water pressure in the system. With old plumbing, air locks can be created even after the system was set up correctly, and this can lead to slight drops in pressure. A simple process is used to re-establish the correct pressure, and the householder should have received the simple instructions on how to do this (I had to do this twice in the first month, but not since).
4. Post Project service
The team should of course leave the site having completed all the tasks, ensured the system is operating correctly, and that everything is tidied away. If radiators have been replaced then the old ones will be removed, unless the householder wishes to retain them for any reason (perhaps some are not so old and can be used in another project elsewhere).
Within a few days of the installation, the company / project manager should make a call to check that everything is OK.
The company should have already offered to carry out the annual service and indicate the costs for doing so (typically £150 or less). This service will check the status of the refrigerant in the heat pump, and other tasks. It is important that annual checks are carried out (just as they are for any heating system).
The company should respond promptly if it appears that the system is behaving badly, e.g. if the COP is less than 2 for an extended period.
In a well designed and installed system, the householder can simply let the heat pump do its thing and will not need to do anything. Fiddling with the system console or radiators is not required, and is to be discouraged. Just leave it alone.
But it is important to periodically check how the system is performing. I would suggest doing this weekly at first, but then monthly once you are convinced that the system is operating well. After some time you may choose to do it only every quarter. I would recommend that you always do an annual check on performance. When in the future you possibly come to sell the house, being able to quote the SCOP achieved for successive years will provide reassurance to any buyer.
If this is true globally, it is even more true for an advanced economy like the UK.
Our Government will no doubt claim we can use unproven carbon capture at scale, or deploy dodgy carbon accounting (we are world leaders in that at least), to claim they can still get to Net Zero, or ‘Not Zero’ as we should be calling it.
Rosebank is not about lowering bills and energy security as the Government claims, for several reasons:
Every tonne of carbon dioxide we emit increases the risks of extreme weather events, crop failures and health hazards – that damages everyone’s security
Most of the Rosebank output (oil) will be sold on international markets with UK consumers getting no preferential treatment in terms of bills or energy security
Renewable energy costs have dropped massively and the UK has the capability to max out wind and solar
Electrification will enable huge improvement in energy efficiency in transport and heating
Electrification is future proofing – it can take electricity from any source (wind, wave, solar photovoltaic, etc) and at different scales (your roof, community energy, North Sea)
Mixed signals create market confusion that only delays the transition to a secure and healthy post-fossil-fuel future, and harms our ability to show ambition and leadership
The IPCC says that every year matters, every tonne of carbon matters, every action matters. We need our Government to listen to the great majority of UK citizens that say they want action.
My interest in this question derives from many years working with large organisations internationally in both the public and private sector, who strive to improve their curation and uptake of institutional knowledge by the experts within the organisation. So, I am not thinking in terms of the general use of tools such as ChatGPT as simply a better Google, for use on the internet.
AI and Jobs
Will Artificial Intelligence (AI) tools make people redundant?
Some people, undoubtedly, particularly if they are doing mundane desk-based jobs collating information or giving advice that is based on a body of accepted information. In some cases, the jobs may not be regarded as mundane, but may be subject to procedures that are relatively easy to emulate.
But will ChatGPT or subsequent AI tools replace experts? It depends a little on what we mean by an expert.
Imagine a community centre where experts are paid to offer advice to people on their entitlement to benefits. One can well imagine an AI tool very quickly replacing the basic role of synthesising the information needed to help someone know their rights.
Is this enough? I don’t think so. Context is crucial.
The person’s situation may be complicated by multiple factors including complex domestic relationships, mental health problems, access to transport, and many others. The adviser will need to display emotional intelligence not only knowledge of the current state of legislation. It may be that in being able to get support in checking knowledge of the latest legislation on benefits, the adviser can spend more quality time with someone who is in desperate need of support.
In this example AI would not be replacing a human, but would be helping redefine the role of the human, to increase their true value to society.
AI as an on-demand Wikipedia without editors
In the context of knowledge in general, the AI models emerging today are being seen as ‘oracles’, able to dispense accurate and well-sourced summary answers to complex questions.
However, it is worth noting that AI models such as ChatGPT have to feed off source material. Who chooses the scope of this source material? If the material is of varying quality and provenance, what rules are embedded in the AI to help it decide on the level of trust it gives to different sources?
When such tools try to synthesise knowledge from public resources on the internet they confront the very real issue that much of what is written on a subject is copied; and often without being checked first. Even trusted actors can fall foul of using poorly researched information and unwittingly give it credibility by propagating it.
If you ask Google ‘when did the Titanic sink?’ you’ll get the basic answer pretty quickly. An AI tool does not do much more: it will refer to the collision with an iceberg and breached compartments taking on water.
If I ask the question ‘why did the Titanic sink?’, things get more interesting, and the AI tool can do better than Google. However, it may not anticipate the 5 whys that inevitably result. Why was it going too fast, and so on.
I had to experiment with one AI tool to get the right question in order to get to the primary cause I was interested in:
What were the company pressures that influenced the captain of the Titanic to maintain high speed despite the risk of iceberg collisions?
The answer I received was:
There were allegations that Captain Edward J Smith was under pressure to maintain high speed despite the risk of iceberg collisions. It was claimed that he ordered to ‘make a record crossing’ thus indirectly causing the collision with the iceberg. However, it is also suggested that the Titanic hit an iceberg in 1912 because of a basic steering error, and only sank as fast as it did because an official persuaded the captain to continue sailing.
I still feel I would need a historian with expertise in this area to really delve into the history. Having a very smart on-demand dictionary is great, but is Wikipedia-without-editors (as some may see it) really enough? What about the intentions that lie behind a question or the specific context that it is being raised in?
A historian might conceivably point me to diaries written by the captain long before the ship sailed, and only recently discovered – and are not fully digitised – that reveal the pressures he felt under to go as fast as possible.
AI and complex questions of science and history
Let’s try a much more difficult question.
When was the science settled that the world is warming as a result of man-made emissions of greenhouse gases?
I got the following answer from an AI tool (a ChatGPT derivative):
The science of climate change has been studied for over a century. In the late 19th century, scientists first argued that human emissions of greenhouse gases could change Earth’s energy balance and climate1. The United Nations Environment Programme (UNEP) was formed in 1972 as a result of growing concern about the impact of human activities on the environment. The Intergovernmental Panel on Climate Change (IPCC) was established in 1988 by the World Meteorological Organization and the United Nations Environment Programme to provide scientific information on climate change. The IPCC Third Assessment Report in 2001 found “new and stronger evidence” that humanity’s emissions of greenhouse gases are the main cause of the warming seen in the second half of the 20th Century.
Not a bad answer, but not one I feel would satisfy an expert climate scientist.
This is what Spencer Weart writes in The Discovery of Global Warming, referring to the seminal 1967 paper by Manabe and Wetherald:
This was the first time a greenhouse warming computation included enough of the essential factors, in particular the effects of water vapour, to seem plausible to experts. Wallace Broecker, who would later play a major role in climate change studies, recalled that it was the 1967 paper “that convinced me that this was a thing to worry about.” Another scientist called it “arguably the greatest climate-science paper of all time,” for it “essentially settled the debate on whether carbon dioxide causes global warming.” Experts in a 2015 poll agreed, naming it as the “most influential” of all climate change papers.
So I went back to the AI tool, to ask the question more precisely, and anticipating something closer to what I expected:
When were climate scientists finally convinced of the greenhouse gas theory for global warming on Earth?
I got the following answer:
The greenhouse gas theory for global warming on Earth was first proposed by Swedish scientist Svante Arrhenius in 1896. In his 1938 study, he showed that humans had already emitted enough CO2 into the atmosphere to increase the average temperature on Earth and that the Earth was responding: Over the previous 50 years, the average temperature had indeed increased as per Arrhenius’s proposed greenhouse effect. Many other theories of climate change were advanced, involving forces from volcanism to solar variation. In the 1960s, the evidence for the warming effect of carbon dioxide gas became increasingly convincing.
By around 2000, many climate scientists were convinced that the rise in temperature was likely to continue as greenhouse gases accumulated. Some predicted that an unprecedented global warming would become apparent.
Let’s forgive the fact that the tool confused Callendar with Arrhenius when referring to the seminal 1938 paper (it was Callendar who first showed empirical evidence of man-made emissions causing global warming), or the suggestion that Arrhenius was the first to understand the carbon dioxide driven greenhouse effect (Tyndall certainly did, much earlier, but Arrhenius was the first to calculate its magnitude).
The bigger surprise was that the AI tool did not cite or allude to the 1967 Manabe and Wetherald paper.
I am sure that new versions of ChatGPT and other tools will fix such problems when answering complex and nuanced questions.
AI in an organisational knowledge context
There is however a more fundamental issue with the use of such tools within organisations – whether public or private – that are aiming to become learning organisations and develop a body of trusted knowledge.
Chinese walls, confidentiality and other factors mean we cannot simply let rip with open access to the information held in organisations. And while quantity is useful to AI it is much less important to an expert than the quality of the information and insight being parsed.
Let’s consider a scenario.
A multi-national consulting engineering company has done thousands of projects around the world. It partners with diverse international and local companies – experts in specific disciplines such as new materials, acoustics, carbon accounting, and much more – in design, project management and construction.
On the one hand, the consulting company wants its intellectual property respected and, in many cases, kept confidential. Clients and partners want the same for their contributions to projects. A complex Venn diagram emerges of private and shared information, and of the insights (knowledge) that emerge from these experiences. Document management systems are used to apply both open-access and need-to-know policies, often at quite a granular level.
Documents never get printed and left on trains, because people who need access to certain collections of information by virtue of their role get it – by design. Documents that need to be retained and never unintentionally lost, never are – by design. This is just basic content management good practice – notwithstanding the inability of Governments and many companies to apply these 20th Century capabilities effectively.
The issue for AI is that it would need to navigate these complex access rights when providing answers to questions. The same question would have to give different answers to different people, even within the same organisation, if Chinese walls are not to be breached. This is the Achilles’ heel of AI if it is to be commercialised in an institutional setting.
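The mechanism involved can be sketched very simply. In this minimal illustration (all names and roles are hypothetical, not any particular product’s API), the candidate documents are filtered against the asking user’s access rights before anything reaches the AI model, so the same query legitimately yields different answer sets for different people:

```python
# A sketch of permission-aware retrieval: the corpus is filtered by the
# user's roles BEFORE any document is passed to an AI model, so Chinese
# walls are respected by construction. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)  # need-to-know list

@dataclass
class User:
    name: str
    roles: set

def retrieve(corpus, user, query):
    """Return only the documents this user is cleared to see that match the query."""
    visible = [d for d in corpus if d.allowed_roles & user.roles]
    return [d for d in visible if query.lower() in d.text.lower()]

corpus = [
    Document("D1", "Acoustics report for the Shanghai concert hall", {"acoustics", "partners"}),
    Document("D2", "Confidential fee structure for the Shanghai concert hall", {"finance"}),
]

engineer = User("Ana", {"acoustics"})
accountant = User("Ben", {"finance"})

# The same query gives each user a different answer set.
print([d.doc_id for d in retrieve(corpus, engineer, "shanghai")])    # ['D1']
print([d.doc_id for d in retrieve(corpus, accountant, "shanghai")])  # ['D2']
```

In a real institutional deployment this filtering would sit inside the retrieval layer that feeds the model, and the access rules would be far more granular, but the principle is the same: the AI can only synthesise from what the asker is entitled to see.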
I am grateful to a relative (Jon Hayter) for making the following observation:
Isaac Asimov clearly gave some serious thought to this when he wrote “I, Robot”
At one point, when the hero is speaking to a holographic version of his deceased mentor, the programme gives him information but can only answer specific questions. When he makes a statement based on his own reasoning, the hologram says “That, is the right question”
On the other hand, the consulting organisation also wants to parade their experience and say that it uses its unique collective know-how on past projects in the conduct of new ones. This is in part through the tacit knowledge of their expert employees, as well as the codified experience within the organisation (guidelines, technique papers, anonymised project summaries, etc.) embodied in the lingua franca of knowledge: documents.
Resolving this tension between confidentiality and reuse is part of the art of working in complex organisations, and especially in consulting.
It begs the question as to the source set of information that an AI tool can or should use to answer queries like:
We’ve been asked to design and manage the construction of a new theatre in north east China that will be a showcase for regional Chinese culture, and an exemplar of sustainable construction and operation. What projects should we learn from and who would we ideally partner with?
Financial constraints, unrealistic expectations, political interference and resulting scope creep will be at least as important as innovative design and engineering, and all have to be factored into the answer.
Much of what is most useful as source material will be the tacit knowledge that is often not written down, and by definition, unparseable. This is gold dust.
To counter the ‘not written down’ issue, some organisations conduct informal review interviews and workshops at the end of each project to tease out insights. For those enlightened consultancies that actually make time to do this, these reviews would aim to provide not only an overview of what was done (the what), but also why it was done that way.
Those candid reflections; those serendipitous encounters; those lightbulb moments – none of which appeared in the project file – might be scribbled in notebooks or might surface in those informal post-project reviews. Sometimes it has to wait till the exit interview or even retirement (to save the blushes)!
As things stand, only true experts can navigate the intersection between technical know-how, personal testimony, historical and current context, emotional factors, politics, deep insights, and much more, that explains the whys and wherefores of key decisions on complex endeavours.
Would ChatGPT conjure up a Sydney Opera House design out of the blue if nothing remotely similar existed beforehand?
You know the answer.
That does not mean that the AI of the future cannot play a role as an assistant in these endeavours – taking on some of the mundane tasks that exist in the curation of information and even knowledge.
For example, in the business of applying subject-specific metadata based on controlled vocabularies, AI could certainly prove a powerful assistant by making time-consuming tasks more efficient, if not quite a complete replacement for the expert knowledge curator.
However, I am confident that for the foreseeable future, it will not replace the true expert within an organisation.
Update a: A survey by Nesta UK of heat pump users, published 23 May 2023, finds high levels of satisfaction with heat pumps, see Note [7]
Update b: Following requests from some readers for more information on the data, an Appendix has been added to provide details of the data collection, analysis of the COP and SCOP achieved, and a comparison of running costs pre and post the heat pump installation.
Update c: Added a note on 26th Sept. 2025 about the Catapult study final report (Dec. 2024) relating variations in housing types and also use of high temperature heat pumps, see Note [8].
Here is a plot spoiler: my wife and I are delighted with the results of living in our listed Cotswold stone home heated by an air-source heat pump since December 2021. I want to share our story as a corrective to the belief, widely expressed in the media, that it would be impossible to do what we did: self-evidently, this is untrue.
In case you want answers to the burning questions I often hear, I’ve collected a few (see Note [1]).
Here’s our story.
Beginnings
Twenty-five years ago my wife and I bought an old Grade II Listed property with friends and split it in two. It was a bargain, ignoring the subsequent years of work! Renovations of sash windows and shutters, valley gutter lead work, lime mortar of the end terrace, and so much more, followed over the years.
It turned out the party wall used to be an external wall when the property was first built in 1805, but about a decade later the mill-owner who acquired it wanted something much more substantial. He extended the small cottage outwards and upwards into what was advertised later as a “capital messuage”. Not a property I’d ever imagined owning, but somehow we found ourselves as custodians of this beautiful property.
When we arrived there was a higgledy-piggledy array of partitioned rooms, created when the property was used as a care home prior to its closure. We set about restoring it to its former Georgian glory. But there is always something to do on an old house, and the journey continues. That’s why it always amuses me when people talk about ‘retrofit’ as if it is some fast and easy project, because from my point of view, maintaining the fabric of a property is always a process not an event; a process that never really ends.
We couldn’t find any way to split the existing water, gas and electricity utilities in two, externally or internally. So we had to start again with new services, routed in at the back of the property so as not to impact the Georgian front elevation. Both households were essentially starting with a blank canvas and put in completely new plumbing and gas boilers. We called our plumber ‘Danny the ferret’ because of his ability to get into impossible spaces and never leave a mess. The small new gas boiler could heat our newly separated property in less than an hour; like a Ferrari, able to go from 0 to 60 in 5 seconds.
Little did we know then, but Danny did such a great job on the pipework and radiators that when last year we had an air-source heat pump fitted, the plumbing through the house turned out to be fit for purpose (big enough pipes in the right places). No new copper piping was required except to connect the new equipment, and only one third of the double-panel (with fins) radiators he fitted needed replacing with slightly fatter triple-panel radiators.
And those that were upsized (like the one illustrated) were a bit fatter but the same height and width, so they fitted into their positions without any need to change the plumbing. Upsizing increases the surface area of the radiator, allowing a lower flow temperature to still heat the room to its target temperature.
Exploring opinions on heat pumps
While I am no laggard, I’m also generally not an ‘early adopter’ of anything, even when I believe it’s the right thing to do. It took me a decade longer than most to switch from a film to a digital camera because I was unconvinced they were good enough. I tend to prefer others to learn the lessons and pass them on. I also like it when there is the inevitable reduction in costs as a market matures a little. So I did finally get a digital camera, 20 years ago now, and it’s been great, but I was hardly an advocate for ‘new tech’.
My old friend Chris runs a successful business, Yorkshire Energy Systems (YES), installing solar and heat pump systems (amongst others), and had already been giving me an education on heat pumps. I thought we should consider one when we had to replace our existing system, but again, I was in no rush, I needed time to explore the subject.
I invited Chris down to Nailsworth to give a talk to the local climate group I helped run (Nailsworth Climate Action Network), and also invited leaders of local political parties of all persuasions from the Stroud Valleys, including the then MP for Stroud, David Drew for Labour, and the prospective Conservative candidate, Siobhan Baillie (who in a subsequent election became our MP).
The talk went down extremely well – Chris is a brilliant speaker.
Chris later gave another talk for us online during Covid. This also went down well and was recorded, then posted on our website: What if we could all heat our buildings with renewable technologies? It addresses most of the myths you hear about heat pumps, and a number of people have told me how it helped make up their mind, and go for a heat pump.
I have a good natural science background, so I was not in any doubt about the soundness of the 19th century physics that underpin the workings of a heat pump, which we all blithely rely on every day (most people have a fridge, which pumps heat from inside the fridge to its outside). But understanding the science is rarely enough, even for ex-scientists!
While in one ear Chris was telling me that any building that can be heated with a gas boiler can be heated with a heat pump, in the other ear, other friends, including experts in insulating old buildings, told me that ‘deep retrofit’ was an essential precursor to heating an old building with a heat pump, especially an air-source heat pump. Doubts crept in.
It’s barely believable that this sea water has enough heat to warm anything, it’s pretty chilly at this time of year, but yet, thanks to an extraordinary technology called a heat exchanger, it’s the sea that’s going to heat this house.
But cold water contains enormous amounts of energy. Thermal energy is just the jostling of molecules. So the water Roger Harrabin was feeling, say at 7°C, is a sweltering 280 Kelvin (on the absolute temperature scale), with no shortage of jostling molecules from which energy can be extracted. Similarly for air. If you stand in front of an air-source heat pump the air blown out is colder than the surrounding air by a few degrees, because the heat pump has extracted thermal energy from it (using the same tech that a fridge uses to extract thermal energy from anything you put in the fridge). This is more than enough to heat a home.
Yet journalists and commentators have continued to assert that heat pumps can’t heat older buildings without substantial insulation work. Our national broadcaster (the BBC) is culpable, despite its duty to “inform, educate and entertain”. Most recently, during the 8th March 2023 episode of Jeremy Vine’s BBC Radio 2 programme, the host expressed dismay that a radiator was lukewarm. What hope is there when a key programme is failing to “inform and educate” in this way? A hint: it is the thermometer in the room that tells you how warm the room is, not any preconception about how hot a radiator should be.
Roger Harrabin, Jeremy Vine and many others in the media do a great disservice by sharing their incredulity, and repeating unfounded beliefs about heat pumps.
Why did we decide to go for it?
For us there was a stark choice we faced that forced us to stop vacillating, and certainly stop listening to the naysayers:
Firstly, the 25-year-old gas boiler was creaking and behaving a bit oddly even after a service. There was no ‘ripping out’ of the kind the tabloids and others emotively campaign against, merely the natural end of life of a boiler that could no longer be serviced to stay in operation much longer.
Secondly, I had seen the data from the Energy Saving Trust, and it was quite clear that getting off using gas to heat our home and replacing it with a heat pump would massively reduce our annual carbon footprint (as I wrote about in Are Air Source Heat Pumps (ASHPs) a Silver Bullet?). This is true even after allowing for an electricity grid that was not yet fully powered by wind, solar and other alternatives to fossil fuels. As a family who wanted to do their bit to reduce the impact of our household on global warming, this was a pivotal moment. If we didn’t take the plunge now, we’d be locking in a high carbon footprint for another 20 or so years with a new gas boiler.
Thirdly, the idea that we must carry out extensive insulation (as The Retrofit Academy claims) before even considering a heat pump was simply impractical. Even if we could have afforded the eye-wateringly expensive costs of ‘deep retrofit’, Listed Building approval would not have been forthcoming. Imagine wrapping this Georgian splendour in external wall insulation, or removing coving and panelling to somehow fit internal wall insulation.
Finally, at the time, the domestic Renewable Heat Incentive (RHI) was still available but due to close at the end of March 2022, so we needed to get a heat pump system in by then if we were to benefit from the scheme.
Going for it – plans and preparations
So we made the decision in July 2021 to ‘go for it!’. I prepared an outline plan, checklist, questions, etc. We wanted to do it right first time.
I prepared a plan of the house showing the floor area and volume of each room, existing radiators, etc. We bought seven digital thermometers to distribute around the house to ‘benchmark’ how it behaved with the existing gas boiler. I did calculations using current bills and ‘what if’ estimates of future gas and electricity prices to determine whether we would break even on running costs. I remember saying to my wife “gas prices are likely to go up faster than electricity prices sometime in the future. So while our running costs may struggle to compete with cheap gas today, with the current ratio of unit costs, that’ll change sooner or later as the grid gets greener”. Little did I know then that a combination of Putin and a super efficient heat pump would make it sooner, not later.
The biggest challenge was our living room with its large bay window – a thing of beauty but also a significant challenge in terms of heat loss. When the sun is up, even in winter, the room receives a lot of extra warmth (so-called ‘solar gain’), but when the sun goes down, we have to use shutters to reduce heat loss. Not ideal, but sometimes compromises have to be made.
The project forced us to finally clear out the loft. About 100 boxes from our two daughters and us needed sorting, many unsorted from when we had moved in 1998! After several dusty visits to the loft we cleared it, cleaned it, then beefed up the insulation with 300mm depth of Knauf insulation rolls.
The first thing to be grateful to the heat pump for is that it forced us to clear the loft when in our late 60s, rather than having to face it in a decade or two when our knees will have gone. It was such a relief, I can’t tell you.
We had already done what we could to reduce draughts. Our beautiful sash windows could not be replaced even if we’d wanted to. When we had a major servicing job done on them (by a local firm, Simply Sash Windows, with expertise in historic buildings), we had discreet brushes fitted that paid back immediately with a significant reduction in draughts. (When our local climate group ran a ‘retrofit fair’, we included short video snippets on our website to explore options, such as draught proofing.)
For other homes and householders there may be good reasons for carrying out more extensive insulation and other ‘retrofit’ measures. However, bear in mind that if the objective is to get off gas, and thereby reduce your carbon footprint significantly, you really should leave enough in your budget to finance a heat pump system. I discussed this in an essay Insulate Britain: Yes, but by how much? that has attracted a fair amount of mostly positive attention.
For building regulation reasons when we took on the property, we’d had fire strips with brushes fitted around existing internal doors, and it turns out this was very helpful in reducing the turnover of air in the house, and so reducing heat loss.
We contacted two companies specialising in heat pumps. My old friend Chris’s company YES in Yorkshire, and a local company CEG (Cotswold Energy Group) which was recommended to me by a customer of theirs in Nailsworth. Both firms have a proven track record of installing high quality systems (my research assured me).
We had approval from the Listed Building Officer in August 2021 to proceed, based on outline plans I submitted for locating the external unit well out of sight at the rear of the property. The new services we got installed 25 years ago ran up the back of the building and into a boiler room that could be repurposed for the internal units required for the new system. We had a few lucky breaks like that. The new hot water tank would just fit. Another lucky break.
The surveying of the house is a crucial stage in the process, and ought to be whatever heating system is installed. Overall, these were the checkpoints I ticked off on the project prior to installation:
Listed Building approval ✔︎
Electrical ‘load survey’ and mains fuse; existing 80 Amps would be fine ✔︎
Water pressure checks ✔︎
Sizing of pipework in the house ✔︎
Cleared loft and installed new beefed up insulation ✔︎
Sorted out draughts in windows and doors ✔︎
EPC certificate ✔︎
RHI requirements met ✔︎
Check by installers of suitability of platform outside for external units ✔︎
Check space inside for internal units ✔︎
We were now ready to proceed and made our requirements very clear. For example, I said I was not interested in a hybrid system with gas as a backup; that seemed a bit like buying a petrol car in 1910 and having towing arms for a horse, just in case. A few people tried that, but it didn’t catch on.
The instructions were clear: size the system properly to meet our requirements and those of any subsequent owner (including one who might have 4 teenage children who shower a lot).
The two quotes were very similar in terms of proposed design and costs. In the end, we felt that everything else being equal, going local swung it, in case we needed any call outs to sort out teething issues and future servicing. Chris felt this was an important factor to be taken into consideration.
System design
With a big enough heat pump you can heat any building that a gas boiler can heat. Geneva City Hall has been heated by a water-source heat pump since 1928, and a large air-source heat pump at Hillpark Drive, Glasgow has been heating 350 homes since 2017.
The system design followed MCS standards (Microgeneration Certification Scheme), which stipulate that the system should be able to achieve at least 21°C in living rooms, 18°C in hallways and bedrooms, and 22°C in bathrooms; and be able to do this on the nominal coldest day of the year for your location. For our system and location, that meant that the heat pump itself, and the radiators, were sized to achieve these targets with a flow temperature of 50°C when it was -1.6°C outside. The ‘flow temperature’ refers to the temperature of the heated water pumped to the radiators.
It is crucial that the assessment is done room by room. If just one large room has an undersized radiator, that room may not reach its desired temperature at the expected flow temperature, undermining expectations through no fault of the heat source itself. In our case, only a third of the radiators needed upsizing.
The heat pump itself must be able to achieve the peak heating demand for the whole house.
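The room-by-room logic can be illustrated with a small worked sketch. The figures below are hypothetical, not our survey data; the one piece of standard convention used is that radiators are rated at a 50°C difference between mean water and room temperature, with output falling roughly as (ΔT/50)^1.3 at lower flow temperatures:

```python
# Illustrative room-by-room sizing check (all room figures are made up).
# A radiator's rated output assumes a 50 C difference between mean water
# temperature and room temperature; at a lower flow temperature its output
# falls roughly as (dT/50)**1.3, and it must still cover the room's heat loss.

def radiator_output(rated_w, mean_water_c, room_c, n=1.3):
    """Radiator output (watts) at a given mean water and room temperature."""
    dt = mean_water_c - room_c
    return rated_w * (dt / 50.0) ** n

# Design condition: 50 C flow with roughly a 45 C return -> mean water ~47.5 C
mean_water = 47.5

rooms = [
    # (name, target temp C, heat loss W, rated radiator output W)
    ("living room", 21, 1800, 4500),
    ("bedroom",     18,  900, 2000),
]

for name, target, loss, rated in rooms:
    available = radiator_output(rated, mean_water, target)
    verdict = "OK" if available >= loss else "undersized"
    print(f"{name}: {available:.0f} W available vs {loss} W needed -> {verdict}")
```

The point of the exercise: at a heat-pump-friendly 50°C flow temperature a radiator delivers well under half its rated output, which is why some radiators (a third, in our case) need upsizing even when the pipework is already fine.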
Because the largest heat pump available at the time was just short of the peak demand estimated for our home, we ended up with a so-called ‘cascade’ system, with two smaller heat pumps working in parallel, in conjunction with a buffer tank. In total, it had more peak power than we needed, but not so much as to cause an issue.
The ‘cascade’ arrangement (actually two units in parallel, not in series) turned out to have the benefit that when it was comparatively warm outside, only one of the units needed to be in operation at any time. The clever electronics made sure that each one shared the work equally over the year.
The only issue was that both firms were maxed out with work because others like us were trying to get things done before the end of the Renewable Heat Incentive (RHI). In addition the Covid pandemic had disrupted supply chains for heat pumps, like everything else, so kit was also scarce.
The installation
Consequently, the installation was finally done at the start of December 2021, taking just over a week. It was a larger than normal system for a larger than normal house, but the principles are the same, whatever the property.
The twin Mitsubishi Ecodans were placed on a platform we created outside the old boiler room:
The new internal setup looked as follows:
You may think that this looks complicated, but it is not a complexity that you need to deal with or even to understand, any more than you need to understand how a modern car works. In many ways, the increase in sophistication of systems (be they cars or heat pumps) reflects their ability to be ‘easier to drive’, extremely efficient, environmentally aware, and with very little in the way of maintenance to worry about. Hardware and software combining forces!
The installation process went as follows:
Introduction to installation team, and logistics agreed
Old gas boiler and tank removed
Existing pipes and radiators flushed
Heat pump and other kit moved into position
Plumbing in of kit (heat pump, hot water tank, etc.)
Plumbing in the few new radiators required
Heat pump connected to electrical power
Control system installed
One wireless thermostat placed in living room, set to 21°C
Various control setups completed
Thermostat setback of 3°C (to 18°C) between 10pm and 6am
Weather compensation setup
System put into operation
Radiators ‘balanced’ to ensure optimal heat distribution
Metering installed for flow/return heat and electricity usage
Certificates produced for MCS compliance
In addition to the assessor/designer who did the design, we had two plumbers who did the physical work and ‘plumbing in’ of the new kit, followed by an electrician who did the setting up of the heat pump (see ‘The Team’, Note [3]).
As it turned out, only on a few very cold days (in practice, a minimum of -5°C) did the flow temperature ever get as high as 50°C (the maximum design ‘flow temperature’ for our system). The flow temperature is best kept as low as possible, while still doing its job (see Note [4]).
The results
So how did things turn out?
We get the space heating we need and plenty of hot water. Our hot showers have never been better. We have bills that of course have gone up due to the energy crisis, but less than they would have done if we’d stayed on gas.
A key measure of the success of a heat pump installation is the performance it actually achieves in practice – as opposed to some published figure that assumes an idealised situation.
For any heating system one needs to think of the performance of the whole system. In addition to the heating system, there are the radiators and fabric of the building, because it is all these together that determine how efficiently rooms are maintained at a desired temperature.
For our heat pump, performance was assessed using the ‘seasonal coefficient of performance’ (or SCOP), which is the heat energy delivered during the year (in kilowatt-hours (kWh)) divided by the electrical energy used by the heat pump (in kWh).
For our system, the SCOP achieved in the year to March 2023 was 3.3, or 330% in percentage terms; pretty impressive I feel. That means that for every 1 kWh of electricity we put in, we get 3.3 kWh of heat out of the heat pump (2.3 kWh of this is harvested from the ambient air). By comparison, the old gas boiler was only 72% efficient, meaning that for every 1 kWh of primary energy in the gas put in we got 0.72 kWh of heat out.
This means that even though the unit electricity price is greater than the unit gas price by a factor of over 3, this is more than offset by the relative performance of the new heat pump compared to the boiler it replaced. So our running costs are less by comparison.
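For readers who like to see the arithmetic, here is a minimal sketch of the comparison in Python (using the SCOP and boiler efficiency figures quoted above):

```python
# Heat delivered per kWh of purchased energy: heat pump vs old gas boiler.
# Figures from the text: SCOP 3.3 for the heat pump, 72% for the boiler.

def heat_delivered(purchased_kwh: float, efficiency: float) -> float:
    """Heat output for a given energy input and efficiency (or COP)."""
    return purchased_kwh * efficiency

heat_pump_out = heat_delivered(1.0, 3.3)   # 3.3 kWh of heat per kWh of electricity
boiler_out = heat_delivered(1.0, 0.72)     # 0.72 kWh of heat per kWh of gas

# Of the 3.3 kWh, 1 kWh is the electricity itself; the remainder is
# harvested from the ambient air.
harvested = heat_pump_out - 1.0            # 2.3 kWh
```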
This result is totally at odds with the naysayers we hear incessantly in the media, who claim that heat pumps are difficult to install, won’t work on older buildings without substantial insulation measures, or will cost a fortune to run. None of this received wisdom has been true in our case.
The project has not identified any particular type or age of property that cannot have a successful heat pump installation. The suggestion that there are particular home archetypes in Britain that are “unsuitable” for heat pumps is not supported by project experience and data.
So I don’t believe our experience is in any sense exceptional or to be treated as anecdotal. Rather, it demonstrates that even in a building that is ‘hard to treat’, as the experts would call it, there is no reason an air-source heat pump cannot be successfully installed and operated, with a little care and preparation.
I hope that our story provides an illuminating corrective to the media naysayers, and others who should know better.
If it gives encouragement to those wishing to get off gas for whatever reason, it will have done its job.
If you are doing it to lower your carbon footprint, then whatever your lifestyle, it will be one of the most impactful decisions you will ever make in your lifetime.
Q. So can you really heat a large old house with an air-source heat pump, without ‘deep’ retrofit?
A. Yes.
Q. But does it cost more to run than if you’d stuck with a gas boiler?
A. No, comparatively it costs less. Of course, gas and electricity unit prices have both gone up, but gas proportionally more, and the efficiency of the heat pump trumps the differential in electricity-to-gas unit price. See ‘Will my heating bill increase if I get a heat pump?’, another essay I have written, where I show this in detail.
Q. But it’s really expensive to install surely?
A. It costs less than a new kitchen that is mostly MDF and air, and yet we all seem happy to pay for a new kitchen that the next owner of the house will quite possibly “rip out”. The heat pump will probably last for 25 years and will over its lifetime save you more in carbon emissions than anything else: for a typical householder, at least 3 tonnes of carbon dioxide per year compared to gas with the electricity grid as at present (and even more as the grid gets greener). Octopus Energy are creating a market for low-cost heat pump installation that may meet the needs of the majority of householders, if not outlier cases such as mine.
Q. But can we go green using biogas?
A. Burning stuff is so last century! And as Prof. David MacKay said, if you have some gas, it is much more efficient to send it to a gas turbine to create electricity to power a heat pump in your home than to burn the gas in your home. The same would be true of biogas, an idea that has been heavily criticised by independent expert Dr Richard Lowes, and that flies in the face of the clear recommendations of the Climate Change Committee, see Note [6].
Q. Should we hold on for hydrogen boilers?
A. No. Using ‘green hydrogen’ made from excess renewable electricity to burn in our homes would require 6 times as many wind turbines as simply sending the electricity directly to homes to power heat pumps. Hydrogen will be in demand in hard-to-decarbonise sectors, including fertiliser production. Anyone who gets a hydrogen boiler will be locked into expensive hydrogen. The Climate Change Committee expects hydrogen to play only a niche role in heating (see Note [6]).
Q. Did you have to change all your radiators, and pipework?
A. No. The pipework did not need changing and only a third of the radiators needed fattening a little (from 2-panel to 3-panel); their width and height were unchanged.
Q. Is it noisy?
A. Not at all, even standing close by. If birds are singing you can’t hear it at all.
Q. Can it provide hot (tap) water?
A. Yes. It does that in shortish bursts, and because the hot water tank is under mains pressure, the showers are now much better.
Q. Does it use an immersion heater some of the time?
A. Not for day-to-day water heating. It does use an immersion heater periodically to boost the tank’s temperature from 50°C (the target temperature, achieved with the heat pump) to 60°C, the temperature required to kill Legionella bacteria; no point flogging the heat pump for this little job. Dealing with the risk of Legionella in this way is a requirement under current regulations (even though the sealed nature of the system makes it an extremely low risk). In any typical month, the Legionella ‘purge’ occurs only for a few hours overnight every two weeks. It has very little impact on the measured performance or the heating bills.
Q. But my plumber said you can’t get the water hot enough?
A. Untrue. Who wants hot water from a tap at more than the new building regulation limit of 48°C, which a heat pump can easily achieve? Why would we want young or older relatives and visitors to scald themselves?
Q. My plumber also said that radiators never get hot enough?
A. Your plumber is mistaken. Again, the new building regulations (whatever the source of heat) are that the ‘flow’ temperature should not exceed 55°C. If your room needs to get to 21°C, say, then (depending on the external temperature) you may only need a flow temperature of 35°C, 40°C or 50°C (on the coldest days) to get the room to 21°C. Not as fast as if the flow temperature was 75°C, but just as assuredly, and more cost effectively. Your skin temperature is say 34°C, so a radiator at 40°C or even 50°C might feel lukewarm, but that is irrelevant. It’s what the thermometer on the wall says that counts. Don’t touch, look!
Q. Was the project disruptive?
A. Not the fitting of the heat pump, which took just over a week. For us, the most challenging thing was clearing the loft, in order to increase the loft insulation to modern standards, and get the grant.
Q. Do you have smart controls around the house?
A. No. A well designed system (whatever the source), which includes properly sized radiators and weather compensation, will always ensure that if the single thermostat in your living room is reading the target temperature there, the other rooms will also be at their target temperature. No fiddling with controls in rooms or zones. By keeping it simple in this way, we actually can forget we have a heat pump at all. It works reliably, without us ever having to fiddle with any dials or thermostats. We just let it do its thing.
You will have many more questions no doubt, but these are typical of the ones people raise with me, so I hope the answers have proved illuminating.
[2] History of heat and heat pumps
19th Century scientists developed the ‘kinetic theory of heat’.
They established that ‘heat’ in substances was no more than the jostling of molecules, and the higher the temperature, the greater the average speed of their jostling. Not only that, but this jostling only ceases at ‘absolute zero’, which is -273.15 degrees on the Celsius scale. So the water Roger Harrabin was feeling, say at 7°C, was a sweltering 280 Kelvin (Kelvin being the name subsequently given to this new scale). No shortage of jostling molecules from which energy can be extracted. Winter air at -5°C passing through an air source heat pump? No problem – it is a balmy 268 Kelvin.
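The Kelvin conversions above are just an offset from Celsius, easily checked:

```python
# Convert Celsius to Kelvin: the Kelvin scale starts at absolute zero,
# which is -273.15 °C.

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

sea_water = celsius_to_kelvin(7)    # the water Harrabin felt: ~280 K
winter_air = celsius_to_kelvin(-5)  # cold winter air: ~268 K
```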
The first working compression–expansion refrigerator (or heat pump) was built by Jacob Perkins in the 1830s. William Thomson (Lord Kelvin) first proposed using heat pumps for space heating in 1852.
For those interested in learning more about the history, Boltzmann’s Atom: The Great Debate That Launched a Revolution in Physics is a great biography of one of the principal scientists who shaped this theory, and who fought a long battle with naysayers such as Mach, who couldn’t bring himself to believe in the existence of atoms.
[3] The Team, for our project overall were:
Senior manager, who provided oversight at every stage to ensure a quality delivery: during assessment, proposal, installation and hand-over.
The assessor/designer. She was in her 20s, a graduate with a 1st Class degree in Geophysical Sciences who’d decided not to go into the oil & gas industry and instead become a heat pump consultant. She did the surveying of the property, estimating the heat loss for every room, and preparing a detailed design and costing for the system.
There were two plumbers who did the physical work, both young (also in their 20s). They first removed the existing boiler and hot water tank. They then got the units in place on the day, including a heat pump (actually two in our case) that had to be hauled up a steep path to reach the back of the house (built into the side of a hill); a hot water tank; water pumps etc. They then connected all the pipework to the units. Because they were part of a team, these plumbers didn’t need a deep knowledge of heat pumps, just the principles of low flow temperature systems. They drained and flushed the existing plumbing in the house, and found no issues with it holding the slightly higher water pressure needed for a heat pump. Only a third of the radiators needed up-sizing to triple panel ones – and given all the dire warnings I’d heard (false as it turned out) I was pleasantly surprised by this. Once the system was operating, they ‘balanced’ it, so heat was as evenly distributed as possible.
There was one electrician, who not only did the electrical connections, but also set up the controls for the heat pump(s). The so-called ‘weather compensation’ is a crucial part of the setup, as it optimises the efficiency of the system. The temperature of the water being pumped around the radiators is kept as low as needed to deliver the space heating. We also decided to have the heat pump on continuously but set back a few degrees at night. This meant that the heat pump was not having to work too hard in the morning to reach the target temperature in rooms. With our thick walls, it would not be a good idea for these to cool down too much. Paradoxically, we use less energy this way than if the system went off completely overnight.
[4] Flow temperature is best kept as low as possible, that does the job
The larger the surface area of the radiator, the lower the flow temperature needed to deliver the required amount of heat. This is why underfloor heating is often able to operate at a mere 35°C. But even for a system like ours, the flow temperature can be much lower than expected. For example, in the last 7 days, the external temperature has ranged from 8°C to 11°C, and the flow temperature has never exceeded 36°C.
Which is why the talk of ‘high temperature heat pumps’ to deal with difficult buildings strikes me as a non sequitur.
In any case, building regulations that came into force in June 2022 in the UK mean that all new systems must have a flow temperature no higher than 55°C, whatever the heat source, so those trying to flog high temperature heat pumps are barking up the wrong tree in my view.
Just design the low flow temperature system properly. Job done.
[5] Performance of air-source heat pump-based system
The performance of the system relates to the heat pump, the radiators and the fabric of the building itself. Any one of these can influence how good the measured performance is. The performance over a given period (say a day) is calculated as the ratio of the heat energy that the system delivers (in kilowatt hours, kWh) to the electrical energy used by the system (in kWh). Because a heat pump harvests energy from the ambient environment (air, ground or water) warmed by the sun, the performance will be greater than one (or 100%).
For space heating, the performance gets lower the greater the difference between the outside temperature and the required internal temperature. We follow the common practice of calling the performance measure the ‘coefficient of performance’ (or COP) and, over a whole year, the ‘seasonal coefficient of performance’ (or SCOP).
But what about the daily performance? Well, for our system, even when it was -5°C (colder than the nominal coldest day of -1.6°C) the daily COP was 2.1 (210%), and over the year the daily COP got as high as 5.4 (540%).
The SCOP we achieved of 3.3 (330%) was extremely good, in our view.
Another way to measure performance is the kilowatt hours required to heat one square metre per annum (kWh/m².a). The average British home needs over 130 kWh/m².a, but a new house built to Passivhaus standards would need just 15 kWh/m².a. The Association of Environmentally Conscious Builders aim for 50 kWh/m².a when doing retrofit on existing buildings, with a “certifier approved exemption” 100 kWh/m².a for difficult older buildings. Their standards for doing retrofit can be found here: https://aecb.net/introduction-to-the-aecb-carbonlite-building-and-retrofit-standards/
Our building was using delivered energy of 123 kWh/m².a to heat the house when using the gas boiler. After modest retrofit ‘fabric’ measures and moving to the air-source heat pump, the demand has reduced to about 114 kWh/m².a (not far off the AECB higher “exemption” of 100 kWh/m².a, although I should stress we have not asked an AECB member to review our fabric measures). The reduction in heat loss due to the loft insulation is possibly offset by the 24/7 operation of the heat pump (with overnight setback). See the Appendix for more details.
Nevertheless, I am convinced that the gentler heating with the heat pump (rather than the wild swings we had with the gas boiler) gives rise to improved comfort and energy management overall.
[6] Heat pumps will be primary tool in decarbonising heating
The Committee on Climate Change (CCC) in their 6th Carbon Budget stated (based on very detailed modelling of scenarios, costs and risks):
‘By 2030 37% of public and commercial heat demand is met by low-carbon sources. Of this low-carbon heat demand 65% is met by heat pumps, 32% district heating and 3% biomass. By 2050 all heat demand is met by low-carbon sources of which 52% is heat pumps, 42% is district heat, 5% is hydrogen boilers and around 1% is new direct electric heating.’
And the district heating itself can be provided by commercial scale heat pumps. Some can have a heat power rating of 48,000 kW (compared to a typical 3 bedroom home needing 6kW) – see https://www.bbc.co.uk/news/business-65321487. Since district heating will often be needed in towns which often have a river flowing through them, or are by the sea, water-sourced heat pumps can be used.
So in practice, combining the domestic and commercial/district heat pump provision, heat pumps would eventually provide the great majority of the heat to dwellings of all kinds.
Deputy Director of Nesta, Katy King, provided some summary points from the survey on Twitter:
Satisfaction with heat pumps is high and, overall, satisfaction levels between heat pump and gas boiler users are very similar.
But heat pump users were MORE satisfied with their running costs than boiler users.
People who installed a heat pump into their own home were the most satisfied (81% as or more satisfied than previous heating system)
When you include homeowners who didn’t commission the heat pump themselves in the sample, 73% of heat pump owners are as satisfied or more satisfied with their heat pump compared to their previous heating system.
Satisfaction with heat pumps is just as high in older properties – People living in Victorian houses were just as satisfied with their heat pump as people in mid-century properties or modern homes
As the overwhelming majority of heat pump users are satisfied with space and hot water heating, safety, reliability and noise – it’s time to put to rest outdated concerns about heat pumps.
There are two important findings expressed in two consecutive paragraphs from the Project Summary Report p.33 (my bold added):
“As seen in the above statistics, most of the ASHPs (installed through the project) which were capable of operating at high temperatures (>65°C flow) used the R290 refrigerant. These were observed to operate at a similar annual SPF to the low temperature ASHPs. This is likely due to a combination of the higher performing refrigerants and the weather compensation controls meaning that they operated at lower temperatures most of the time.”
“Despite the high level of variation overall, house type and age did not have a statistically significant impact upon the heat pump performance results. This indicates that where trained designers and installers determine a heat pump as suitable for a given home, the house type (as categorised in the EoH project) should not impact upon performance.”
[and in conclusion, p.37]
“The project outcomes suggest that it is possible to install heat pumps across the majority of UK housing archetypes and that these heat pumps can operate with good overall efficiencies.”
ACKNOWLEDGEMENTS
I’d like to thank my old friend Chris Wilde, who is Managing Director of Yorkshire Energy Systems (YES), for educating me on heat pumps over the last few years, and showing me the art of the possible. Always witty and wise, I have learned so much from Chris.
Also, thanks to the team from Cotswold Energy Group (CEG), who were very professional at all stages in the project. I’d like in particular to thank Zoe Phillips for her advice and support.
Thanks to Marilyn, my wife, for reviewing the essay and providing improvements, but most of all, for being my companion on this journey.
A number of people have asked me to provide additional data on the performance of the system. I was keen to keep the essay accessible and non-technical but I appreciate that some readers may want more. So here it is – for those that want more!
Data prior to heat pump when house was heated by a gas boiler
Prior to the Air Source Heat Pump (ASHP), the house was heated with a gas boiler dating from 1998 (both space and water heating).
The house is a large semi-detached dwelling with a floor area of 251 m² over three floors.
Prior to 2020 we had done what we could to the house to help reduce heat loss, including:
fixing guttering
fixing lime mortar on external walls
adding brushes to sash windows to reduce draughts
in a back area, fixing insulation to the pitched roof of a small extension
But the main building is Grade 2 Listed, and so certain measures (such as wall insulation) were not open to us. As part of the move to a heat pump (and as a condition for getting the RHI grant), we did add beefed-up insulation to the main building’s loft in 2021.
In 2020 the gas usage was 45,567 kWh and cost £1,409 (with an average unit cost of 3p per kWh; including standing charges).
I estimated that 94% of this was for space heating, which gives 42,833 kWh/yr for space heating. But allowing for the old gas boiler being only 72% efficient (see Appendix Note [ii]), the actual house heat demand for 2020 was less, at 30,840 kWh.
This yields a unit heating demand of (30,840 kWh/a)/ (251 m²) = 123 kWh/m².a – when comparing homes this is a very useful measure to use because one can compare two houses of different sizes if one uses this measure of energy per unit area.
Now, given one estimate for the average heat demand for UK is 133 kWh/ m².a, it seems that my old house was already doing quite well – being slightly better than the UK national average per unit area.
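For anyone wanting to reproduce this chain of arithmetic, here it is as a short Python sketch (all figures as quoted above):

```python
# Gas-era heat demand, working back from metered gas usage.
gas_used_kwh = 45_567          # metered gas in 2020
space_heating_fraction = 0.94  # estimated share used for space heating
boiler_efficiency = 0.72       # old gas boiler
floor_area_m2 = 251

gas_for_space_heating = gas_used_kwh * space_heating_fraction  # ~42,833 kWh
heat_delivered = gas_for_space_heating * boiler_efficiency     # ~30,840 kWh
unit_demand = heat_delivered / floor_area_m2                   # ~123 kWh/m².a
```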
Data collection
Most heat pumps come with consoles and / or Apps to enable a householder to monitor energy usage and performance. The Mitsubishi Ecodan does have this capability.
So normally, finding out the Coefficient Of Performance for a heat pump over a day, week, month, or any other period should be easy, straight out of the box. Our situation was unusual and more complex than most people will experience, for reasons I will explain.
In our setup we had two 11.2 kW Ecodans that work in parallel (what Mitsubishi confusingly terms a ‘cascade’ system), so that the peak heat output is 22.4 kW, although the estimated peak heat loss was a bit less than this (18.6 kW when the external temperature was at the nominal coldest day of the year, -1.6°C).
To cut a long story short, it turns out that Mitsubishi’s marketing blurb was wrong and the standard metering features didn’t work with a ‘cascade’ setup as they do for a single unit (which is the most common situation). However, Cotswold Energy Group were brilliant and remedied the situation by installing 3rd party meters to enable performance to be measured and recorded. It did mean we ended up with a slightly more complex and bespoke setup. The good news is that it all works fine and now I don’t have to worry about it – I just read the numbers off a table or graph.
The heat meters get rate-of-flow data from the pumps and temperature data from gauges that were fitted to the ‘flow’ and ‘return’ pipes. The temperature drop between the warm flow and cooler return, multiplied by the flow rate, gives an instantaneous measurement of the heat being delivered to the house (e.g. to the radiators, but when in hot water heating mode, to the hot water tank).
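As a sketch of the calculation the heat meters perform (the flow rate and temperatures below are illustrative values, not our actual readings):

```python
# Instantaneous heat power from flow rate and flow/return temperature drop.
# Water's volumetric heat capacity is about 4.18 kJ per litre per °C.

def heat_power_kw(flow_litres_per_sec: float, flow_temp_c: float,
                  return_temp_c: float) -> float:
    delta_t = flow_temp_c - return_temp_c
    return flow_litres_per_sec * 4.18 * delta_t  # kJ/s, i.e. kW

# Illustrative: 0.3 L/s with a 5 °C drop delivers about 6.3 kW.
power = heat_power_kw(0.3, 40.0, 35.0)
```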
These are the two heat monitoring meters used:
In addition, there was a meter for each heat pump measuring how much electricity they were using:
These 4 meters were then wired up to a data aggregation unit (Teracomsystems TCW260) which does the mathematics to work out the COP for each heat pump (and for the system overall). This local wifi-connected unit stores the data over time, and can be accessed via a browser to produce reports, which can be configured in different ways.
I am able to download the data into a spreadsheet to then do further analysis if the raw reports are not quite enough to answer all my questions.
I also manually read the data for a while and did the calculations manually, just to satisfy myself that the system was setup correctly to give me the answers I needed.
In addition to the heat pump data, I also bought some digital thermometers to place around the house to assure myself that each room was reaching its design target temperature.
When hosting visits from people thinking of getting a heat pump, I will get them to place their hand on a radiator and ask them, “is that on?”. “Oh, it doesn’t feel very hot?”, they often reply. I then get a thermal gun (cheap to buy – it’s not a camera – and a useful tool) and check the temperature of the radiator. Say it is a typical 40°C. I then check the temperature of the palm of their hand – typically 34°C (a bit less than a human’s core temperature). “You see, not much different, so it’s not surprising that the radiator doesn’t feel ‘hot’ to the touch!”. I then direct their gaze to the digital thermometer on the wall – “what does that say?”. “Oh, it’s 21°C!” “Indeed, just as the system was designed to achieve: 40°C in this radiator is enough to heat this room to 21°C – so don’t be misled by touching the radiator and assuming that tells you anything useful. It doesn’t.”
By the way, there is just one thermostat in the house (in the living room). The design of the system ensures that when the sitting room is meeting its target temperature (21°C), then the other rooms will meet their temperature. So no need for thermostats or fancy controls on each radiator or in each room. I don’t even touch the TRVs. The system was ‘balanced’ by the installers, and we then just leave it alone.
The heat pump will adjust the flow temperature to deliver the right amount of heat to this and other rooms. The colder it is outside, the higher the flow temperature required and delivered. This is called ‘weather compensation’. In our system, the flow temperature never needs to get higher than 50°C. Most of the time it is a lot less than this. This ensures that the efficiency of the system is maximised (which also means the running costs are correspondingly reduced).
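Conceptually, weather compensation is just a curve mapping outside temperature to flow temperature. Here is a minimal sketch; the endpoint temperatures are illustrative assumptions, not our installer’s actual settings:

```python
def flow_temperature(outside_c: float,
                     cold_point=(-5.0, 50.0),
                     mild_point=(15.0, 30.0)) -> float:
    """Linear weather-compensation curve, clamped at both ends.

    cold_point: (outside temp, flow temp) at the cold end of the curve
    mild_point: (outside temp, flow temp) at the mild end
    """
    t_cold, f_cold = cold_point
    t_mild, f_mild = mild_point
    if outside_c <= t_cold:
        return f_cold  # never exceed the cold-end cap
    if outside_c >= t_mild:
        return f_mild
    fraction = (outside_c - t_cold) / (t_mild - t_cold)
    return f_cold + fraction * (f_mild - f_cold)

# Colder outside -> higher flow temperature, capped at the cold end.
midway = flow_temperature(5.0)  # halfway along the curve: 40.0 °C
```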
Needless to say, the system has performed brilliantly, thanks to the quality work done by Cotswold Energy Group in the design, installation and commissioning of the system.
We basically now do very little. No twiddling of dials. We just let the system ‘do its thing’.
Performance data for heat pump
In the year from when data was available for the new Air Source Heat Pump – from 1st April 2022 to 31st March 2023 – the total heat demand in the house was 29,689 kWh/a (‘a’ here standing for ‘annum’).
This is only slightly (about 4%) less than the inferred heat demand for the pre-heat pump period (despite the additional loft insulation).
Due to reduced occupancy, we estimated a slightly lower level of hot water usage, and so space heating was estimated at 96.6% of the total demand, that is, 28,681 kWh/a (see Appendix Note [i]).
In terms of unit area heat demand, that now comes to 114 kWh/ m².a
The Association of Environmentally Conscious Builders (AECB) aim during retrofit to reduce heat demand to 50 kWh/ m².a, but will allow exceptions for difficult to treat older dwellings like mine, with relaxed target of 100 kWh/ m².a. So, our figure of 114 kWh/ m².a is not too far off that relaxed target.
SCOP (Seasonal Coefficient Of Performance)
The total electricity used by the ASHP during the year in question was 8,843 kWh.
The SCOP can be calculated as ‘total heat demand’ divided by the ‘total electricity used’, which in our case was
= (29,689 kWh / 8,843 kWh) = 3.36
This is an astonishingly good result for our old house. I had been told by some people to expect much worse. But Zoe had estimated accurately what to expect, and so it was gratifying to see her estimates confirmed in practice.
There is a slight error in this calculation as it does not separate out the direct hot water components (for taps/ showers). But it is only a small error as the great majority of the energy used was for space heating.
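For completeness, the division itself:

```python
# SCOP for the year to March 2023: heat delivered over electricity used.
total_heat_kwh = 29_689        # heat delivered by the system
total_electricity_kwh = 8_843  # electricity used by the heat pump

scop = total_heat_kwh / total_electricity_kwh  # ~3.36
```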
Winter COP
Ah, yes, but was it OK in winter? People will be wondering.
I have daily data on heat demand and electricity usage. Obviously this was greatest during the winter months. But interestingly the average COP (Coefficient Of Performance) held up remarkably well; remember it is not an absolute measure of heat demand but the ratio of heat delivered to electricity used.
This is what my analysis found (and is illustrated in the scatter diagram):
Average COP over winter months 22/23 (Dec’22, Jan’23, Feb’23), was 3.4 [note that the average external temperature was 6.2°C]
Design MCS coldest day for location was -1.6°C and on actual data fit line this gives a day-COP of 2.3
Actual coldest day was -5°C and data point gives COP of 2.
It is interesting that the COP can actually be worse in summer than in winter: while much less energy is being used, the proportion of that energy going to hot water rather than space heating is higher, and water heating tends to be less efficient, so the ratio can be worse.
Overall, the winter COP was on average close to the overall annual COP (or SCOP).
Running costs comparison
My current (5th Nov 2023) flexible Octopus tariff charges 26.85p/kWh for electricity with a 52.32p/day standing charge. The gas pricing is 6.82p/kWh with a 27.47p/day standing charge.
If this was the charge for the full 12 months, the cost would be:
(8,843 kWh x 26.85 p/kWh) + (365 day x 52.32 p/day)
= £2,565
Whereas for the same period, if we’d stayed with the gas boiler, we’d need to deliver 29,689 kWh of heat; doing this with a 72% efficient boiler would require burning gas with a calorific value of 29,689/0.72 = 41,234 kWh, and the per annum cost of this would have been:
(41,234 kWh x 6.82 p/kWh) + (365 day x 27.47 p/day)
= £2,912
So, despite the fact that the unit price of electricity is almost 4 times that of gas (26.85/6.82), the efficiency of the heat pump combined with the inefficiency of the retired boiler more than compensates for the difference.
Of course the unit prices of both electricity and gas have risen since 2020, and the differential in price has not improved, but the heat pump ensures we are better off than we would have been had we stuck with the old gas boiler.
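The whole comparison can be reproduced in a few lines (tariff figures as quoted above for 5th Nov 2023):

```python
# Annual running cost: heat pump on electricity vs the old 72% gas boiler.
heat_demand_kwh = 29_689      # heat the house needed over the year
hp_electricity_kwh = 8_843    # electricity the heat pump used

elec_unit_p, elec_standing_p = 26.85, 52.32  # pence/kWh, pence/day
gas_unit_p, gas_standing_p = 6.82, 27.47

hp_cost = (hp_electricity_kwh * elec_unit_p + 365 * elec_standing_p) / 100

gas_kwh_needed = heat_demand_kwh / 0.72      # boiler only 72% efficient
gas_cost = (gas_kwh_needed * gas_unit_p + 365 * gas_standing_p) / 100

# hp_cost comes to ~£2,565 vs ~£2,912 for gas.
```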
Appendix Notes
[i] How much energy does it take to heat the 300 litre hot water tank?
The temperature in a modern unvented hot water cylinder is stratified with cold water entering from the base of the tank under mains pressure and hot water being delivered also under mains pressure from the top of the tank. The thermostat is in the upper part of the tank.
However, for simplicity, let’s work out how much energy would be needed to heat 300 litres uniformly from 40°C up to the target temperature of 50°C.
Being careful to keep the dimensions (units) consistent:
It takes 4180 joule to heat 1 litre of water by 1°C
So to raise 300 litres by 10°C (from 40°C to 50°C)
requires 4180 (joule/litre.°C) x 300 (litre) x 10 (°C) = 1.254 x 10⁷ joule
1 joule = 2.778 x 10⁻⁷ kWh
So the energy required can be converted to kWh units as follows:
(1.254 x 10⁷ joule) x (2.778 x 10⁻⁷ kWh/joule)
= 1.254 x 2.778 kWh
= 3.48 kWh
or 3.5 kWh approx.
I used this figure as an estimate of hot water daily energy usage during the period post installation of the heat pump.
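The same calculation can be written compactly by converting via 3.6 million joules per kWh:

```python
# Energy to heat a volume of water through a given temperature rise.
SPECIFIC_HEAT_J_PER_LITRE_C = 4180  # joules to heat 1 litre by 1 °C
J_PER_KWH = 3.6e6                   # 1 kWh = 3.6 million joules

def tank_heat_kwh(litres: float, temp_rise_c: float) -> float:
    joules = SPECIFIC_HEAT_J_PER_LITRE_C * litres * temp_rise_c
    return joules / J_PER_KWH

# 300 litres raised by 10 °C (40 °C -> 50 °C): ~3.48 kWh.
energy = tank_heat_kwh(300, 10)
```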
[ii] Retired gas boiler efficiency
The retired gas boiler was a Glowworm Hideaway 120B (a Balanced Flue Boiler).
I used the BRE/SAP products database https://www.ncm-pcdb.org.uk/sap/ to get the efficiency figure for this boiler, which stated a ‘SAP seasonal winter efficiency’ of 72.9% and a seasonal summer efficiency of 62.8%. Given that winter was when the boiler worked hardest, I used the former figure, rounded down to 72%. Given the boiler was 25 years old, this may have been slightly optimistic.
I was asked this question by a householder who is in the process of considering making the switch from a gas boiler to a heat pump, in part due to a desire to reduce their carbon footprint. After an exchange where I learned their current situation and thoughts, they asked:
“One thing that keeps going through my mind are the electricity costs for the heat pump. We are billed for 40,000 kWh of gas, which is a lot. How much would it cost us for the electricity to run a heat pump?
Also, can we install a solar PV system that would be able to generate at least some of the electricity we need?”
I replied as follows:
“It depends in part on the extent of the fabric measures you implement, although I understand that you have decided not to execute the ‘deep retrofit’ that an architect recommended for your 17th Century home, due to the huge cost. Can I just make it clear that your architect is ill-informed in saying that deep retrofit is essential before you consider a heat pump.”
No change in heat demand
“Let’s assume your current gas boiler has been operating at 80% efficiency.
That means the actual current delivered heat energy is 0.8 x 40,000 kWh = 32,000 kWh, which is then the actual heat demand! (You say that the bulk of this is on space heating, so I am ignoring the complication of the split in energy use between water and space heating, for simplicity.)
Let’s assume in first instance that you don’t reduce this amount in the short term (through insulation etc.), in order to make a like for like comparison.
Let’s also assume that you achieve a SCOP (Seasonal Coefficient of Performance) of 3. (By the way, my listed house has a predicted SCOP of 3.6, so better than 3; for the calculation below, 3 can be regarded as a conservative estimate, as long as your system is professionally designed and installed, and remembering that the system as a whole may require some radiators to be upgraded.)
That would imply the amount of electrical energy required would be
= 32,000/ 3 = 10,700 kWh (rounded up)
I am going to use capped prices (as at Autumn 2022) to get a ‘worst case’ for you at least this winter.
At the current capped rate of 34p/kWh for electricity this would mean an annual cost of
10,700 kWh x 34 p/kWh = 363,800p = £3,638 using the heat pump system
The cost of using the current boiler, with gas at 10.3p/kWh, would be:
= 40,000kWh x 10.3p/kWh = 412,000p = £4,120 using the gas boiler.
This calculation (with its assumptions) implies that the running costs would be less for the heat pump than with the gas boiler.
This might at first surprise you given the higher unit cost of electricity, but it rather demonstrates the impact that much higher efficiency has on running costs.
Obviously, this will change if/when the unit prices change, but not necessarily in a bad way. If, as has been mooted, the price of electricity from renewables is decoupled from the price of gas-fired generation (which depends on world market prices, and currently tends to drive up the price of all domestically generated electricity irrespective of source), then in future we could see electricity prices fall, and this would be a progressive reduction as the grid gets greener and greener over time.“
After fabric measures
“It would also be different if – as would be prudent – any measures are undertaken, like loft insulation, to reduce heat demand. You said you planned some measures. As my essay explained, there is a trade-off between insulation (and other fabric measures) and a heat pump, which depends in part on your overall retrofit budget. All I suggest is that you leave some money in the pot to get a heat pump; that’s not to say that fabric measures are not important, far from it.
Suppose that following loft insulation and other fabric measures you decide to implement, the actual heat demand of 32,000kWh was reduced by 20%, to 25,600kWh.
With the same SCOP, that would imply the amount of electrical energy required would be
= 25,600/ 3 = 8,500 kWh (rounded down)
At the current capped rate of 34p/kWh for electricity this would mean an annual cost of
8,500 kWh x 34 p/kWh = 289,000p = £2,890 using the heat pump.”
With domestic solar PV
“Solar energy peaks in summer whereas heating requirements peak in winter (but both are middling during Spring and Autumn, the ‘shoulder’ months). Nevertheless, one could reasonably expect – thanks to the ‘shoulder’ months – that the home grown electricity would reduce the heat pump running costs by roughly 25% (only a professional house survey, taking into account the orientation of panels, tree shading, etc., would answer this question precisely).”
Summary
“With your current gas boiler your annual heating costs are: £4,120
With a professionally designed and installed heat pump system and no insulation measures your annual running costs should be no more than: £3,638
With a 20% reduction in heat demand following cost effective insulation/ draught proofing, the heat pump annual system running costs would be: £2,890
With solar PV, let’s assume a further reduction in costs of 25% giving the heat pump system annual running cost of: £2,168
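The whole comparison above boils down to a few lines of arithmetic. Here is a minimal sketch, using the same assumptions stated in the text (80% boiler efficiency, SCOP of 3, Autumn 2022 capped unit prices); the totals differ slightly from the quoted figures only because the text rounds the kWh numbers first:

```python
# Running-cost comparison: retained gas boiler vs. heat pump.
GAS_KWH = 40_000         # annual metered gas (kWh)
BOILER_EFF = 0.8         # assumed efficiency of the existing gas boiler
SCOP = 3.0               # assumed seasonal COP of the heat pump
ELEC_PRICE = 0.34        # £/kWh (capped rate, Autumn 2022)
GAS_PRICE = 0.103        # £/kWh

heat_demand = GAS_KWH * BOILER_EFF            # 32,000 kWh of delivered heat
gas_cost = GAS_KWH * GAS_PRICE                # £4,120
hp_cost = heat_demand / SCOP * ELEC_PRICE     # ~£3,627 (unrounded kWh)

# With a 20% reduction in heat demand from fabric measures:
hp_cost_insulated = heat_demand * 0.8 / SCOP * ELEC_PRICE

# With solar PV offsetting roughly 25% of the heat pump's electricity:
hp_cost_with_pv = hp_cost_insulated * 0.75

print(f"Gas boiler:          £{gas_cost:,.0f}")
print(f"Heat pump:           £{hp_cost:,.0f}")
print(f" + fabric measures:  £{hp_cost_insulated:,.0f}")
print(f" + solar PV:         £{hp_cost_with_pv:,.0f}")
```

Changing any one assumption (SCOP, unit prices, the fabric-measure saving) simply scales the corresponding line, which makes it easy to test your own scenarios.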
“Flying is only 2% of global emissions, so it’s ok to fly.” That’s what I heard from a neighbouring table in a restaurant. I didn’t have the heart to lob in a comment: “Yeh, but I bet it’s not 2% of your emissions!”
The Oxfam Extreme Carbon Inequality report [1] showed that the top 10% by income were responsible for 50% of emissions and the bottom 50% were responsible for just 10%, so averages such as that 2% figure can conceal some important truths, and not a little moral hazard.
The significant warming that the planet is experiencing [2] is therefore currently much more an issue of high consumption in the West than of population growth in the global south.
We can quite easily get a feel for the numbers.
Let’s start with averages
The world emits about 40 billion (giga) tonnes of carbon dioxide a year (or 40 GtCO₂/yr) from burning fossil fuels [3].
We have a world population of about 8 billion, so the average CO₂ emissions per person is 5 tonnes of CO₂ a year (5 tCO₂/yr).
2% of that figure gives 0.1 tCO₂/yr.
Time to relax?
So what about the average flyer?
2% of 40 GtCO₂/yr is 0.8 GtCO₂/yr, or 800 MtCO₂/yr.
A Smithsonian Mag article [4] estimated that only 6% of the world’s population flew in any one year; 6% of 8 billion is 480 million people.
If we share out the 800 MtCO₂/yr of flying emissions amongst those 480 million in any year, we get 1.7 tCO₂/yr per person. Given that a UK to Madrid flight is estimated as 265 kgCO₂ (0.265 tCO₂) [5], it shows the impact that longer journeys and frequent flyers are having in pushing the average up to over 6 times this number.
Needless to say, 1.7 tCO₂/yr is about a third of the world’s average per-person total footprint, not a comforting 2%.
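The per-flyer arithmetic above is easy to check. A minimal sketch using the round numbers in the text (40 GtCO₂/yr, 8 billion people, a 2% flying share, and 6% of people flying in a given year):

```python
# World totals, flying's 2% share, and that share spread over actual flyers.
WORLD_CO2 = 40e9        # tCO2/yr from burning fossil fuels
POPULATION = 8e9        # world population
FLYING_SHARE = 0.02     # flying's share of global emissions
FLYER_FRACTION = 0.06   # fraction of people who fly in a given year

avg_per_person = WORLD_CO2 / POPULATION        # 5 tCO2/yr
flying_total = WORLD_CO2 * FLYING_SHARE        # 0.8 GtCO2/yr
flyers = POPULATION * FLYER_FRACTION           # 480 million people
per_flyer = flying_total / flyers              # ~1.67 tCO2/yr

print(f"Average footprint:  {avg_per_person:.1f} tCO2/yr per person")
print(f"Per actual flyer:   {per_flyer:.2f} tCO2/yr")
print(f"Share of average:   {per_flyer / avg_per_person:.0%}")
```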
What about the UK?
Pre-COVID figures suggest that nearly 50% of UK citizens fly at least once per year, and flying accounts for 7% of the UK’s emissions. However, 1% of UK residents were found to be responsible for 20% of overseas flights [6].
It gets worse
The emissions from flying become stacked higher and higher with increasing income. The top 1% globally emit a staggering 7,500 tCO₂/yr, and are responsible for half of the world’s flying emissions [7].
The takeaway message
Let’s not kid ourselves that our flying emissions are ‘small’. In the UK they are on average 7% of our CO₂ emissions, but actual emissions increase in line with consumption, which tends to correlate with income.
The case for a fair system that does not penalise the least well off, with an escalating frequent flyer levy, is now undeniable. It needs to be sufficient to disincentivise frequent flying, whereas the incentives today are completely the opposite: airlines reward frequent flyers with gold membership cards, priority boarding, deluxe lounges and streams of offers.
As more people in the world gain access to flying, and as the relatively easy-to-decarbonise sectors (like cars and heating) are dealt with, the percentage of emissions from flying – however you wish to measure it – will only grow.
I’m not going to tell anyone “don’t fly!”, how could I? When I was working as a consultant until my retirement in 2016 I was making 10 to 15 flights a year. I’m in no position to preach to anyone. But we have all been in denial about flying, myself included, for too long.
If we can’t stop flying, can we at least stop lying … to ourselves!
IPCC, 2021: Summary for Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 3−32, doi:10.1017/9781009157896.001.
See Figure SPM.4(a) in Reference [2]. This does not include the contribution from other greenhouse gases that currently make a lower but still very significant contribution as shown in Figure SPM.2 in Reference [2].
One would like to imagine that Middle England might have woken up to the reality of climate change with ever more frequent heatwaves (not to mention flooding), but judging from the latest screams of derision from the usual suspects at the warnings of imminent heat stress, it’s hard to tell.
So, how do we navigate the conversation on climate change during a heatwave?
How do we make the link between the latest extreme heatwave and climate change, when we have been telling people for years that weather extremes are not to be confused with the long term trends associated with climate change?
For example, in situations such as when a US Senator held up an unseasonal snowball to ‘demonstrate’ there is no global warming, he was rightly reminded of this distinction.
I’ll get back to these questions. I wanted firstly to illustrate the challenge we face in trying to have a conversation with the doggedly unconcerned.
Stuff and Nonsense
I overheard someone in a delicatessen yesterday joking about “hilarious” letters in The Times about how we didn’t need extreme weather warnings back in 1976.
Can’t we just enjoy it?
Chuckle, chuckle.
Bloody nanny state.
Helloooo … It’s called summer!
What’s the world coming to?
I wanted to ask if she knew that there were 70,000 excess deaths across Europe during the 2003 heatwave, and that just this week fires have been raging across Europe, from Portugal to Croatia, devastating many communities.
I resisted the temptation. No, I chickened out.
It reminded me of the ‘stuff and nonsense’ sketch French & Saunders did some years back satirising Middle England’s perpetual angst over our alleged nanny state (you know, the one that gave us food banks, Grenfell, and a host of nannyish things).
The sketch – which I cannot find on YouTube – had two portly conservative stalwarts trying to outdo each other with stories of how much pain they have endured without needing to call a doctor. Shotgun accidentally blew my foot off … ha, ha, ha, no problem!
bloody bed-wetters these days ….
… stuff and nonsense.
It’s really no different to the ‘Elf ’n’ safety’ campaign Richard Littlejohn, Boris Johnson and others have pursued over many years in their toxic opinion pieces in the Daily Mail, Daily Telegraph, and elsewhere. This is now firmly embedded in the psyche of Middle England and a favourite source of jokes at Conservative Party conferences.
Extreme weather and climate change
Yes, we did have a very hot summer in 1976, but what does that prove?
Whataboutery only proves that the speaker has no ideas and no grasp of the evidence.
The truth is that, as with a progressively loaded dice, the odds keep changing. This is the latest from the Met Office [1]:
“We found that in just two decades, the probability of seeing those record breaking 2003 temperatures again have become more than 10 times more likely.”
And the chances will keep increasing. Warnings like this are not new. Dr Peter Stott from the Met Office wrote in 2014:
“Updated model projections of future changes suggest that by the end of the century summers as hot as 2003 will be considered unusually cool.”
That is, 2003 would no longer be exceptionally hot, but exceptionally cool compared with the new normal.
Think about it.
The odds have increased because of our human emissions of greenhouse gases, principally carbon dioxide from fossil fuels. The odds get worse with every year we continue to emit the stuff.
I don’t think our progeny will be chuckling away in 2100 at anti-woke opinion, just despairing at the obdurate ignorance of those who led us to this place.
The language of weather and climate
The British have a very well developed language of weather, which suffuses our every day encounters, our poetry, our paintings and our culture generally.
Surrounded as we are by a warm ocean, a cold pole, a European continent and, from below us, the Mediterranean and Saharan land mass, our weather can seem unpredictable.
We are less articulate when it comes to climate; barely literate.
But we have been told not to confuse weather with climate. Climatologists customarily defined climate change as a trend that could be discerned over a few decades, not a few days.
This makes it hard to talk about any one particular event – such as the 2003 heatwave – and put it down to climate. This was a godsend to climate change deniers, who like tobacco companies before them would make the defence that this person could have got lung cancer anyway (the increased odds don’t prove that THIS person would not have got it anyway).
Of course the counter reflex of claiming that every extreme event is the result of our human emissions doesn’t convince either; our weather variability doesn’t go away in a warming world, it just gets superimposed on a rising trend.
So, just as a pinball machine on a tilt will still produce apparently random outcomes, the biases formed by the tilt will increase the odds of some outcomes versus others. The UK is getting warmer, and that has consequences at both ends of the hydrological cycle: be it extreme heatwaves or extreme flooding.
A new science has come to the rescue in our attempts to unpick the apparent contradictions in talking about short term weather extremes in the context of longer term climate change: extreme weather attribution.
Extreme weather attribution
It is now possible for climate scientists to put a number on a particular event and say how much more likely it was as a result of man-made global heating; 20%, 50%, 3000%, or whatever the physics and historical records together show.
This is actually not so new in its general application, as the quotes from the Met Office above attest. General retrospective studies on the raised chance of, say, a hot summer across the UK or Europe, have been published before.
What is relatively new is taking a specific event that may be relatively localised and ascribing odds to it, and doing this within a few days of the event occurring: extreme weather attribution as a service.
Dr Friederike Otto is one of the pioneers of this science and approach. Her book is an unputdownable account of her journey and the implications of this work: Angry Weather: Heat Waves, Floods, Storms, and the New Science of Climate Change, 2020
Speaking of the floods in Germany in 2021 she said [4]:
“These floods have shown us that even developed countries are not safe from severe impacts of extreme weather that we have seen and known to get worse with climate change,”
In May this year, the World Weather Attribution (WWA) organisation issued [5] its analysis of the extreme / early heat wave in Pakistan/ India in early Spring, which they concluded was 30 times more likely (i.e. 3000% more likely) than it would have been without human caused global heating.
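A ‘30 times more likely’ result is a risk ratio. A standard quantity used in attribution science to express the same finding is the fraction of attributable risk, FAR = 1 − 1/RR: the share of the event’s current likelihood attributable to human-caused warming. A minimal sketch:

```python
# Convert a risk ratio RR ("30 times more likely") into the
# fraction of attributable risk, FAR = 1 - 1/RR.
def fraction_attributable_risk(risk_ratio: float) -> float:
    """Return FAR for a given risk ratio (RR >= 1)."""
    return 1.0 - 1.0 / risk_ratio

# e.g. the '30 times more likely' result quoted above gives FAR ~ 97%
for rr in (1.5, 10, 30):
    print(f"RR = {rr:>4}:  FAR = {fraction_attributable_risk(rr):.1%}")
```

In other words, for the heatwave in question, roughly 97% of its current likelihood is attributable to human-caused warming, which is a rather more arresting way of stating the same result.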
A different conversation
So what do I do next time I’m in a queue and I hear someone chuckling at the latest opinion piece in the papers mocking those concerned at climate change and the latest extreme weather event? I might try a gentle question:
“Can I ask why you think there is nothing to worry about?”
This should flush out enough to respond to with the material I covered earlier.
It’s real, the impacts can be life threatening, and the trends mean it’s going to get more frequent and more intense. One could continue:
“Why does this have to be part of an on-going culture war?
Why isn’t this something that should unite us, if not for our own sake, for the sake of our grandchildren?”
How does one speak about climate change to a friend, colleague or neighbour who is not engaged, and sceptical about the need for urgent action?
I was prompted to write this essay because I have been asked this question three times in the last two weeks, and it got me thinking.
In the local climate group I help run, we focus on positive local action. Unlike many climate groups we do not post dystopian images of the latest horror from the front line of climate impacts. This is not because we deny them but because we have found it is not the best way to engage people who are thinking about getting involved, or are avoiding getting engaged!
They can find the scary material elsewhere, and our job, as a climate group, is to facilitate and catalyse change, through networks, conversations and projects.
But then there are some people – in every community – who clearly do not feel that urgent action is needed. So can we really avoid dealing with those sceptics?
The sceptics we are talking about here do not fit the stereotype of an ideological ‘denier’ – such as Lord Lawson – but they have often heard or read things that reassure them that action is not urgent (‘those alarmists have gone too far!’, they hear, a reassuring salve). Conservative newspapers actively dismiss the need for urgent action.
So will facts change a sceptic’s mind?
It is well established that while facts are important, a key reason why people believe in certain things is their culture and values (I recommend reading Katharine Hayhoe on this [1]).
If one group believes in the freedom of communities to do their own thing, free of central Government ‘interference’, there is then a perceived conflict of values with others who favour the need for regulations to promote change. The challenge is to find the common ground, the shared values.
If someone believes that the planet will work things out, with or without our help, they may be quite fatalistic about society’s ability to change: ‘there is nothing we can do’. The challenge then is to show how – assuming we had sufficient agency to cause the problem – we also have the ability to prevent the worst happening.
Most people are neither deniers nor fatalists. They want a positive future for their children and grandchildren. If they can see the need for change, they can become champions for change.
Who are we talking to?
It is very easy, especially for those like me who spend way too much time on Twitter, to frame the engagement challenge in terms of those on the ideological right who have made a career out of climate science denial. That is a mistake in my view.
Various surveys in the UK, USA and elsewhere indicate a growing number who see the need for change. Australians just voted to end the reign of a right wing, climate change denying party.
The UK Government’s Winter 2021 Attitudes Survey showed that 85% of respondents were either ‘very concerned’ or ‘fairly concerned’ about climate change [2].
(Credit: BEIS Public Attitudes Tracker: Net Zero and Climate Change Winter 2021)
Even in the USA, where we are constantly reminded of the polarised nature of political debates, we find that on climate change, there is a majority of people who are either ‘Alarmed’ or ‘Concerned’ (in the nomenclature of the Yale Climate Change Communications ‘6 Americas’ [3]).
As the authors of this report write:
There has been substantial change in the distribution of the Six Americas over the past five years. The Alarmed segment has nearly doubled in size, increasing 15 percentage points (from 18% to 33% of the U.S. adult population), including an increase of 9 percentage points from March 2021 to September 2021. In contrast, over the past 5 years only about 1 in 10 Americans have been Dismissive (decreasing from 11% to 9%). Overall, Americans are becoming more worried about global warming, more engaged with the issue, and more supportive of climate solutions.
The ‘Dismissives’ are only 9% of the US population, but often appear to be 90% of the commenters on Twitter, Facebook, and elsewhere. That is not a reason for allowing them to frame the conversation in their terms.
Instead, we increasingly need to get off our computers and have 1:1 convivial conversations in person, over a cup of coffee, at a market stall or over the garden fence, with the majority who are genuinely curious at exploring the issues.
The ‘Why?’ question
Exploring values as opposed to just facts is a crucial part of the conversation. When someone makes a strong, provocative statement, the response should initially aim to explore the ‘why’ rather than the ‘what’:
Why do you feel that?
This might well reveal those values or assumptions that are really at the heart of someone’s feelings, and explain the anger or frustration they express. This is almost impossible to do online.
Those sceptical of the need for change are not solely on the right. There are some environmentalists who have such a strong preference for nature-based solutions that they will find all the downsides of technological solutions, while being blind to any shortcomings of their preferred solutions.
In fact, we all need to ask ourselves the ‘Why?’ question from time to time, to question our beliefs, biases and assumptions.
A little bit of knowledge can be a useful thing
People new to climate change can be overwhelmed by its sheer complexity, and think they must have encyclopaedic knowledge to engage with people, especially sceptics; they don’t!
It does help to know some key concepts, which can be used to help guide responses to questions. A few are summarised here:
Civilisation and agriculture have blossomed since the end of the last ice age with a stable atmospheric concentration of carbon dioxide at 300 parts per million (ppm). In just a short period since the start of the industrial revolution, human emissions have pushed it to over 400 ppm [4]
There are many carbon cycles that cover vastly different timescales. Despite large flows of carbon into and out of the oceans, the flows balance each other; maintaining a stable concentration of carbon dioxide in the atmosphere. Humans are now upsetting that balance at alarming speed [5]
Carbon dioxide is called a ‘long-lived greenhouse gas’. The raised concentration in the atmosphere (caused by burning fossil fuels) remains raised for a very long time [6]
The rise in global mean surface temperatures of about 1.2°C since the start of the industrial revolution is already having impacts, and every 0.1°C of rise on top of that will increase the impacts [7].
All societal and personal choices have a carbon impact of some sort, but it is important to understand the full impact of any choice, over the full life-cycle of a thing or activity. We should not let a lack of perfect solutions stop us taking action [8].
How to engage with the Concerned or Cautious?
There are many different styles of engagement. This is my personal perspective, but everyone can develop their own style.
There can be a tendency to try to argue facts with people, but this can be difficult. If the challenge is based on some bad reading of a topic, and is not something you feel qualified to respond to, is that the end of the conversation? I would argue that with a questioning approach, a fruitful conversation is still possible.
Questionable challenges come in a variety of categories. Here are a few key ones:
Simple fallacies of argument that require no knowledge of the facts per se.
‘What about?’ type challenges that are aimed at deflecting from a core issue.
Misunderstandings of the nature of a system, which often ignore important aspects of that system.
These can cover quite a wide range of what one might hear at a climate stall or over a coffee with friends. Often they are combined in different ways, but usually one of these plays a central role.
Simple fallacies of argument
There are many resources that deal with critical thinking and fallacies of argument. The Greeks were familiar with many of them, and they are still in use today; debates and conversations on climate change are not immune to them.
Here is one example:
‘By far the greatest use of peat in the world is burning it for fuel, so isn’t stopping its use in our gardens really just virtue signalling?’
In such cases, you don’t need to google the actual numbers because this is a simple logical fallacy and the best way to deal with it is to substitute another example that exposes its flaws:
‘If it was true that the greatest number of wife abusers in the world is in <another country>, would it be ok to say that calling for a stop to wife beating in the UK is really just virtue signalling?’
That is obviously nonsense, but then so is the argument against stopping using peat in gardening.
There are countless examples of the use of fallacies of argument. One advocate from a think tank that denied the need for action on climate change made a statement on TV along these lines:
‘I am not a climate denier, but this latest scientific report is saying we must reach net zero by 2050, which seems to be ludicrously exact in its timing, doesn’t it?’
This is what might be termed the Fallacy of Precision. My response would be through a progressive sequence of questions:
‘You do accept that warming will increase with more emissions?’ (if not, that reveals climate science denial)
‘You do accept that more warming will cause more extreme weather events and therefore more impacts?’ (if not, that reveals climate science denial)
‘So you accept that the sooner we make cuts the greater our ability to reduce harms?’ (if not, that reveals they don’t understand that prevention is always better than cure)
So in response to this example of the Fallacy of Precision, the key argument is:
‘It is ok to get there early; I’m happy with 2050 +/- 5 years! It is not about binaries. The longer we delay, the greater the risks. 2050 is a political planning goal, and to declare it is not saying there are no risks before that date, and catastrophe after it. The impacts are already being felt, and will increase with more emissions.’
‘What about China? The UK has a tiny footprint by comparison’
My personal favourite immediate response is to take the iPhone out of my pocket and ask:
‘Where do you think this was manufactured?’ (they normally guess right, yes, China)
then follow up with
‘So how do we account for the associated carbon?’
They realise that they have to concede that it isn’t quite so simple as blaming China, but the comeback is often:
‘Yes, but population growth is a big issue isn’t it?’
I respond that I acknowledge the issue of resource depletion, but in the context of climate change, I am concerned by the idea that we should place the blame for our situation on the poorest in the world. Africa has been responsible for just 3% of emissions, yet will be hit very badly by climate change; worse than us. At this point I often get out a pen and paper and ask if they are familiar with the Oxfam Extreme Carbon Inequality report. Most are not, so I sketch out the key figure based on the report [9].
Hand sketch by Richard Erskine, based on Oxfam ‘Extreme Carbon Inequality’ report.
‘This shows that the richest 10% of the world’s population have been responsible for 50% of carbon emissions, yet the poorest 50% have only been responsible for 10% of emissions.’
This is a great conversation starter, because it can lead in many directions:
historic emissions;
funding for adaptation;
per capita versus national emissions;
resource depletion;
educating girls;
low carbon development for poorer countries;
climate justice;
and much more.
This is an area that is not awash with easy solutions, but it is a chance to challenge simplistic claims that population growth is the cause of the climate crisis, when in fact, consumption growth (propelled by fossil fuelled energy) is demonstrably the primary cause.
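The Oxfam shares can also be turned into rough per-capita numbers on that same pen and paper. A minimal sketch, treating the shares as applying to roughly 40 GtCO₂/yr of emissions and 8 billion people (a simplification, since Oxfam’s analysis covers lifestyle consumption emissions, but the contrast survives any reasonable refinement):

```python
# Per-capita footprints implied by the Oxfam income-group shares:
# top 10% of people -> 50% of emissions; bottom 50% -> 10% of emissions.
WORLD_CO2 = 40e9   # tCO2/yr
POPULATION = 8e9   # people

top10_percap = (WORLD_CO2 * 0.50) / (POPULATION * 0.10)     # ~25 tCO2/yr
bottom50_percap = (WORLD_CO2 * 0.10) / (POPULATION * 0.50)  # ~1 tCO2/yr

print(f"Top 10% by income:  {top10_percap:.0f} tCO2/yr per person")
print(f"Bottom 50%:         {bottom50_percap:.0f} tCO2/yr per person")
print(f"Ratio:              {top10_percap / bottom50_percap:.0f}x")
```

A 25-fold gap between the top decile and the bottom half is usually enough, in my experience, to move the conversation from population to consumption.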
Misunderstandings in the nature of a system
Here is one example of a claim I heard recently:
‘Blue Whales eat krill and poo 3 tonnes a day, so if we got them back to the levels in the oceans before humans decimated their numbers, we could draw down most of the carbon we emit. Problem solved’
The person involved was a huge fan of what are called ‘natural solutions’, and that is fine, as long as it isn’t used to dismiss other valid solutions (which was his intention, based on other remarks he made dismissing wind turbines etc.).
This illustrates the immediate difficulty for someone at a climate stall in a market who is no expert on carbon cycles, whales or even the total carbon emissions emitted by humanity. But interestingly, despite those apparent shortcomings, it is possible to challenge such a claim …
… by using questions back at the questioner, using the ‘little knowledge’ I shared earlier.
It is crucial that the response is not merely a counter statement. Always start with questions. Ones like:
‘I’d be interested to read more on this idea, do you have a good source?’ (if it is simply a second hand belief that has not been properly researched, they may stumble a bit here)
‘How long would it take to build up the Blue Whale population, and would it be in time to avert dangerous global warming?’ (this may elicit a response like ‘maybe 50 years’, and the follow up might be ‘do we have 50 years?’)
‘That’s interesting, but can you explain why atmospheric carbon dioxide has been so stable since the last ice age, even before we started decimating the whale population?’ (this is of course a trick question, but a valid one. The whales’ contribution to carbon cycles was there 5,000 years ago, yet carbon dioxide levels didn’t drop because of it; it was in balance)
This could lead to a co-discovery of some more information. Maybe a bit more reading on carbon cycles and so on. Maybe the conclusion will be that we need the whales back, but they won’t get us out of our current predicament.
Conclusion
These are just examples of actual encounters, but I hope they give a flavour of the approach I like to take.
Those new to climate change who want to engage friends, neighbours and others should not feel intimidated. Responding with questions to someone who expresses certainty is always a reasonable approach that everyone can learn from. If you are part of a fledgling community climate group, you can develop your confidence by working with others when running a climate stall. Learn from others who are more experienced, and then start to have a go yourself. Practice makes better (don’t be beguiled by the illusion of perfect!).
Remember, the great majority of people out there are on your side, and even those who are not manage to be polite when face to face, in person.
And try to reduce your time on Twitter. Yes, that’s you I’m talking to Richard!
Since the end of the last ice age, the level of atmospheric carbon dioxide was stable at just under 300 parts per million (ppm), but since the industrial revolution it has risen to over 400 ppm; higher than at any time in the last 3 million years. The nearly 10,000 years since the end of the last ice age have been relatively stable, and civilisation and agriculture have blossomed in this period.
Carbon cycles are just that. There are short-term cycles (like the Northern Hemisphere’s autumn and spring cycle, leading to flows of carbon into and out of the atmosphere) but also longer term ones. The longest are geological in timescale. The oceans store huge amounts of carbon in their depths, but there are chemical, physical and biological processes that mean carbon flows into and out of the atmosphere. The reason for the stability of the pre-industrial concentration in the atmosphere is precisely because a combination of these cycles has created a balance. The balance can be disrupted and changed over long periods. The current disruption is extremely fast and man-made.
Carbon dioxide is called a ‘long-lived greenhouse gas’. When humans emit an amount of it into the atmosphere, about half is absorbed in the oceans and biosphere and about half remains in the atmosphere, and because of the balancing cycles (and despite the fact that individual molecules may move back and forth on quite short timescales), the raised concentration in the atmosphere remains raised for a very long time.
I’ve answered the question ‘Is 2°C a Big Deal?’ in another essay: https://essaysconcerning.com/2021/10/14/is-2c-a-big-deal/. According to the Intergovernmental Panel on Climate Change (IPCC), the rise in global mean surface temperatures is already having impacts and every additional 0.1°C of rise has consequences, so it is now urgent to try to stay below 1.5°C, and at the very least below 2°C. They found that the difference between 1.5°C and 2°C was huge in terms of impacts; and the risks escalate if we go above 2°C. All policies and actions need to be judged on whether they fit into the narrowing window of time.
All societal and personal choices have a carbon impact of some sort, but it is important to understand the full impact of any choice, over the full life-cycle of a thing or activity. When considering how bad one thing is, it has to be weighed against the alternatives. We all have to live, to breathe, travel to work or play, etc., and so we have to consider a ‘balance of harms’ and also a ‘balance of benefits’.
I want to challenge the assumption that scaling up heat pump capacity in the UK is very hard.
In many ways this belief is symptomatic of a wider malaise in the approach to skills we have had in the UK for far too long. Maybe the crisis in energy – and particularly gas – now confronting us is the jolt we needed to do a rethink.
Scaling is only hard if we still frame the challenge in the same ways we do today – in terms of number of certificates gained through further education colleges. This is not the answer.
We need something far more like the apprenticeships of old – not those where all the money pours into the colleges, but one where the firms who are doing the real competency development through practice get a decent share of the funding.
Another implicit assumption that needs challenging is that we need to create clones of experts with very deep heat pump expertise. I don’t think that is true (except maybe in very hard or non-standard outlier cases). In all technologies, as they mature, there is an element of de-skilling that takes place.
An example is software, where modern tools alleviate a lot of the skills hitherto required in, for example, creating a web site. Although this can reframe the skills question, and quite different design skills can emerge (e.g. illustrators rather than coders).
Heat pumps have matured to the point where we are near to this point (but they still have some work to do to simplify their manuals further).
Finally, we need to scale up the number of heat pump SMEs (Small and Medium sized Enterprises). A massive strategic blunder would be to see the challenge as retraining 100,000 existing one-man-band boiler fitters/ plumbers, to turn them into 100,000 one-man-band heat pump fitters/ plumbers.
A new SME-led approach would put the emphasis on competency development and rebalancing the training budget, with more of the funding going towards the SMEs who can grow the right skills, and do this organically.
We may still need training colleges, but we have to accept that the current model is broken and not fit for purpose, certainly not for our current emergency; their role needs to be radically transformed.
If a heat pump project is broken down into its distinct roles and competencies, the challenge becomes much easier.
In what follows, I am assuming an air-source heat pump (ASHP) and a ‘wet’ heat distribution system (pipes and wall-mounted radiators), as this will apply for the overwhelming majority of homes that need to transition from gas boilers (to be ‘retrofitted’).
Meet the total UK team that would be needed to install 1,000,000 heat pumps a year by 2030 [1]:
9,000 electricians with expertise in configuring heat pumps.
4,500 assessors/ designers to assess a property, carry out heat loss calculations, and size and design the whole system (heat pump, hot water tank and radiators). This is the most critical role to ensure the overall system design performs to the efficiency expected.
45,000 plumbers required to follow the designs given to them, but not to understand heat pumps in any depth.
Britain with the help of its allies trained 100,000+ pilots in WW2 in just a few years, and many more women and men building the planes. They didn’t do that by sitting them in classrooms, trying to get them to understand aerodynamics! They got plonked into two seaters and were soon taking the controls.
We need to be honest about the malfunctioning monetised approach to technical training in the UK (actually, most ‘higher education’), and instead focus on practical skills, competency development, and real world practice / achievements. I recommend a great discussion on the issue of ‘resources not courses’ [2].
I asked a plumber who was part of the team that installed the heat pump in my house about his college training. He told me “I didn’t get much out of it. I only really learned what I was doing when I left college and started work, and it took a few years to gain my confidence”.
The individual tasks involved in assessing, designing, installing and commissioning a ‘heat pump system’ can be broken down and assigned to roles with the right skills. I have outlined the project in the notes [3].
The interesting observation is that the plumber is the role which puts in the most hours on the project (doing traditional things like bending copper pipes), but requires the least knowledge of heat pumps. They just need to follow the design handed to them. So capacity, if targeted well, can be scaled very effectively. I have included a skills table in the notes.
The assessor/ designer who was on the team that installed the heat pump in my house – let’s call her Chloe – was a physics graduate in her late 20s. She made easy work of the assessment, calculations and design, and putting together the proposal for the overall solution.
In 10 years, would it really be so hard to scale an SME-led model, including cross-trained electricians and plumbers, and developing a new career path for ‘heat pump assessor/ designers’ like Chloe?
Let’s not talk ourselves into defeat.
We just need to get smart, and organised, and fund the right things.
[1] Estimate of roles required for a typical dwelling

Role                      | Average man-days per house | Workforce for 1,000,000 installations per year (230 working days a year)
Assessor/ designer        | 1                          | 4,348
Plumber                   | 10                         | 43,478
Electrician/ configurer   | 2                          | 8,696

Note that 10 man-days per house would typically mean 2 plumbers for 5 days.
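The arithmetic behind the table is simple enough to check in a few lines of Python, using the man-day estimates above:

```python
# Workforce check: workers = installs per year x man-days per house / working days per year
INSTALLS_PER_YEAR = 1_000_000
WORKING_DAYS_PER_YEAR = 230

man_days_per_house = {
    "Assessor/ designer": 1,
    "Plumber": 10,
    "Electrician/ configurer": 2,
}

workforce = {
    role: round(INSTALLS_PER_YEAR * days / WORKING_DAYS_PER_YEAR)
    for role, days in man_days_per_house.items()
}

for role, n in workforce.items():
    print(f"{role}: {n:,}")
```

The rounded-up figures in the main text (9,000 electricians, 4,500 assessors, 45,000 plumbers) simply add a little headroom to these raw numbers.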
[2] Resources not courses
There are deep issues with teaching and training in the UK. The marketisation of education and training means that further education colleges are paid for accrediting students, not developing true competencies. There is a great discussion on this in relation to heat pumps at the BetaTalk – The Renewable Energy and Low Carbon Heating Podcast – in the episode ‘The Training Fiasco in Plumbing & Heating’. I am certainly not claiming there is an easy way of fixing the training issues in the UK; I am simply saying we can reframe the problem through better organisation and coordination of the roles and skills.
[3] Project outline
Assessment of the heat loss of the house in its given state of fabric, in order to ensure that the heat pump can deliver the peak load required during the depths of winter. This must be done room by room to ensure correctly sized radiators in every room. Other aspects to be assessed are the existing pipework, radiators, power supply and water pressure.
Design of the whole system, including the air-source heat pump (ASHP), and requirements for hot water, and radiator heat distribution. Any upgrades of radiators will be part of the design, as well as decisions on the peak flow temperature required.
Installation includes several tasks. Physical installation of the ASHP and associated kit (control system, buffer tanks, etc.). Connection to the electricity supply. Connecting the heat pump sub-system to the existing pipework, and upgrading any radiators as per the design. Then ‘balancing radiators’ to ensure optimal heat distribution.
Commissioning involves configuration of the controls (including ‘weather compensation’) to maximise the efficiency of the heat pump during all weathers, and enabling effective energy monitoring so that the customer can see how well the system performs over days, months and years; and finally, ensuring all the paperwork is completed with certification authorities such as MCS (Microgeneration Certification Scheme).
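To make the room-by-room heat loss step in the assessment concrete, here is a minimal sketch. The U-values, areas and design temperatures are invented for illustration (real assessments also account for ventilation losses and thermal bridging):

```python
# Simplified fabric-only heat loss for one room: sum of U x A x deltaT
# over the room's external surfaces. All figures are illustrative, not survey data.

def room_heat_loss_w(elements, inside_c, outside_c):
    """Sum U (W/m2K) x area (m2) x temperature difference (K), in watts."""
    delta_t = inside_c - outside_c
    return sum(u * area for u, area in elements) * delta_t

# (U-value, area) pairs for an example living room
living_room = [
    (1.6, 12.0),   # solid brick external wall
    (2.8, 3.0),    # double-glazed window
    (0.18, 15.0),  # insulated ceiling with loft above
]

# Example design temperatures: 21C inside, -3C outside on the coldest design day
loss = room_heat_loss_w(living_room, 21, -3)
print(f"Design heat loss: {loss:.0f} W")
```

Summing this over every room gives the peak load the heat pump must meet, and the per-room figures size the radiators.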
Each hands-on role can be addressed differently in terms of scaling capacity. We will need:
An assessor/ designer – one person can cover both the assessment and design roles – who needs a high level of knowledge of the overall system aspects.
A plumber who will do pipework and deal with physical kit installation, but requires only limited knowledge of heat pumps.
An electrician/ configurer with high skills in the specific heat pumps installed, and their controls.
Other roles not directly involved are management, accounts, supply chain/ store manager, sales & marketing, and these are important as in any similar business, but don’t ‘scale’ anywhere near as fast as the hands-on roles.
Here is how the hands-on roles match the stages in the project: the assessor/ designer covers assessment and design; the plumber covers the physical installation work; and the electrician/ configurer covers the electrical connection and commissioning.
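The ‘weather compensation’ mentioned under commissioning can be sketched as a simple linear curve: the flow temperature falls as the outdoor temperature rises, so the heat pump runs at the lowest (most efficient) flow temperature that still meets demand. The set-points below are assumptions for illustration; real controllers have their own curve shapes:

```python
# Illustrative weather-compensation curve (all set-points are assumptions).

def flow_temp_c(outdoor_c,
                design_outdoor_c=-3.0, design_flow_c=45.0,
                mild_outdoor_c=16.0, min_flow_c=25.0):
    """Linearly interpolate flow temperature between design-day and mild-day values."""
    if outdoor_c <= design_outdoor_c:
        return design_flow_c
    if outdoor_c >= mild_outdoor_c:
        return min_flow_c
    frac = (outdoor_c - design_outdoor_c) / (mild_outdoor_c - design_outdoor_c)
    return design_flow_c - frac * (design_flow_c - min_flow_c)

for t in (-5, 0, 7, 12, 18):
    print(f"outdoor {t:>3}C -> flow {flow_temp_c(t):.1f}C")
```

Getting this curve right at commissioning is a large part of why a well-configured system achieves the efficiency the designer expected.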
In the face of turmoil in the gas markets, it’s not surprising that multiple articles and opinion pieces have been pouring forth on fracking for gas in the UK – and calling for a delay to the transition to low carbon – from the same nexus of right-wingers (GWPF, etc.) who have spent years denying global warming, and denying that the impacts are anything to worry about (despite the latest stark warnings from the IPCC, summarised by Carbon Brief).
Not happy with denying the causes and impacts of man-made global warming, the next stop for these bad-faith actors has been to deny the solutions. Hence the stream of nonsense attacks on EVs and heat pumps, recycled year after year and month after month, with increasingly shrill voices as the adoption of these solutions begins to demonstrate traction.
Market forces guys, you should love that!
No, they will never let the science – which shows the overwhelming logic of electrification of end-use technologies – get in the way of their ideologically based opinions.
It is of course a long-running, multifaceted campaign by right-wing ‘think tanks’, such as the GWPF in the UK and the Heartland Institute in the USA, that has hitherto been successful in slowing action on climate change. The tide has turned in recent years, and they know that public opinion is not on their side, but that won’t stop them finding opportunities to muddy the waters.
And we are in the midst of just such an opportunity, and you can imagine them thinking:
I know! Let’s exploit the Ukrainian tragedy and crisis in gas markets – and anxieties in UK society – to double down on anti-renewables, and demand more pro-fossil fuel exploration; especially fracking.
So their latest stunt is to coordinate articles in the Telegraph etc. and a letter from the usual suspects in parliament; some affiliated or cosying up to those very same denialist right wing ‘think tanks’.
For those of us that are genuinely concerned about UK energy security and resilience, and a greener future that will make us more resilient in every way – food security, conserving nature, and much more – the question is: what to do?
Keep calm and carry on is my main message.
The path to net zero will continue to be bumpy. Getting off our addiction to fossil fuels has withdrawal symptoms. A serious fight-back and disinformation war from vested interests was inevitable. They see action on climate change as a threat to their illusory vision of an unfettered ‘free market’; so regulations to address harms to the environment, nature and human health are anathema to them. Hence Trump’s attempts to eviscerate the U.S. Environmental Protection Agency.
The good news is that only a dwindling segment of the population are ‘dismissives’ (to use the nomenclature of the Six Americas), making up just 9% of the US population.
Similarly in the UK, there is a majority who want action on climate change. The latest UK Public Attitudes Tracker (BEIS, Autumn 2021) shows that 85% of UK adults were concerned about climate change, and 87% supported renewables, whereas only 17% supported fracking.
You can understand why the likes of Steve Baker MP, Matt Ridley, et al are becoming increasingly desperate and alarmist. Expect more heat, and even less light, from the Net Zero Scrutiny Group, GWPF, etc., and their various enablers in the media.
Give the noises off a rest guys, it ain’t working.
It seems bizarre that the fate of the world might hinge on this question, on the psychological state of one man, but this is where we are.
We are told that a NATO secured ‘no fly’ zone over Ukraine is not possible, because it might trigger World War III, and ultimately Mutually Assured Destruction (MAD) with an exchange of strategic nuclear weapons.
We know that nuclear weapons do not prevent terrorism, civil wars or conventional ones, and ‘great power’ proxy wars have been a scourge on the world since 1945.
Near misses between nuclear powers have been far more frequent than many realise. As Sasan Aglani states:
“A recent Chatham House report documents 13 instances between 1962 and 2002 where nuclear weapons were almost inadvertently used due to miscalculation, miscommunication, or technical errors. What prevented their use on many of these occasions was the ‘human judgement factor’ – intervention of individuals who, based on prudent assessment of situations and against protocol, either refused to authorise a nuclear strike or relay information that would likely have led to the use of nuclear weapons.”
And in the latest moment of high risk, NATO’s nuclear weapons haven’t restrained Putin; far from it.
In a sense, they have enabled him.
Nuclear deterrence is usually described in the simplistic terms parroted by politicians, and as the UK’s Ministry of Defence describes:
“Potential aggressors know that the costs of attacking the UK, or our NATO allies, could far outweigh any benefit they could hope to achieve”.
But this was the obsolete MAD strategy of the 1950s, not the more complex picture that emerged from the 1960s onwards: flexible response.
Both US and Russian military strategists were unhappy with a nuclear force that was literally incredible. They needed some way to make MAD credible, that is, to make nuclear weapons usable.
The answer was a ladder of response: the threat of battlefield nuclear weapons would cause an opposing large conventional force to think again. If that failed to deter, then medium range nuclear weapons would do the trick. The ultimate ‘deterrent’ would be strategic intercontinental multiple warhead missiles.
But this is the kind of theoretical scheme dreamt up by wonks in think tanks. It can be tested in war games but not in practice, and certainly not with Putin in the room, playing the game.
It takes no account of accidents, miscalculation or, dare I say, one mad man who refuses to act logically.
If Putin ordered a battlefield nuclear weapon attack on a Ukrainian city that refused to submit, what would NATO do then?
Would Putin go this far, risking that “it might trigger World War III”?
He seems to like taking risks, crossing red lines and getting away with it.
Each time, the world tutted, and looked away, even though the plan was already pretty clear. His intentions towards Ukraine have hardly been a secret. He has given many speeches on the state of the west (which have endeared him to the religious far right in the west), and the need to rebuild a greater Russia.
He clearly wants to undermine western democracies and any countries in Russia’s orbit aspiring to join them.
Putin has always been testing, probing, and seeing what lines can be crossed.
Is Putin mad?
The problem for the west is that he only needs to appear to be mad to get away with it, and so far he’s doing a pretty good job at that.
We must hope against hope for China to restrain him, for a palace revolt, or anything to restrain his worst impulses.
And when we are through this, in however many years it takes, we must finally stop this irrational belief that nuclear weapons make us secure, and make us safer.
Post-Putin, the world will have been warned again of its folly in trusting in these genocidal weapons.
This essay is a personal piece with a personal viewpoint, as I am just an ordinary member of SGR these days, but I continue to support their great work.
Clearing out over 40 years of files can throw up so many surprises and emotions. This box contained files from the 1980s when I was in my spare time research coordinator for SANA (Scientists Against Nuclear Arms).
One file contained preparatory work we did for a local authority preparing a report on the likely impacts of a nuclear attack (countering the whitewashing ‘Protect and Survive’ from the UK’s Home Office). I wrote a program that ran on an Amstrad 8256 to do the maths on casualties.
We helped several local authorities to speak truth to the powers that were telling people to whitewash their windows! The group became a kind of research group for the peace movement, often working quietly from bedrooms and offices, trying to make a difference.
Meanwhile those in the front line – those brilliant Greenham Women – faced the reaction of those who turned fear into hate, stirred by the same media outlets who today pour scorn on those demanding action.
Where is the statue to those brave ladies I muse, as I flick through another file full of newspaper cuttings?
Another file held readings on psychological responses to the nuclear threat, which I had summarised in a one-pager. Some of the insights seem universal. Denial is a complex condition, and I always cringe when those in denial on climate change feel they are being linked to holocaust denial.
The truth is that most of us are in denial much of the time, because we’d go crazy otherwise. But there are consequences to this.
In my life I stepped back from active work on the nuclear issue to focus on family and career. Burnt out you might say, and needing a break.
Only in my retirement did I wake up to climate change, after reading Naomi Klein’s ‘This Changes Everything’, and then hearing her speak in Cheltenham. That was quite the kick up the proverbial!
The nuclear threat has not gone away and Russia is now escalating the risks. C.E. Osgood said:
“the policy of mutual deterrence includes no provision for its own resolution”.
The risks are pretty binary.
Climate change is different, despite what some suggest – its scales of damage creep past us and towards us – but the psychology has common threads.
People ask why decent cultured Germans did not stop the Nazis. Their denial was much more relevant to our current situation than the denial of neo-nazis regarding the holocaust, or the denial of dangerous man-made global warming by the self-appointed ‘contrarians’ who control the right wing media.
It is not that people are intellectually ‘in denial’, any more than a smoker who knows very well the health risks. It is the emotionally centred denial that puts off action.
People are worried about climate change and want action taken – overwhelmingly they do, as studies clearly show – but they have been unable to get beyond that numbing inability to turn wishes into actions. It all seems too much.
As Sandman and Valenti said in relation to the nuclear threat: People are neither apathetic nor actively terrified, but they are psychologically numbed.
The “don’t make a fuss” narrative is alive and well, and soon to be brought into law by our Home Secretary Priti Patel.
But those who did and do make a fuss – The suffragettes, Greenham Women and XR – had the same energy, the same moral outrage which we too often keep bottled up. It hasn’t escaped my notice that it is often women who are the first to step forward, to speak up.
Being polite and “reasonable” can do a lot but rarely is enough to shift powerful forces using propaganda to manipulate public sentiment (to aid in the process of mass denial).
The great psychologist Dorothy Rowe said, in relation to the Bomb, that we need to convert anger and depression into hope and action. Protest is never enough.
E.L. Long wrote in 1950, in relation to the nuclear threat:
“scientists had overestimated … power of their message to reform a culture that has ignored other seers and prophets for many ages”.
Only positive visions and futures can change the psychology of mass denial on climate change. Nuclear threats are oddly more intractable, but ought to be simpler, to resolve.
On climate change I veer between despair and optimism, but as many wise heads have said, hope is important, and much easier to sustain if coupled with action: engaging with the community, local councillors, national politicians, businesses and the rest.
As Katharine Hayhoe replies when asked “what is the first thing I should do about climate change?”
“talk about it!”
with family, friends, colleagues.
Those forces who want to delay action are happy to have a psychologically numbed populace. Talk and engagement are a great antidote. Telling your councillors and parliamentary candidates that your vote depends on them demonstrating they really mean action is another. We all have agency in some or many forms.
Now, I must get back to clearing more of those boxes and piles of papers, no doubt uncovering more memories, triggering more musings.
Many MPs have tonight voted against measures to protect public health.
A majority of these did so in the name of freedom, in denial that the fast spreading Omicron variant of Covid-19 is any worse than seasonal flu. Freedom trumps all, according to these ‘contrarians’. The market is king, and the market solves all problems, so the main job of Government is to enable business to do its thing.
As is well documented, ‘doubt is our product’ is the motto that tobacco executives secretly adopted in the face of the unequivocal risk from smoking revealed by scientists, and is now the reflex modus operandi of the anti-science contrarians working assiduously to undermine experts.
It is always the same people. Whether it be smoking causing lung cancer, CFCs causing a hole in the ozone layer, action on climate change, or health measures during a pandemic, those anti-science contrarians will be voting against any regulations to protect people.
Of course the Catch-22 for those working on actions to avoid the worst is that the contrarians will use any such success as evidence that the worst projections were an exaggeration in the first place!
You fixed the roof – so all those dire warnings of an impending leak in the roof if it wasn’t retiled were just scare-mongering.
You banned CFCs, and the ozone layer is repairing – as we said there would be no dangerous levels of UV radiation.
You enacted measures to reduce human contact during a pandemic – We told you, the NHS would not be brought to its knees.
You set out actions needed to avoid dangerous man-made climate change – You doomers, the dangers are exaggerated and we should wait to see who is right.
To prove the point, the public officials and experts would have to not act, to let things rip, so that disaster then strikes, and they can then say ‘told you so!’ – but of course they do act.
But the contrarians rarely totally prevent action being taken – even with the worst Governments – but they can effectively delay it. They are good at that.
Many deaths that could have been avoided result from these delays. There is no freedom for them or their bereaved families.
insulation [both simple (loft) and deeper (e.g. external wall)], draught-proofing, moisture management, and last but not least, a heat pump.
a householder will in most cases need to make choices: don’t let the perfect be the enemy of the good
two identical homes may come to different decisions – there is no single ‘right’ answer
ignore anyone who says “you need deep retrofit before considering a heat pump” (the essay includes fully referenced debunking of this assertion, but it is widely believed and repeated ad nauseam)
Be clear about your priorities (comfort, costs, climate)
comfort is important, but it is subjective. MCS (Microgeneration Certification Scheme) has standards for target temperature in homes (21°C in living spaces, 18°C for halls and bedrooms). The term ‘comfort’ does not necessarily justify exceeding this standard.
capital costs and running costs both need to be considered – volatile and escalating gas prices are a major issue, whereas electricity can come from many sources (wind, solar, nuclear, tidal, etc.), so is future-proofed.
if climate is your priority, be aware that timing is key, and the UK and other countries need to decarbonise heating, transport, etc. by 2050
in terms of domestic heating, getting off gas is the single best thing you can do, and because heat pumps are so efficient, they deliver the greatest carbon savings per capital investment of all retrofit measures by a very large margin, without necessarily an increase in running costs!
Maybe don’t rip out a NEW kitchen or NEW gas boiler
so if you spent £20,000 on a new kitchen 5 years ago, and are now told that the back wall needs insulating, and can’t be done externally for various reasons, maybe this option is not in play.
if you have a new gas boiler, check it is operating at optimal efficiency (that it is condensing and is running at the lowest possible flow temperature to meet heat demand), thus reducing bills while maintaining comfort; and maybe deferring the decision to switch to a heat pump. The Heating Hub offer ideas and support on optimising existing gas boilers, along with many other topics.
Decide on budgets/timescales
even with grants, household expenditure may be highly constrained
consider the disruption as well as costs of different measures, and a realistic plan
fabric measures can take several (or many) years to complete (when living with the work).
decide on maximum budget and timescale for all measures
Do as much fabric as budget allows
be aware that deep (fabric) retrofit could exceed the cost of a heat pump by a factor of 3 or 4
prioritise the “must do” ‘bangs for bucks’ measures such as draught proofing and loft insulation that are relatively cheap and with very high payback
going deeper is where the householder must make a balanced (dare I say “pragmatic”) decision.
Leave some money in budget for an air-source heat pump (ASHP), if you want one
ignore myths like “heat pumps can’t heat old buildings” or “they don’t work when it’s cold” (see here)
since an ASHP is much more affordable than the alternatives (ground- or water-source), it will be the default heat pump option (except for those in flats, which may alternatively be connected to a district heating system, itself possibly ‘powered’ by a commercial-scale water-source heat pump).
if you are not planning ‘deep retrofit’ there are limited risks from modest ‘oversizing’ of an ASHP if installed before all insulation measures are complete (as a modern ASHP can already handle seasonal variations in demand); but discuss with expert installer.
you can get an ASHP early in your retrofit journey, if climate is your priority (and increasingly, running costs also); with no regrets!
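To illustrate the ‘greatest carbon savings per capital investment’ point above, here is a rough worked example. Every figure is an assumption chosen for illustration, not measured data for any particular home:

```python
# Rough sketch of carbon saved by swapping a gas boiler for a heat pump.
# All numbers are illustrative assumptions, not real survey or tariff data.

HEAT_DEMAND_KWH = 12_000        # annual heat demand of an example home
GAS_CO2_KG_PER_KWH = 0.183      # emissions per kWh of gas burned
GRID_CO2_KG_PER_KWH = 0.2       # assumed grid electricity intensity
BOILER_EFFICIENCY = 0.85        # assumed seasonal boiler efficiency
HEAT_PUMP_COP = 3.5             # assumed seasonal coefficient of performance

boiler_co2 = HEAT_DEMAND_KWH / BOILER_EFFICIENCY * GAS_CO2_KG_PER_KWH
heat_pump_co2 = HEAT_DEMAND_KWH / HEAT_PUMP_COP * GRID_CO2_KG_PER_KWH
saving_kg = boiler_co2 - heat_pump_co2

capital_cost_gbp = 10_000       # assumed installed cost of the ASHP
print(f"Annual saving: {saving_kg:.0f} kg CO2 "
      f"({saving_kg / capital_cost_gbp * 1000:.0f} kg per £1,000 of capital)")
```

With these assumptions the saving is nearly 2 tonnes of CO2 a year; and because grid electricity gets cleaner every year, the heat pump’s advantage only grows over the life of the installation.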
Longer read:
If you are confused about what to do about retrofit, you are probably not alone. There is so much mixed and conflicting messaging. Often statements are made in the media that are untrue and go unchallenged.
Some experts say we need to insulate our homes so well they will hardly need any heating! Others say we need to get off gas as fast as possible by installing heat pumps.
Who is right?
Part of the confusion is that commentators can have different objectives in mind when expressing their opinions:
To reduce household bills;
To improve comfort;
To reduce reliance on gas;
To lower risks to future bills, from volatile gas markets;
To reduce the carbon footprint of heating.
Or some combination of these. But these assumptions are often not made clear, and homeowners can be led down different paths depending on who they talk to.
Now, in the face of the climate emergency, everyone is saying that the last of these is something they care deeply about, but the pathway to getting to net zero in heating is something that is hotly debated.
We don’t have much time to get this right, and as Voltaire once noted, the best should not be the enemy of the good. We need a pragmatic way forward.
Energy Performance Certificates
Householders will often be further confused when they look at the Energy Performance Certificate (EPC) of their home, or of one they want to buy. EPCs are increasingly seen as unfit for purpose in the effort to decarbonise heating. The Country Land and Business Association (CLA) stated (as quoted in an Historic England report from 2018):
“The EPC confounds cost-effectiveness, energy efficiency and environmental performance, giving an inadequate estimate of all three. … it must focus solely on one of .. [to] be an effective baseline for policy interventions”
An EPC in its current form has never recommended a heat pump as a primary measure, because of in-built biases against heat pumps. If we really want to encourage ‘whole house’ retrofit that includes a sufficiency of insulation work and displaces gas (or oil or LPG) boilers with heat pumps, we will need instruments that are fit for purpose (see Update A).
So what to do?
Householders will naturally ask: How much will it cost? How fast can it be done? Who can I get to advise me? What is the carbon reduction? Who can do the work to a good standard?
Is ‘deep retrofit’ required?
The Committee on Climate Change (CCC) in their 6th Carbon Budget stated (based on very detailed modelling of scenarios, costs and risks):
‘By 2030 37% of public and commercial heat demand is met by low-carbon sources. Of this low-carbon heat demand 65% is met by heat pumps, 32% district heating and 3% biomass. By 2050 all heat demand is met by low-carbon sources of which 52% is heat pumps, 42% is district heat, 5% is hydrogen boilers and around 1% is new direct electric heating.’
for their ‘balanced pathway’, and they did not assume deep levels of retrofit (p.113):
‘Energy efficiency and behavioural measures in our Balanced Pathway deliver a 12% reduction in heat demand to 2050’,
which implies quite modest fabric retrofit. This, on average, requires an estimated budget (see p. 297) of just £10,000 per household – far below the estimated ‘deep retrofit’ budget of nearly £40,000 [1].
The CCC are clearly working on the basis of pragmatic or sufficient levels of insulation and other fabric measures, not ‘deep’ retrofit.
The Retrofit Academy is devoted to training to improve the quality of assessments and implementation of ‘fabric’ measures (insulation, air quality, etc.), which is to be applauded. It is however concerning that they essentially marginalise heat pumps [2]:
“Deep extensive retrofit and fabric first approach needs to be the main focus of reducing carbon emissions before we will be able to move to low carbon heating technologies 100%”.
There is clearly a problem here, as this is not an isolated opinion.
The ‘retrofit community’ generally have established an article of faith that ‘deep retrofit’ is essential. This is a belief that has very deep roots and predates concerns about the climate emergency. Key organisations in the public and private sector promote this belief.
Their motivation is to create greater comfort in homes and to lower heating bills, and who can argue with this?
The problem is that it isn’t a realistic strategy for reaching net zero in the fastest time possible [3].
The benefits in financial terms for householders do not favour a deep retrofit approach [4], though the evidence suggests that buyers do value heat pumps [5].
The Retrofit Academy justify their position on heat pumps based on the belief that the grid cannot cope.
This is the same kind of argument that is often used for why we can’t adopt Electric Vehicles (EVs): because there aren’t enough charging points. On that basis we’d never have replaced horse-drawn carriages with petrol cars, or indeed any technology that displaces an old technology. In all such cases, the infrastructure is developed in parallel with the adoption of the technology in use. You don’t wait till you have a fully developed charging network and beefed-up electricity grid (particularly at its periphery) before you start selling EVs.
The electrification of much of our energy use is an inevitable strategic transformation of the energy system for many reasons, not least of which is the end-use efficiency improvements that technologies like EVs and heat pumps deliver. The other strategic game changer is that the end-use of energy does not care where the electricity comes from: a wind farm in the North Sea; the solar PV on a householder’s roof; a community solar scheme; a nuclear power station; or even, fusion energy (if it ever becomes a commercial reality). Electrification completely future proofs our energy system (even those parts of the economy like Aviation that need ‘chemistry’ to decarbonise, can get synthetic fuels from renewable electricity).
As for the grid, the issue has been overstated. There will be some strengthening of the grid required, but a whole host of measures mitigate peaks in demand, including energy storage (at multiple scales), demand shifting, smart metering, etc. These will ensure that the grid can readily cope with future demand. No one is expecting a 100% switch to heat pumps overnight, any more than petrol cars replaced horse-drawn carriages overnight. It is a multi-track transformation of energy generation, distribution and use. Local generation can have a remarkable impact on the scaling up of renewables, as discussed here.
A Net Zero Toolkit for Retrofit
Retrofit assessors need to take an holistic and pragmatic view of the problem of decarbonising heating.
The ‘Net Zero Toolkit’ [1] is an encouraging document because it takes an approach which is very much along these lines. This document reiterates what PAS2035 is trying to achieve:
‘PAS 2035 follows two core principles:
A ‘fabric first’ approach to reduce the heat demand of a building as much as possible and to ensure newly airtight homes are well ventilated and avoid issues with damp and humidity.
A ‘whole house approach to retrofit’ to ensure retrofit plans for homes consider improvements to the fabric, services and renewable energy generation in a coherent way to minimise both risks and carbon emissions.‘
In other words, we need to consider fabric measures and getting off gas (or other fossil fuels) in parallel.
It also takes a ‘risk’ based approach, recommending that assessors consider the possible hurdles not only the benefits of different courses of action.
For a 90m² home (the average floor area for UK houses) the ‘Net Zero Toolkit’ provides costings for both ‘shallow’ and ‘deep’ retrofit. Including all the potential measures, the total comes to £14,770 for ‘shallow’ and £54,220 for ‘deep’ retrofit. But a heat pump is only included in the ‘deep’ retrofit case, so this is still pursuing the view that deep fabric measures are required before including a heat pump.
Leaving heat pumps till later, after the retrofit budget has potentially been blown on fabric measures, is not the answer. So while the ‘Net Zero Toolkit’ is a great improvement on the apparent Retrofit Academy position, it could go further.
In terms of actual measures recommended, I feel it still falls short of recognising that heat pumps need to be included much earlier in the conversation.
If we include only those measures related to ‘fabric’ (i.e. exclude heating systems and solar energy) the costs are reduced to £10,970 and £38,720, respectively.
How many owners of a 90m² home have £38,000 to spend, and still have money and appetite left over to do the heat pump project?
‘Fabric first’ can easily become ‘Fabric only’ on this path.
We still have a lack of recognition of the urgency of getting off gas.
What does the Government say?
The Department for Business, Energy and Industrial Strategy (BEIS), in a recent study, reports findings that completely contradict the position of the Retrofit Academy. BEIS conclude:
‘This project shows that Great Britain’s homes can convert to electric heating at a cost far lower than the accepted wisdom. This can be achieved with no threat to comfort, and greenhouse gas emissions will fall very dramatically as a result.’
In answer to the question on what should be ‘the balance of heating technologies to insulation measures’ they conclude:
‘The work focused on total costs of ownership over 15 years. For most house types and most electric heating systems, the cost-optimal packages of measures have very limited fabric improvements – most commonly just draught-sealing and top-up loft insulation. High-cost improvements, like internal or external wall insulation, hardly ever repay the capital costs over 15 years.’
This is in part why the Government and Climate Change Committee are following a pragmatic approach and see a combination of heat pumps and district heating as cornerstones of heating decarbonisation.
Cost-effectiveness of fabric/ renovation measures to deal with peak heat demand
This essay is focused on decision making at the householder level, not at the national system level, but since some concerns have been raised regarding peak (electricity) demand for space heating in winter, I have added this section to look at the cost dimension. Research published since I first wrote this essay analyses the relative cost effectiveness of fabric measures in dealing with peak heat demand at a national level [8]. The paper says:
“Geographically, the amount of saved space heat differs strongly between countries (see figure 8). The strength of building renovation depends on the interplay between the costs of refurbishment and those for energy supply during the heating season. … Countries with a large share of wind generation, such as Great Britain, Denmark or Portugal, have cheaper electricity in winter and therefore a lower [requirement for] renovation as a result.”
In the UK, assuming the distribution grid is allowed to strengthen (why wouldn’t it be?), but conservatively taking the transmission grid as it is today, only a 10% reduction in heat demand using renovation/fabric measures is cost effective. This is a surprising result, but it arises from the UK’s very significant wind assets and future potential, which correlate well with peak heat demand. It is similar to the 12% projected by the UK’s Climate Change Committee, cited earlier.
I intend to write a separate essay ‘Peak Anxiety’, exploring the national system issue of peak electricity demand. Now I’ll return to the householder perspective.
Why heat pumps must be considered at the start of a retrofit conversation
If we focus on avoiding dangerous global warming, the single biggest thing a householder can do to reduce their carbon footprint is to install a heat pump.
Yes, it must be a fair transition and poorer families need help with grants or other measures to switch away from fossil fuels, but the direction of travel is clear.
I previously illustrated this (see here), using data from the Energy Saving Trust, plotting the capital cost of different measures versus the carbon saving of those measures per year. I include this graphic below.
Air-Source Heat Pumps (ASHPs) are now so efficient that they compete very favourably with Ground-Source Heat Pumps (GSHPs), and at half the project cost, so we focus on ASHPs, which are likely to dominate the market [6].
An ASHP is the single best way for a householder to reduce their carbon footprint, by a long way.
A retrofit assessor may say,
‘Yes, but we have to consider comfort too. That bay window is poorly insulated so, whether it is a gas boiler or heat pump heating the home, sitting by the window will feel cool and only fabric measures can fix that’.
This is true and a householder needs to express their requirements clearly, and be presented with the options and costs. They can then judge which measures they ‘value’, in terms of the different criteria – comfort, capital costs, running costs and carbon reduction.
Different people with exactly the same situation may arrive at different conclusions.
But if they say that carbon saving is their number one priority, and secondly, they’d like to keep running costs similar, then a heat pump and modest fabric measures is an option that will score extremely well (or should do, if the assessment tools are fit for purpose).
There is another reason why we shouldn’t put all our eggs in the basket of insulation and other fabric measures. Recent research suggests that following such measures, there is a rebound effect and householders make behavioural changes (such as less clothing and higher thermostat settings) that can cancel out the carbon savings (see Update D).
Maybe, instead of the mantra ‘Fabric first’, we need ‘Efficiency first’, because it is that which delivers lower carbon emissions.
How do we deal with hard to treat homes?
The conversation often centres on old leaky homes, of which the UK famously has many. The Building Research Establishment (BRE) estimated a while ago that the UK had over 10 million ‘hard to treat’ homes (and there are nearly 30 million homes with gas boilers in the UK). About half of these buildings (about 5 million) were built before 1900.
These 10 million are often but not exclusively larger homes with high gas heating bills. So addressing the needs of this 1/3rd of the retrofit challenge would make a disproportionately large contribution to decarbonising heating in the UK.
But whether it is Roger Harrabin reporting on the BBC, or many others who count themselves as ‘green’, we hear it stated repeatedly (without reference to evidence) that householders must have high levels of retrofit before even considering a heat pump.
Some will even repeat the myth that you cannot heat old ‘leaky’ buildings with a heat pump. This is one of the myths that is addressed here.
Heat pump scepticism is wrong for several reasons:
you can heat any building with a heat pump that can be heated with a gas boiler (you just need to size the heat pump and the emitters/ radiators correctly);
with the efficiency of modern heat pumps and quite modest insulation, a heat pump can match or even beat the running costs of the boiler it is replacing, as shown here and here;
because the electricity grid is getting greener and greener every year, once a heat pump is installed the heating gets greener and greener with every year that follows (as illustrated in the graphic earlier).
But the questions remain: how do we deal with hard to treat homes? How much insulation do we do before we get rid of the old gas boiler?
The heat demand of a building is an important measure of its efficiency, but how do you compare the thermal efficiency of a large 6-bedroom detached house with a 3-bedroom semi? The fair way to do it is to divide the heat demand by the floor area of the house, which gives a measure – the heat demand per unit area – that is a universal measure of the ‘efficiency’ of the building’s fabric.
In the UK, the average home has an annual heat demand, using this measure, of about 130 kilowatt-hours (thermal energy) per square metre per annum (or 130 kWh/m².a for short). A new-build, highly efficient ‘PassivHaus’ requires only 15 kWh/m².a. The Association of Environmentally Conscious Builders (AECB) have a target of 50 kWh/m².a when carrying out a (fabric) retrofit project, but they will relax this (e.g. for a Listed Building) to 100 kWh/m².a in some cases, because some measures (like wall insulation) may prove impractical or impossible to include.
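As a worked illustration of this measure (the two annual heat demands below are hypothetical, chosen so that both homes sit at the UK average):

```python
# Heat demand per unit floor area: a fair way to compare buildings of
# different sizes. The demand figures are illustrative, not measured data.

def heat_intensity(annual_heat_demand_kwh, floor_area_m2):
    """Return heat demand in kWh per m2 per annum."""
    return annual_heat_demand_kwh / floor_area_m2

# A large 6-bed detached house vs a 3-bed semi (hypothetical demands):
detached = heat_intensity(26_000, 200)
semi = heat_intensity(13_000, 100)

print(f"detached: {detached:.0f} kWh/m2.a, semi: {semi:.0f} kWh/m2.a")
# Both sit at the UK average of ~130 kWh/m2.a, so their fabric is equally
# 'efficient' despite very different total bills; a PassivHaus would be
# ~15, and the AECB retrofit target is 50 (relaxed to 100 for hard cases).
```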
The implementation of retrofit on old buildings needs to be done with considerable experience and care, as a report by the Sustainable Traditional Buildings Alliance (STBA), in part sponsored by Historic England, explored.
Let’s start with a 90-100m² home with solid walls that is poorly insulated and ‘hard to treat’, and requires nearly 200 kWh/m².a to heat it currently with its gas boiler.
The following considers a sequence of options (A-E) for when to install an Air-Source Heat Pump (ASHP), alongside increasing levels of ‘fabric’ retrofit measures. As we move from left to right on the bottom axis, fabric measures are added that reduce the heat demand of the building. That in turn reduces the cost of the heat pump project.
Because we still need hot water and some heating, the drop in the cost of the heat pump project is less dramatic than the rise in the cost of the fabric measures, and there will be a cross-over point where the cumulative cost of the fabric measures is equal to the cost of installing a heat pump (at that level of building efficiency). Let’s run through the options.
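The cross-over idea can be sketched numerically; the cost figures below are purely illustrative and are not taken from the Net Zero Toolkit:

```python
# Illustrative only: cumulative fabric cost rises steeply with each extra
# measure, while the heat pump project cost falls only modestly, because
# hot water and some space heating are always still needed.
fabric_steps = [0, 2_000, 6_000, 14_000, 30_000]        # cumulative fabric spend
heat_pump = [13_000, 12_000, 11_000, 10_000, 9_500]     # HP cost at each stage

for f, hp in zip(fabric_steps, heat_pump):
    marker = "  <- fabric now costs more than the heat pump" if f > hp else ""
    print(f"fabric: {f:>7,}  heat pump: {hp:>7,}{marker}")
```

On these (made-up) numbers the cross-over arrives after only a few fabric measures, which is the essay’s point: the fabric curve climbs much faster than the heat pump curve falls.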
A. Doing nothing on fabric or gas means bills will escalate
This is the start – the ‘do nothing’ option.
There is a serious risk that such a home will have lower resale value in the future, and will of course not contribute to lowering the carbon footprint of the home.
By starting to think about retrofit (including getting off gas), home owners might find themselves doing things they have put off for years, like clearing the loft (ready for insulation), and fixing that leaky front door.
B. Getting off gas early prioritises planet, without bills needing to rise
In this case, the householder installs an ASHP early in their retrofit journey, alongside limited fabric measures, such as loft insulation to a modern standard, and seals/brushes for doors and sash windows to deal with draughts.
It may be a surprise to people that getting off gas early prioritises planet, without bills needing to rise. The reasons for this are:
A 25-year-old, 70% efficient gas boiler wastes energy, so the net cost of a unit of heat delivered is greater than the nominal unit price of 3p per kWh of gas (July 2021): 3p/0.7 = 4.3p per kWh of heat delivered.
The nominal cost of electricity to run the heat pump (at July 2021 rates) is 15p per kWh. Taking a seasonal performance of 300% for a modern, properly installed heat pump, the householder would be paying 15p/3.0 = 5p per kWh of heat delivered.
Assuming that the limited measures taken reduce heat demand by 20%, the householder would effectively be paying 4p per kWh instead, which is lower than the old unit cost (4.3p).
As levies on electricity move over to gas in coming years (as the Government has indicated), the running costs of the ASHP will lower further (and will rise for the gas).
As the electricity grid gets greener and greener, so does the heat pump, without the householder having to do anything, so the carbon reductions delivered improve year on year.
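The arithmetic in the points above can be checked in a few lines, using the July 2021 unit prices quoted in the text (the 20% heat demand reduction is the illustrative assumption made above):

```python
# Cost per kWh of delivered heat: old gas boiler vs ASHP (July 2021 prices).
gas_price_per_kwh = 0.03    # 3p per kWh of gas
boiler_efficiency = 0.70    # 25-year-old boiler
elec_price_per_kwh = 0.15   # 15p per kWh of electricity
scop = 3.0                  # seasonal performance of a well-installed heat pump

gas_heat_cost = gas_price_per_kwh / boiler_efficiency   # ~4.3p per kWh of heat
hp_heat_cost = elec_price_per_kwh / scop                # 5.0p per kWh of heat

# Limited fabric measures cut heat demand by 20%, so for the same house the
# bill corresponds to an effective 4p per kWh of the old heat demand:
hp_effective = hp_heat_cost * 0.80

print(f"gas: {gas_heat_cost*100:.1f}p  heat pump: {hp_heat_cost*100:.1f}p  "
      f"with 20% demand cut: {hp_effective*100:.1f}p")
```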
It is crucial that the house has a proper heat loss assessment done, and the heat pump is sized correctly, and that radiators are also assessed and upgraded where necessary on a room by room basis.
This refutes the belief that early adoption of a heat pump is a no-go area for hard to treat homes.
C. Further pragmatic fabric measures lower heat demand and bills
In this case a householder installs an ASHP and, in addition to the limited fabric measures – loft insulation to a modern standard, and seals/brushes for doors and sash windows to deal with draughts – installs:
pragmatic window measures (replacing some windows with double or triple glazing but prioritising lower cost secondary glazing, particularly in conservation areas), and;
for one or two rooms, additional measures for cold walls or floors where possible, for comfort reasons if nothing else, and;
might add localised mechanical ventilation and heat recovery (MVHR) for a specific room or two (kitchen and shower) to deal with condensation issues.
Alongside reducing bills, these fabric measures can deliver improved comfort (such as in key problem areas like bay windows).
D. More fabric measures reduce bills, but can delay getting off gas
The householder installs an ASHP late after extensive and often disruptive retrofit measures to many rooms, including double or triple glazed new windows throughout and insulation for some floors and walls, and extensive MVHR (Mechanical Ventilation and Heat Recovery) recommended to deal with moisture that would otherwise be trapped.
Older buildings are used to ‘breathing’ and that prevents the build up of moisture. As we greatly reduce leaks in these buildings, and add insulation, there are significant risks of harm to the traditional underlying fabric of the building due to moisture. Historic England has documented many cases where harm has been done in old buildings, and they recommend the use of breathable insulation materials to minimise such risks. Moisture can give rise to health issues if mould results.
That is why PAS2035/PAS2030 aims to deliver improved skills in doing more extensive fabric retrofit. I am concerned that the skills required to effectively assess and implement these more extensive measures, and the costs, will deplete a house owner’s ‘retrofit budget’ to the extent that there is no money left to switch off gas and install a heat pump.
This is also problematic because a householder will rarely implement fabric measures in a single short-term project. In practice it can take many years to implement a wide range of measures; especially where householders are living with the work.
Often, debates on retrofit fail to take account of these real-world issues of limited budgets, extended timelines, and risks of poor delivery of deeper retrofit. Conversely, the challenges of fitting heat pumps are overstated by comparison. We need a much better balance in these debates.
E. Further fabric measures very difficult to justify
A householder installs an ASHP very late after an extensive and disruptive building project:
Removing problematic fabric and replacing with energy efficient materials for walls (internal or external), floors and windows;
Possibly going below ground floor level at walls to eliminate thermal bridging issues with floor insulation, and;
Full external cladding of building, or internal wall insulation;
Installing MVHR throughout the house.
These measures would greatly increase comfort and minimise bills. Heating requirements theoretically become minimal (although hot water would still be required, and specialised heat pumps dedicated to hot water are available).
However, in practice, such levels of fabric retrofit are not achievable for hard-to-treat homes at reasonable levels of cost and disruption. And for Britain’s housing stock, this is not achievable on a timescale commensurate with the climate emergency. This point seems to be lost on advocates for deep retrofit.
People talk about the lack of heat pump engineers, but I would argue that training these up is a relatively simple task when compared with the breadth of knowledge required to deal with a large range of historic and current building materials and how to use them in a way that avoids creating problems.
Pragmatic ‘save the planet’ Retrofit
So these are the householder options:
A) Doing nothing on fabric or gas means bills will escalate;
B) Getting off gas early prioritises planet, without bills needing to rise;
C) Further pragmatic fabric measures lower heat demand and bills;
D) More fabric measures reduce bills, but can delay getting off gas;
E) Further fabric measures very difficult to justify.
And for me, concerned about the urgency to limit dangerous global warming, options B or C are the pragmatic way forward in many cases.
‘Insulate Britain! Yes, but by how much?’ house owners are asking.
‘By enough’ is the answer, and far less than is the received wisdom of those calling for ‘deep retrofit’.
It certainly needs to be at a level that leaves enough in the budget to get off burning fossil fuels. For many or most householders, that means installing an Air-Source Heat Pump [7].
Anything less is not treating the climate emergency with the urgency it requires.
Postscript
A slew of reports recently (see Updates D-F) have backed up the idea that we need a more nuanced approach to ‘fabric first’ and should seriously consider heat pumps as an early intervention for home owners.
Nesta in their report (see Update E) state their recommended approach to insulation:
“… we propose a pragmatic approach to insulating homes in the UK alongside a heat pump rollout. In our view, the UK should insulate many more homes, but it is not cost effective to insulate every home to a high standard. Our proposed approach is that:
● we should aim to improve roughly 13 million homes to reach the equivalent of EPC C standard or equivalent by 2030, with an estimated investment requirement of around £60 billion
● properties with easy-to-treat cavity wall and loft insulation should be targeted as a priority over hard-to-treat properties
● greater emphasis should be placed on insulating properties in fuel poverty, and governments in the UK should aim to insulate fuel poor households and social housing to a high standard wherever possible
● there is a strong case for higher standards in private rental properties, and governments in the UK should regulate for minimum standards of insulation
● this insulation rollout should happen alongside a low-carbon heating rollout, and households should not be discouraged from buying a heat pump if their home is poorly insulated.”
In relation to the goal of decarbonising heat, perhaps the strongest statement on this comes from the Green Alliance report (see Update F):
“To achieve net zero, the UK’s housing stock must be decarbonised through improvements to energy efficiency and changes to heating systems. Simultaneously, heating costs need to come down, especially for the least affluent. This analysis challenges the ‘fabric first’ approach, showing that heat pumps are up to 44 per cent cheaper than an insulation-only deep retrofit approach, with a cost per tonne of carbon saved that ranges from 6 to 13 times cheaper. The upfront and running costs of heat pumps are predicted to fall over time, as cheap renewables decarbonise electricity. In the short term, we suggest measures to bring down energy bill costs, such as enabling users to avoid peak times for electricity prices. The government should adopt a heat pump led approach, retain support for the mass uptake of cheap loft and cavity wall insulation, and only pursue deep retrofit for appropriate fuel poor households.”
And even the PassivHaus Trust that has traditionally pushed for deep retrofit, is now presenting a more nuanced case (see Update G), no longer claiming that heat pumps cannot work in less well insulated buildings, but focusing on the areas where fabric issues are most clearly the focus, such as “the effects of poor building fabric and ventilation on health and comfort”.
And while I missed it when it came out, a piece ‘Fabric Fifth’ by Nigel Banks has created a lot of interest and discussion. If one takes into account the embedded carbon of substantial fabric measures and compares their carbon lowering potential with that of other measures, then fabric comes out fifth in priority, and an ASHP comes out first. There’s a great discussion on this on the BetaTalk – The Renewable Energy and Low Carbon Heating Podcast, with guests Nigel Banks, Technical Director at Octopus Energy and Dan Kelly, Managing Director at Dartmoor Energy, ‘Fabric Fifth – has SHDF Mismanagement Wasted Tax Payers Money?’
But still the memes that heat pumps require insulation to replace gas boilers, or that they will raise bills, or that they will blow up the electricity grid, are alive and well, as a conversation I was involved in reveals (see Update H). Some myths are so well embedded they can be hard to shift, but it is clear that mainstream opinion amongst expert bodies has undergone a major shift in the recent past, and that is to be welcomed.
(c) Richard W. Erskine, 2021 (updates added below, 14th January 2022)
This toolkit was commissioned by West Oxfordshire, Cotswold and Forest of Dean District Councils, funded by the LGA Housing Advisers Programme. It is licensed under Creative Commons Licence 4.0 International (CC BY-NC-SA 4.0). Licence Deed: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
[3] Consider a householder who spent £25,000 on a new kitchen 7 years ago and is advised that they need to insulate the back wall of the kitchen and the floor. This would require the kitchen to be removed, with expensive and disruptive work, even assuming the kitchen can be refitted. In practice, many of those who do attempt ‘deep’ retrofit do so only over an extended period rather than as a ‘big bang’ project.
[4] Lucien Cook of Savills, speaking on BBC Radio 4’s ‘You and Yours’ (8-11-21) and quoting research done by Savills, said that to get from EPC D to C a householder would need to spend £6,500 but would only reduce energy bills by £180 per year (which would take 36 years to break even).
[5] Lawrence Bowles of Savills, commenting on research on valuations of homes:
‘By analysing average values of homes transacted between 2018 and 2020 we found that homes with newer, cleaner, methods of energy demand a much higher price tag. Across England and Wales, buyers purchasing a home with a heat pump fitted are paying on average 68 per cent more for the offer of cleaner energy.’
[6] As the Renewable Heat Incentive (RHI) ceases at the end of March 2022, with a much higher grant for GSHPs than ASHPs, to be replaced (it has been signalled) by an upfront grant with an expected marginal uplift for GSHPs, the likelihood is that the great majority of heat pump installations will be air source (even for the minority of homeowners that have the land area for laying the slinkies required; bore holes are even more costly and risky for a single householder to attempt).
[7] For those who live in flats or dense dwellings in towns and cities an ASHP may be problematic because of lack of space for a cylinder, for example (although small systems are being developed). For such cases, and for office buildings, low carbon District Heating will often be the preferred alternative, as the Climate Change Committee recognises. But remember that District Heating refers to a heat distribution network, which still needs a heat source. The heat source may itself be a large scale heat pump, such as the water-source heat pump planned for Stroud District Council. Since towns and cities are typically close to rivers or the sea – carrying huge quantities of thermal energy – this is likely to be a popular approach, already being implemented, to decarbonise heating in many urban settings.
[8] “Mitigating heat demand peaks in buildings in a highly renewable European energy system”, Elisabeth Zeyen, Veit Hagenmeyer, Tom Brown, https://doi.org/10.48550/arXiv.2012.01831
Thanks also to Lisa Zeyen for private communications regarding these results, although I naturally suggest readers access the original work to get a full and complete understanding of the results. I hope I have not misrepresented them!
B. The latest Government (BEIS) research concludes: “Decarbonised electricity offers the promise of very low or even zero-carbon heating for homes – without necessarily carrying out extensive deep retrofit work. This project shows that Great Britain’s homes can convert to electric heating at a cost far lower than the accepted wisdom. This can be achieved with no threat to comfort, and greenhouse gas emissions will fall very dramatically as a result.”
and tellingly also concludes:
“The work focused on total costs of ownership over 15 years. For most house types and most electric heating systems, the cost-optimal packages of measures have very limited fabric improvements – most commonly just draught-sealing and top-up loft insulation. High-cost improvements, like internal or external wall insulation, hardly ever repay the capital costs over 15 years.”
Although some might argue with the 15 year time horizon, this is hardly a slam dunk for deep retrofit; quite the opposite.
C. On the interesting question of continuous versus intermittent heating when using a heat pump, Nicola Terry (in “Will heating your house constantly use more energy?”, 12th January 2022) clearly comes down in favour of continuous heating (the main reason being the relative inefficiency when a heat pump has to heat a house from a cold/colder state).
First posted 2021. Note [12] added 19th July 2023. Doesn’t change the argument and evidence for increased risk of extreme events as global / regional average temperature increases, but adds nuance to Hansen’s claim of fattening of the tail
Alok Sharma, President for COP 26, told a recent meeting:
“Every fraction of a degree makes a difference”
Reported by Shaun Spiers, Executive Director of Green Alliance UK, on Twitter (@ShaunSpiers1, 5th October 2021):
Alok Sharma talked powerfully of the real impact of climate change across the world. Richer countries have a moral duty to act, and it’s in their self interest.
Roger Harrabin, the BBC’s Energy & Environment Analyst since 2004, responded:
“This is such a hard concept to get across. @AlokSharma_RDG is right – every fraction of a degree really does matter. But how do you explain that to the public who may not even take off a layer of clothing for two degrees?”
I would direct Roger and anyone else seeking an answer to Katharine Hayhoe, who is the supreme master of communication on such questions. Her short video “What’s the Big Deal With a Few Degrees?” answered the question in a very accessible way.
1°C is already a big deal
As Katharine Hayhoe concludes, the Earth is already “running a temperature”, and on Twitter said:
“Using our body temperature is one simple and surprisingly relevant analogy. A fever of 2°C has significant, noticeable, and if sustained long-term, dangerous impacts on our health & well-being.”
The Earth System is very complex, and so is the human body. Part of this wonderful complexity is the ability to self-regulate. Under normal conditions this manifests itself as a stable system in dynamic equilibrium, albeit with minor variations and cycles (such as the seasons and menstrual cycles).
Since the end of the last ice age, the concentration of carbon dioxide in the atmosphere, and hence the global averaged temperature of Earth (not to be confused with weather), has remained remarkably stable, despite large flows of carbon associated with the carbon cycle (which tend to cancel each other out). Human civilisation and its agriculture have emerged over 10,000 years, benefiting from this largely stable climate.
Human emissions since the industrial revolution about 200 years ago have now increased carbon dioxide concentrations in the atmosphere by almost 50%, from 280 parts per million to 414 ppm [1]. This level is higher than at any time in the last 2 million years [2].
This is already causing a major disruption in the delicate balance that has existed in pre-industrial times, and we are already seeing the impacts in the increasing frequency and severity of extreme weather events. Each fraction of a degree is important in limiting the damage.
To explain what seems at first to be such a surprising consequence of such a small change, it is important to realise a few things:
the land on Earth is under 30% of the total surface area, and the ocean’s temperature is moderated by the heat capacity of a large volume of water, so land is proportionally more affected.
as was predicted in 1967 [3], there is proportionally more warming as you move towards the poles. This not only warms high latitude regions, but disrupts the jet streams that help drive weather patterns at lower latitudes.
the rises in temperature are not evenly spread around the world and in a cruel twist, many regions which are the poorest and least responsible for emissions will face the worst impacts.
a shift in the averaged temperature hides a massive increase in the chance of weather extremes.
at both extremes of the hydrological cycle (dry regions and wet regions) there is a tendency to magnify these extremes (dry regions get drier, wet regions wetter).
Adapted from Hansen & Sato (2016, 2020) (also see note [12])
Even with ‘just’ a 0.9°C increase in global mean surface temperature between the 1951-1980 average and the 2009-2019 average (relative to pre-industrial, this is a 1.2°C increase), Hansen and Sato have shown [4]:
hot summers on land in the Northern Hemisphere already occur twice as often and,
extremely hot summers (like that of 2003) already occur at least 200 times more often.
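A toy model makes the general point about averages hiding extremes (this is a simple shifted bell curve, not Hansen and Sato’s actual analysis of observed summer anomalies): shifting the mean of a normal distribution by just one standard deviation makes three-sigma ‘extremely hot’ events an order of magnitude more frequent.

```python
import math

def tail_prob(threshold_sigma, mean_shift_sigma=0.0):
    """P(anomaly > threshold) for a unit-variance normal distribution
    whose mean has shifted by mean_shift_sigma standard deviations."""
    z = threshold_sigma - mean_shift_sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# 'Extremely hot' summer = a +3 sigma event in the unshifted climate.
before = tail_prob(3.0)        # ~0.13% of summers
after = tail_prob(3.0, 1.0)    # mean has warmed by just 1 sigma
print(f"before: {before:.5f}  after: {after:.5f}  ratio: {after/before:.0f}x")
```

In this toy model a one-sigma mean shift makes three-sigma summers roughly 17 times more common; Hansen and Sato’s observed ratios are far larger because the real distribution has also widened.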
As Katharine Hayhoe explained, a 1°C rise in GMST is an enormous amount of energy.
The difference between a 1.5°C rise and a 2°C rise is highly significant. The IPCC’s 1.5°C Special Report [5] [6] showed a number of ways in which the impacts of 2°C are significantly magnified compared to 1.5°C:
“At 1.5 degrees Celsius warming, about 14 percent of Earth’s population will be exposed to severe heatwaves at least once every five years, while at 2 degrees warming that number jumps to 37 percent.”
Humanity has left it so late to act that avoiding 1.5°C is now well nigh impossible (according to the IPCC), but we can still decide and act to keep below 2°C, and must avoid the increasingly dangerous higher temperatures.
We are warming very fast
Climate change is happening in a mere flick of the fingers on the geological timescale.
Going back as far as the emergence of Homo Sapiens less than 300,000 years ago, the rate of increase in carbon dioxide levels has never been this fast, and the global mean surface temperature has never risen this fast.
It got me thinking about how to articulate why the current rate of change is truly unprecedented.
It is important to note that there is usually an initiating cause of a global warming episode in Earth’s deep past – such as the orbital changes that provide the drum beat for ice ages or, even earlier, extreme volcanism. But the main cause of the warming has, without exception since life has existed on Earth, been the release of greenhouse gases – principally carbon dioxide and methane released over thousands of years (short on geological timescales).
Our current situation is quite different for 3 reasons:
The initiating cause and the main cause are one and the same: human caused emissions of carbon dioxide from fossil fuels (3/4 of the problem) and emissions of greenhouse gases from agriculture (1/4 of the problem).
The period over which this is occurring is an instant in geological terms, just 200 years or so since the start of the industrial revolution,
whereas for the exit from the last ice age, it took 8,000 years [8]
another analogue to the current fast warming is the PETM (Paleocene–Eocene Thermal Maximum) with an initial burst of greenhouse gases and warming over a period of between 3,000 and 20,000 years [9]
Human choices are the ultimate cause, and we can stop it.
Currently we have warmed by about 1.2°C in less than 200 years.
The rate of increase in carbon dioxide concentrations is a useful indicator of risk, because each doubling of concentration gives rise to an increment of warming of about 3°C. Only by stopping emissions can we stop further warming.
The rise in CO2 concentrations averaged over the last 200 years is 0.67 parts-per-million per year (ppm/yr), which is unprecedented. The PETM’s higher rate of rise, at up to 0.42 ppm/yr, comes closest, but the exit from the last ice age was much slower, at a rate of 0.01 ppm/yr.
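For the numerically minded, these rates can be checked with simple arithmetic. A quick Python sketch, using approximate figures from the notes below (a roughly 134 ppm rise over 200 years; a PETM pulse of 900→2,000 ppm over 3,000-20,000 years; my assumed figure of a roughly 80 ppm rise over the 8,000-year exit from the last ice age, consistent with the 0.01 ppm/yr quoted):

```python
# Back-of-envelope check of the CO2 rise rates quoted in the text.
# Figures are approximate: pre-industrial ~280 ppm, ~414 ppm today;
# PETM pulse 900 -> 2,000 ppm over 3,000-20,000 years;
# ~80 ppm rise over the ~8,000-year exit from the last ice age.

modern = (414 - 280) / 200          # ppm/yr since the industrial revolution
petm_fast = (2000 - 900) / 3000     # ppm/yr, fastest plausible PETM pulse
petm_slow = (2000 - 900) / 20000    # ppm/yr, slowest plausible PETM pulse
ice_age_exit = 80 / 8000            # ppm/yr, deglaciation after the LGM

print(f"Modern rise:  {modern:.2f} ppm/yr")
print(f"PETM pulse:   {petm_slow:.2f}-{petm_fast:.2f} ppm/yr")
print(f"Ice-age exit: {ice_age_exit:.2f} ppm/yr")
```

The upper PETM figure lands near, though slightly below, the 0.42 ppm/yr quoted from IPCC Table 2.1, which presumably assumes a somewhat shorter pulse.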
If we continue on the high emissions path we are on, we could reach 4.4°C of warming (3.3°C – 5.7°C range, relative to pre-industrial) [10].
This results from a further increase in carbon dioxide concentrations at a rate of 9 ppm/yr [11], which would far exceed even the upper estimates of the rate of increase during the PETM.
I have summarised all this in the following table.
Rate of change of carbon dioxide concentration currently compared to prior events (Richard Erskine, 2021)
I wonder how anyone can imagine we are not in a climate emergency looking at this table.
(c) Richard W Erskine, 2021.
[correction – I transcribed the wrong numbers from the table to the narrative for duration of PETM pulse – now fixed]
The IPCC states “In 2019, atmospheric CO2 concentrations were higher than at any time in at least 2 million years” (in Ref. A, section A.2.1)
Manabe and Wetherald in 1967 published results from the first full model of the greenhouse effect on Earth, including radiative, convective, and other key processes (Manabe received a share of the 2021 Nobel Prize in Physics for his contributions)
Hansen and Sato use a 1951-80 baseline, which is 0.3°C above the accepted pre-industrial baseline. So the 0.9°C of warming to date is equivalent to 1.2°C relative to pre-industrial.
See IPCC Reference C, and a useful summary by NASA, Reference D.
During a much earlier period in geological history, about 56 million years ago, when the world was already warm and ice free, there was an event that led to extremely fast (in geological terms) warming. It is called the Paleocene–Eocene Thermal Maximum (PETM). This is described by the IPCC as follows (Ref. A):
“A geologically rapid, large-magnitude warming event at the start of the Eocene when a large pulse of carbon was released to the ocean-atmosphere system, decreasing ocean pH and oxygen content. Terrestrial plant and animal communities changed composition, and species distributions shifted poleward. Many deep-sea species went extinct and tropical coral reefs diminished.”
The Last Glacial Maximum was 23-19 thousand years ago (Ref. A). The current period of interglacial temperatures has lasted 10-11 thousand years. I take 19−11=8 thousand years as the period of exit from the last ice age.
For the PETM, numbers taken from the IPCC (Ref. A) are: 900→2,000 ppm CO2 (sect 2.2.3.1); 0.04-0.42 ppm CO2/yr (Table 2.1); and an estimate of 5°C (4°C-7°C range) globally averaged warming (sect 2.3.1.1.1), although a new study (Inglis, 2020) suggests greater warming.
The SSP5-8.5 high emissions scenario gives rise to a warming of 4.4°C [3.3°C – 5.7°C range] relative to pre-industrial by 2100 (see Table SPM.1 in Reference A).
Box TS.5 in Ref. A indicates SSP5-8.5 would have cumulative emissions of 11,000 GtCO2. But Figure SPM.7 has 38% of these emissions absorbed by ocean and land/biosphere, so 0.62 × 11,000 = 6,820 GtCO2 remains in the atmosphere (for a long time). Now MacKay noted “A useful way to calculate things is to remember that 127 part per million (ppm) of CO2 in the atmosphere equates to 1000 GtCO2”, so 6,820 GtCO2 equates to 6.82 × 127 = 866 ppm CO2. We need to add that to the pre-industrial level of 280 ppm, giving a total of 1,146 ppm CO2. Subtracting the roughly 413 ppm already in the atmosphere in 2020, and dividing by the 80 years from 2020 to 2100, gives about 9 ppm CO2 per year on average. Note that this scenario includes high GHG emissions, but also incorporates a reduced take-up of greenhouse gases by the oceans, land and biosphere (something that many who criticise this scenario as ‘pessimistic’ fail to grasp).
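The chain of arithmetic in this note can be sketched in a few lines of Python. The only figure not taken from the note is my assumption of roughly 413 ppm as the 2020 concentration:

```python
# Reproducing the note's arithmetic for the SSP5-8.5 high-emissions scenario.
# Assumption flagged: ~413 ppm is taken as the 2020 concentration.

cumulative_gtco2 = 11_000        # cumulative emissions to 2100 (Box TS.5)
airborne_fraction = 0.62         # 1 - 38% absorbed by ocean/land (Fig SPM.7)
ppm_per_1000_gtco2 = 127         # MacKay's rule of thumb

remaining = cumulative_gtco2 * airborne_fraction          # GtCO2 left in air
added_ppm = remaining / 1000 * ppm_per_1000_gtco2         # ~866 ppm
total_ppm_2100 = 280 + added_ppm                          # ~1,146 ppm
rate = (total_ppm_2100 - 413) / (2100 - 2020)             # ppm/yr, 2020-2100

print(f"Remaining in atmosphere: {remaining:.0f} GtCO2")
print(f"CO2 in 2100:             {total_ppm_2100:.0f} ppm")
print(f"Average rate of rise:    {rate:.0f} ppm/yr")
```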
The question is not whether Hansen is correct in seeing a shifting of the distribution (which is undeniable), with an increased risk of extreme weather. This may surprise some, but it is basic statistics and a long-expected result, as shown, for example, in the IPCC’s 4th Assessment Report (2007), buried deep in the Technical Summary (Box TS.5: Extreme Weather Events). The question is whether Hansen is right in seeing a change in the shape of the distribution (fatter tails, or in maths speak, a change in the standard deviation). This was challenged by one author (see Ref. E). However, to reiterate the key point: this doesn’t alter the fact that we should expect an increasing level of extreme weather events as the ‘mean’ temperature increases.
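The ‘basic statistics’ point can be illustrated with a toy calculation in Python: shift a normal distribution of summer temperatures by one standard deviation and see how much more often a previously rare ‘3-sigma’ summer occurs. This is only a sketch – real temperature distributions need not be normal, and Hansen’s headline numbers also reflect a widening of the distribution:

```python
import math

def tail_prob(threshold_sigma: float, mean_shift: float = 0.0) -> float:
    """P(X > threshold) for a unit-variance normal with the given mean shift."""
    z = threshold_sigma - mean_shift
    return 0.5 * math.erfc(z / math.sqrt(2))

threshold = 3.0                       # a '3-sigma' extreme summer
before = tail_prob(threshold)         # probability with no warming
after = tail_prob(threshold, 1.0)     # probability after a 1-sigma mean shift

print(f"Before: {before:.5f}, after: {after:.5f}, ratio: {after/before:.0f}x")
```

Even with no change of shape, a one-sigma shift of the mean makes a 3-sigma event roughly 17 times more likely; fatter tails would amplify this further.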
References
A. IPCC, 2021: Summary for Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S. L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M. I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T. K. Maycock, T. Waterfield, O. Yelekçi, R. Yu and B. Zhou (eds.)]. Cambridge University Press. In Press.
B. James Hansen and Makiko Sato (2016) Environ. Res. Lett. 11 034009
This is in the context of the latest IPCC Report. Commenting on it, Dr Emily Shuckburgh noted in Carbon Brief:
“Ever more certain, ever more detailed. That’s the brief summary I would give the AR6 WG1 summary for policymakers (SPM). Once again it provides a comprehensive chronicle of extreme weather induced by climate change and the risk of catastrophic future impacts. It estimates the remaining carbon budget from 2020 for a reasonable chance (67%) of limiting warming to 1.5C is 400bn tonnes of CO2 (GtCO2). With global emissions in 2020 of 40 GtCO2, this re-emphasises that this decade is critical”.
There is no dispute that hydrogen will play an important role in decarbonising some areas of the economy, especially hard-to-decarbonise ones like steel and fertiliser production.
But the report is a little disappointing in sitting on the fence on a number of issues, notably transport and heating, where there is doubt as to the role hydrogen will play. The report says (p. 62):
“Before hydrogen for heating can be considered as a potential option to decarbonise heat in buildings, we need to generate further evidence on the costs, benefits, safety, feasibility, air quality impacts and consumer experience of using low carbon hydrogen for heating relative to other more established heat decarbonisation technologies.”
And (p. 65):
“We recognise that the longer-term role for hydrogen in transport decarbonisation is not yet clear, but it is likely to be most effective in the areas where energy density requirements or duty cycles and refuelling times make it the most suitable low carbon energy source.”
But despite these sensible cautionary words, the report goes on to try and give the impression that domestic heat and transport are still in play, given more research. But are they?
In the area of cars, many car manufacturers have halted or are cutting back R&D on hydrogen fuel cell cars. One of the issues is the relative inefficiency compared to Electric Vehicles (EVs), but building out the infrastructure is another concern.
“You won’t see any hydrogen usage in cars,”
said Volkswagen chief executive Herbert Diess, speaking to the Financial Times, adding that the idea of a big market for hydrogen fuel cell vehicles is …
“very optimistic … not even in 10 years, because the physics behind it are so unreasonable,”
For heating, if we were to use ‘Green Hydrogen’ (created via electrolysis using renewables) to heat our homes, it would require nearly 6 times as many wind turbines compared to directly using the electricity to power heat pumps (which harvest ambient energy in the environment, and so are much more efficient) [1]
The Committee on Climate Change rather highlighted this in their 6th Carbon Budget where they state (for their ‘balanced pathway’):
“By 2030 37% of public and commercial heat demand is met by low-carbon sources. Of this low-carbon heat demand 65% is met by heat pumps, 32% district heating and 3% biomass. By 2050 all heat demand is met by low-carbon sources of which 52% is heat pumps, 42% is district heat, 5% is hydrogen boilers and around 1% is new direct electric heating.”
Or as Professor Cebon said in the Financial Times:
“Hydrogen should be used only as a last resort for sectors that have no option to electrify … Directing public funds towards hydrogen in sectors that have more effective alternative solutions is a mistake”.
In other news, Octopus Energy will soon be making a major announcement on heat pumps (they have been teasing the market on Twitter), and are expected to offer a much reduced cost for components and services, to provide a mass market offer. If the Government comes through with an up front grant of several thousand pounds for installation of heat pumps (air source), to replace the Renewable Heat Incentive (which expires in March 2022), this could be a game changer (in terms of mass adoption).
It has been a turbulent week for hydrogen.
Chris Jackson, chair of the UK Hydrogen & Fuel Cell Association, has stepped down owing to the Government’s continued support for ‘Blue Hydrogen’ (derived from natural gas, and which involves burying a by-product, carbon dioxide, using a method called ‘carbon capture and storage’ that has not yet been proven at scale, but is being pushed by fossil fuel companies like Shell). Chris Jackson said:
“I would be betraying future generations by remaining silent on the fact that blue hydrogen is at best an expensive distraction, and at worst a lock-in for continued fossil fuel use,”
It feels like the debate over hydrogen will continue, just as it has been for decades, with fossil fuel interests continuing to try to shape the debate in their favour, with arguably far too much influence in policy circles.
In the meantime we need to decarbonise fast, and we don’t have time to waste – just 10 years to put a serious dent in emissions as the IPCC has indicated. Do we really have the time to keep kicking the hydrogen can down the road?
They say the market will decide.
The good news is that for both cars and heating we have electrification solutions (EVs and heat pumps) available, and they are growing in popularity.
Figure from the above report. For ‘Green Hydrogen’ we would need a factor of 270%/46% ≈ 5.9 times as much renewables generation to match the heat provided by heat pumps – that is, nearly 6 times as many off-shore wind turbines operating in winter, when we need the heat, for example.
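The arithmetic behind that factor, as I read the report’s figure, in a couple of lines of Python (the 270% and 46% are the electricity requirements per unit of delivered heat for the two routes):

```python
# Green hydrogen route: delivering 1 unit of heat requires 270% of that
# in renewable electricity (electrolysis and boiler losses).
# Heat pump route: requires only 46%, because the pump harvests the
# remaining energy from the environment.

hydrogen_route = 2.70   # units of renewable electricity per unit of heat
heat_pump_route = 0.46  # units of renewable electricity per unit of heat

factor = hydrogen_route / heat_pump_route
print(f"Extra generation needed for the hydrogen route: {factor:.1f}x")
```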
There is a lot of debate about the role of individual actions in relation to climate change. Allegra Stratton was rightly mocked for suggesting people should refrain from rinsing plates before they are put in the dishwasher. Michael Mann makes a much more serious point, saying that fossil fuel interests – having moved on from climate science denial – are,
“trying to convince people that climate change is not the result of their corporate policies but of our own individual actions”(Scientific American, January 12, 2021)
And of course, Michael Mann does not say that behaviour change is unimportant, but it should not be used to distract us from the much bigger actions that large organisations (especially fossil fuel ones), supply chains and Governments must take.
Others stress the importance of systems change coupled with behaviour change. Lloyd Alter writes that behaviour change is important:
“… because we have to stop buying what the oil and car and plastics and beef companies are selling; If we don’t consume, they can’t produce. It makes a difference; I vote every four years, but I eat three times a day.”(Treehugger, May 11, 2021)
And we have to recognise there are limitations to personal actions when not supported by the system. If I want to ditch the car and take an EV bus to go to work 10 miles away, I cannot do that if there is no EV bus (and maybe no bus at all, at the times I need them).
So, at whatever scale we look at it, and through whatever ‘lens’ we choose, we see the connectedness of actions by individuals, businesses, public institutions, local government, national government and multi-nationals.
I want to show, at the scale of a town, how we might think about the power that resides in the hands of individuals; and they can possess multiple personas. Yes, they are consumers, but they are so much more: voters, employees, church-goers, parents, children, and neighbours.
If we break the silence and talk about climate change – not the science but what it can mean in terms of progressive action – it’s amazing how easy it is to start a conversation.
We need to think about the ‘agency’ that individuals possess, within the network of actors in a local community. The influence they have is much more than the narrow framing of consumerism. We see a richer systems view of influence and reinforcing feedbacks, with multiple actors involved, and individuals taking on a variety of personas. Here is a little illustrative doodle I created:
Each of these actors can be self-reinforcing too. The householder can influence a neighbour, just by chatting over the fence (I left out these little looped arrows, to avoid making the schematic too busy).
A climate action group (not shown) can – if it is being effective – engage with all the actors in this schematic by various methods and channels, by networking, engaging, and promoting interactions between them.
For example, holding a fair on house retrofit, and inviting relevant businesses, community groups, councillors and the local member of Parliament. If you don’t ask, you don’t get, my mother used to say!
This does not mean that personal action is unimportant – far from it – but when it can be seen as part of a collective goal to promote changes throughout the system, it is far more powerful. While personal actions today might only impact a fraction of the UK’s carbon footprint directly, indirectly it can have a much greater impact. System change (access to low carbon transport, help with decarbonising heating, etc.) together with personal choices is of course where we need to get to for a high impact on emissions.
The individual will also begin to realise the agency they have to promote not just change, but system change.
Chris Mason on BBC News at Ten (tonight, on 9th August 2021, the day that the IPCC published the science part of their 6th Assessment Report) stated that (in relation to heat pumps):
“you need lots of insulation to make them work”.
This is completely false: you can heat any building with a heat pump that you can with a gas boiler. Why do BBC reporters keep repeating these myths?
In fact, and in the context of the IPCC report, if one’s main concern was the household carbon footprint, a heat pump would be the first thing anyone would do, as I showed here (given the diminishing carbon intensity of the UK’s electricity grid).
The counter argument commonly used is that the running costs of a heat pump (given today’s unit price for electricity) are unaffordable unless there are huge levels of insulation. Of course, if we had a fair fight between gas and renewables, electricity prices would come down relative to gas.
But simple maths shows that even at current gas and electricity prices, if one were to replace an old 70% efficient gas boiler with a modern air-source heat pump (ASHP), then the running costs would not be any greater, with only modest insulation measures, as shown here.
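The ‘simple maths’ looks something like the following Python sketch. The unit prices are my assumptions (roughly 2021 UK tariffs of 4p/kWh for gas and 19p/kWh for electricity), not figures from the linked piece:

```python
# Comparing the cost of delivering a kWh of heat: an old gas boiler at
# 70% efficiency versus an ASHP with a seasonal COP of about 3.
# Unit prices below are assumptions (roughly 2021 UK tariffs).

GAS_PRICE = 0.04    # GBP per kWh of gas (assumed)
ELEC_PRICE = 0.19   # GBP per kWh of electricity (assumed)

def cost_per_kwh_heat(fuel_price: float, efficiency: float) -> float:
    """Cost of delivering 1 kWh of heat at a given efficiency (or COP)."""
    return fuel_price / efficiency

old_boiler = cost_per_kwh_heat(GAS_PRICE, 0.70)   # 70%-efficient gas boiler
ashp = cost_per_kwh_heat(ELEC_PRICE, 3.0)         # ASHP, seasonal COP ~3

print(f"Old gas boiler: {old_boiler * 100:.1f} p/kWh of heat")
print(f"ASHP:           {ashp * 100:.1f} p/kWh of heat")
```

On these assumed prices the ASHP is already close to parity per kWh of heat, so modest insulation measures (which reduce the total heat demand) are enough to close the gap.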
A heat pump can heat any home that a gas boiler can. But it makes no sense to try to heat a barn – with a heat pump or a gas boiler! Insulation and draught reduction make sense – and help improve the comfort of a building – and so ‘fabric first’ is an important message. This leaves open the question: how much insulation a householder considers before they invest in a heat pump?
Depending on how an individual wants to spend their retrofit budget, there will be a cross-over point where acquiring an ASHP will trump any further increment in fabric spending. Adding fabric measures will reduce the size of the heat pump required, but only to a point, as there are some base costs for the system, and we need hot water whatever the state of the fabric.
There are small and large houses, well insulated ones and leaky ones. How do we make sense of the numbers?
A useful metric is the heat energy required to heat one square metre of a home per year (measured in kilowatt-hours per square metre per annum, or kWh/m².a). The average UK house – because our historic housing stock is quite leaky – requires about 130 kWh/m².a. The Association of Environmentally Conscious Builders (AECB) aims, when insulating homes, to reduce this measure to 50 kWh/m².a, although it might accept as much as 100 kWh/m².a in the case of (say) a Listed Building. A new-build Passivhaus aims to achieve just 15 kWh/m².a.
It is easy to work out what figure currently applies to your house.
Look at your annual energy bills. If heated by gas, look at the kWh total for the year. Make a guesstimate of how much of this is space heating – say 80% in a typical case. Now divide this figure by the floor area of the home. The question then is to think – with the help of a retrofit assessor – about how far you can reduce this number.
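As a worked example, here is the calculation in Python, with illustrative numbers (a gas-heated home using 15,000 kWh a year with a 92 m² floor area):

```python
# Worked example of the kWh/m2.a metric described above.
# The input figures are illustrative, not from any particular house.

annual_gas_kwh = 15_000        # from the annual energy bill (illustrative)
space_heating_share = 0.80     # guesstimate: 80% of gas is space heating
floor_area_m2 = 92             # total floor area (illustrative)

metric = annual_gas_kwh * space_heating_share / floor_area_m2
print(f"Heat demand: {metric:.0f} kWh/m2.a")
```

This illustrative house comes out at about 130 kWh/m².a – right on the UK average quoted above.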
If you currently have poor loft insulation, then fixing this is relatively cheap and has great pay back. Similarly for cavity wall insulation, and for reducing draughts from doors and windows. You don’t need to rip out your sash windows and replace with double glazing; window brushes, and secondary glazing can make a great contribution with a modest investment. Pragmatism is often required, when assessing where you can get the ‘biggest bang for your buck’.
The other key idea is to think in terms not of ripping out things, but taking opportunities when they arise. So, if a new kitchen is being fitted, then why not use the opportunity to insulate that cold back wall, and maybe even consider underfloor insulation and heating? This is why retrofit can often best be seen as a journey to be followed over a number of years.
What follows is an illustrative schematic showing the balance between the money spent on fabric measures (solid line) and what would need to be spent on an ASHP system at a given level of heat demand. As the heat demand reduces (as a result of fabric spend), so does the cost of the ASHP system (including the heat pump and radiators). The schematic envisages a 4-bedroom semi-detached house with solid walls and poor insulation that is hard to treat, and starts (at the left hand side of schematic) with a terrible figure of 200 kWh/m².a [the numbers are illustrative only – each house is different]:
We then start to move from the left towards the right. Spending even modest money on fabric will mean that the size (and cost) of the heat pump system you might buy progressively reduces (e.g. there is a big drop if one moves from a cascade heat pump system to a single heat pump).
At some point, the marginal cost of incremental insulation will rise above the cost of an ASHP (when the solid line cross the dashed line). For example, replacing all the windows with double or triple glazing is a non-trivial expenditure.
And of course, to try to turn a leaky Victorian house into a Passivhaus makes little sense, so there are natural constraints in how far one goes, depending very much on house and site specific factors.
Some people may decide to adopt the ASHP early for a number of reasons: they need to replace an old gas boiler and care about the climate future; they live in a house in a conservation area where measures like external wall insulation will not be accepted; or they live on a terrace, where external wall insulation for one house, without the whole terrace joining in, would meet a lot of resistance (including from the planning department). For whatever reason they make their choice, I call these ASHP ‘early adopters’.
On the other hand, they may live in a house that can absorb a lot of retrofit insulation measures – perhaps as new owners wanting to start with a ‘blankish’ canvas – and with the help of a retrofit assessor/ expert, strive to get to the AECB 50 kWh/m².a figure. Let’s suppose they don’t have constraints such as conservation issues to deal with. We might call them ASHP ‘late adopters’.
In practice, householders will be somewhere on a spectrum between these two examples – in a decision spread zone. A whole set of factors may come into play in their decision making: a wish to improve comfort, to reduce carbon emissions, or concern over future gas prices, to name just a few.
It therefore makes no sense to say “you need lots of insulation to make them work”.
No, you need “a sufficiency of insulation” to make the running costs “fit your expectations”, and everyone may arrive at different expectations.
But don’t try to heat a barn, with a gas boiler or a heat pump.
Soil carbon is important, but it is staggering that both Minette Batters and Prince Charles have made unchallenged statements on @BBCr4today (14th July 2021): that some (livestock) farms are already carbon neutral, and that soils could take up 70% of the world’s emissions.
This is all in an effort to promote sustainable livestock farming. Like Graham Harvey in his book ‘Grass-Fed Nation’ they have been seduced by the claims of Allan Savory; but these have been thoroughly debunked by the Food Climate Research Network (FCRN)
The fallacy rests on a confusion between fast and slow carbon cycles, between carbon stocks and flows, which with a little bit of naive maths creates a myth that now permeates the NFU’s PR on the future of farming.
We need better soil health to reduce net carbon release in a warming world, but it is no good using this as a ploy to retain high levels of meat consumption; we need a massive reversal of the consumption trend.
Godfray et al [1] show the path we are on:
Good soil health will help create sustainable arable farming, but not as a silver bullet to cancel our fossil fuel emissions. Massive reductions in meat production mirror the same reversal that is needed in all sectors of our economy, and it is a fantasy to suggest otherwise.
Efficient land use is also an issue. Today, over 50% of the UK’s land is devoted to livestock (and this does not include the foodstock we import to supplement their diet), and we import over 40% of our food. To be more self-reliant, we have to make a radical shift in diet and land use, as the Centre for Alternative Technology clearly demonstrates in their report Zero Carbon Britain: Rising to the Climate Emergency, from which the following Figure is taken:
Livestock reduce the efficiency of calories produced per hectare [2], which is a major issue when it comes to feeding the world.
In the context of the climate emergency, the other issue is that livestock makes a high and increasing contribution to our carbon emissions [1]:
Trying to hide these emissions amongst some warm aspirational words about regenerative livestock farming in idyllic English countryside, is pure delusion (as well as being heavily funded PR), with no scientific basis.
It is such a shame that the NFU (National Farmers Union) are promulgating junk science to advance their meat-first agenda, and it seems that Prince Charles is also on board.
. . . o o O o o . . .
Science references:
[1] Godfray et al., ‘Meat consumption, health, and the environment’, Science 361, 243 (2018)
[2] Cassidy et al., ‘Redefining agricultural yields: from tonnes to people nourished per hectare’, Environ. Res. Lett. 8 (2013) 034015
Having worked with acrylics, watercolours and pastels for some years, I decided, finally to take the plunge and start to use oils. The scene is the view across to the River Severn and Wales from Selsey Common.
I’m still pinching myself that I managed to pull this off.
The foreground gave me the heebie-jeebies.
The grasses were quite straw-like with subdued green, and there were lots of undulations.
I remembered to use a little red to ‘knock back’ the greens, and then added combo of yellow ochre and white to progressively lighten it; and some raw sienna in the other direction (to darken), maybe a smidgen of red too in places.
Also, some slightly larger brush strokes in the foreground to suggest more resolved grass.
I remembered to ‘think tonally’ to observe and think about light and dark – there was a huge range to cope with here. I used some Prussian Blue to help with the deep shadows.
The distant fields were just a kind of noodling around, trying to get a sense of distance – cooler, more muted and less defined the further away.
Wales is just a light purple sliver beyond the Severn, which itself is just a hint of reflected light.
The two fields on the right were compositionally crucial to me as they helped establish a near-ground scale beyond the foreground.
Some flecks of white on the mid distance right for buildings – never forgetting the power of gestalt to allow the viewer to see what their mind reconstructs based on the tiniest of visual clues.
The sky was a struggle – I miss the dynamism of working with acrylics or watercolour, so need to practice my skies – but the good thing is that the dark clouds suggest a darkening of the distant land below, and the few yellowy green bright streaks suggest sun breaking through on some fields. A little green in the sky is another fully transferable trick of the trade.
The foreground is in full sun with slopes facing the sun almost white.
The pros of oils are also the cons.
You can keep fiddling for days if you want (although I finished this over 5 hours, on and off); so blending on canvas, and wiping away sometimes, is all possible. Acrylics allow for multiple layers and drying in between, with spraying and all sorts of jiggery-pokery; but the palette needs constant attention to stop it drying out. I think they are both wonderful – it’s like trying to choose a favourite dish – why choose?
I think I’m going to fall in love with oils … just like I did before with acrylics, watercolours and pastels!
Retrofitting our often old housing stock to reduce heat loss is crucial, but we also need to stop using natural gas as the source of heating if we are to have any chance of meeting our goal of halting global heating.
It got me thinking about this question – if someone asked about retrofitting their house, and was motivated by the desire to reduce the carbon footprint of heating their home:
what is the first thing they should do?
It may seem a somewhat artificial question, because in any real world situation, several measures are likely to be advisable, but bear with me.
Many retrofit professionals repeat the mantra “fabric first”, which means, focusing on insulating the building, dealing with leaks, and so forth. This sounds like good advice, given that the cost of some measures, like insulating a loft, are relatively cheap and deliver big savings in carbon emissions.
However, in many cases this is expressed in stronger terms, like “deep retrofit”, which can mean doing everything possible to reduce the heat loss of a building. This could include external wall insulation to homes with solid walls (which cannot benefit from cavity wall insulation), new windows, and dealing with associated issues related to moisture, for example. This school of thought suggests that we should only consider using a heat pump after deep retrofit is complete [Note 1].
The mantra “fabric first” then effectively turns into fabric only, because it is not difficult to exhaust a householder’s retrofit budget with changes to the fabric of a building.
So why should we be considering heat pumps alongside changes to the fabric of a building?
A heat pump harvests the ambient energy outside a house – either from the air, the ground or water. This ambient energy comes from the sun (when the ground is used as a source it is never deep enough to harvest energy from the core of the Earth, even with a bore hole, and is simply extracting energy from the ground that has been warmed by the sun and stored there).
For every unit of electrical energy put in to drive the heat pump, it is able to deliver at least 3 units of heat energy into the home. A nice simple explanation of this process is provided here.
As David MacKay wrote in Sustainable Energy – Without the Hot Air:
“Let me spell this out. Heat pumps are superior in efficiency to condensing boilers, even if the heat pumps are powered by electricity from a power station burning natural gas. … It’s not necessary to dig big holes in the garden and install underfloor heating to get the benefits of heat pumps”
He was calling for the adoption of Air Source Heat Pumps (ASHPs). He didn’t use the words ‘silver bullet’ but it is clear he was a big fan and frustrated at the low level of take-up. As he wrote
“heat pumps are already widely used in continental Europe, but strangely rare in Britain”.
I thought about how to present some information to help explore the question I have posed, and compare fabric related measures to an ASHP. I took data from the Energy Saving Trust website for a typical semi-detached house and plotted the capital cost of different interventions against the annual carbon saving that would result.
The capital costs are indicative and include the parts and labour required.
The only change I made to the Energy Saving Trust data was I reduced the savings for an Air Source Heat Pump (ASHP) from about 4.5 tonnes of CO2 to about 3, because that better reflects the carbon intensity of the national grid in the UK in 2020.
I also indicate the level of disruption involved with colour coding, because this can be a factor in a householder’s decision making. The graph is based on the data tabulated in the Table at the end of this essay [Note 4] (click on image to see higher resolution):
This householder is aiming for the highest carbon saving, and there is only one answer glaring out at you from this graph: the ASHP.
If I change the question slightly we get a more nuanced answer:
what is the first thing they should do, based on carbon reduction ‘bangs for your buck’?
The ratio of the annual carbon saving to the capital cost is a measure of 'bangs for your buck'. On this criterion, it makes sense to pick the low-hanging fruit of loft insulation and fixing draughts first, but then once again the ASHP scores very well (this allows for the fact that typically more than half the radiators will need to be upgraded [Note 2], which is included in the cost estimate).
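The ranking logic can be sketched in a few lines. The figures below are illustrative placeholders chosen only to mirror the shape of the argument, not the Energy Saving Trust data used for the graph:

```python
# 'Bangs for your buck' = annual carbon saving divided by capital cost.
# Costs and savings here are made-up round numbers for illustration,
# NOT the Energy Saving Trust figures tabulated in Note 4.
measures = {
    "loft insulation":          {"cost_gbp": 300,    "saving_kg_per_yr": 500},
    "draughtproofing":          {"cost_gbp": 200,    "saving_kg_per_yr": 150},
    "external wall insulation": {"cost_gbp": 10_000, "saving_kg_per_yr": 1_000},
    "ASHP (incl. radiators)":   {"cost_gbp": 11_000, "saving_kg_per_yr": 3_000},
}

# Rank measures by kgCO2 saved per year per pound spent, best first.
for name, m in sorted(measures.items(),
                      key=lambda kv: kv[1]["saving_kg_per_yr"] / kv[1]["cost_gbp"],
                      reverse=True):
    ratio = m["saving_kg_per_yr"] / m["cost_gbp"]
    print(f"{name}: {ratio:.2f} kgCO2/yr per £")
```

Even with placeholder numbers, the pattern of the essay emerges: the cheap fabric measures top the ranking, and the ASHP comfortably beats external wall insulation.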
External wall insulation, by contrast, would typically be similar in cost to an ASHP but deliver only a third of the annual carbon saving, and it is a much more disruptive intervention; in many cases it is not practical to implement at all.
A counter argument would be that if we did manage (for a solid walled home) to do all the fabric related measures, we would achieve about 2,400 kgCO2/yr carbon saving (over half the current emissions of 4,540 kgCO2/yr) for an outlay of about £18,500; the ASHP could then be added and would 'only' need to deal with the remaining half, which could mean installing a lower capacity heat pump, reducing its cost somewhat.
Each situation will be different and depend on what interventions are possible. If a building is listed and external insulation is prohibited (and the alternative of internal insulation dismissed), then the fabric related measures would total 1,500 kgCO2/yr, leaving two thirds of emissions to be dealt with.
In either case, in order to maximise the emissions reduction one would require a heat pump.
My argument is not that you must fit a heat pump first, but that you should consider all the available options and think about a plan (possibly spanning a number of years).
I went on to plot another graph where I included the following additional features:
To show the ‘best’ reasonable case fabric interventions (which would raise the cost of say, wall insulation, but at the same time increase the carbon saving).
To include a Ground Source Heat Pump (GSHP) option, either with horizontally laid slinkies, or using vertical bore hole(s).
To show what happens as the UK electricity grid moves from 2020 carbon intensity levels to being 100% green.
The following illustrative graph was the result:
There are a number of interesting observations based on this graphic.
Firstly, as the arrows show, we can increase the carbon savings for each fabric related measure, but these improvements come at extra cost. There will be some trade off for each householder situation as to how far they can go.
Secondly, once you have a heat pump, the annual carbon saving will increase every year as the grid gets greener (as it has been in the UK), without the householder having to lift a finger.
Thirdly, while a GSHP may have better performance than an ASHP, the advantage is likely to be quite limited [Note 3]. As Paul Kenny said in his talk to Carbon Coop, 'Heat Pumps – Learning and experiences from Ireland', if you have extra money spare, why not do the easy thing and spend it on further upgrades to radiators, when you can achieve a target COP without the major disruption and risks associated with a GSHP project (assuming that is even an option)?
Fourthly, a borehole GSHP is even more costly, and more risky. There are significant risks associated with, for example, drilling into water tables, but the real killer is the cost. For a single householder it makes little sense. Of course, one can imagine scenarios where several houses could share the costs, but these are likely to be exceptional projects; not the basis for mass roll-out of heat pumps.
Some will argue that an ASHP requires supplementary heating during very cold spells in winter. However, in the same talk referred to above, Paul Kenny used data from a significant number of retrofits in Ireland that had ASHPs, designed around a winter parameter of -3°C. When the 'Beast from the East' came and these houses experienced -6°C, they were all fine and did not require supplementary heating. He wrote a piece on LinkedIn about this experience, which flies in the face of much of the 'received wisdom' in the retrofit community.
And in the UK, without the much larger grant that GSHPs enjoy, as compared to ASHPs, it is doubtful there would be anything other than a marginal role for GSHPs. It will be no surprise if ASHPs dominate the heat pump market in coming years and for some installers, this is already the case.
So, are Air Source Heat Pumps a silver bullet to decarbonising the heating of homes?
One has to say in many ways they are!
But of course, in reality, it makes sense to consider them in the mix of other retrofit measures, and to carry out some improvements to the fabric of a building as part of a ‘whole house’ plan.
We just shouldn’t let the ‘deep retrofit’ mantra put people off considering an ASHP; maybe even as one of the first things you do.
(c) Richard W. Erskine, 2021
Notes
1. While I am a huge fan of PassivHaus and similar standards, we must remember that these standards cannot easily be applied to existing stock, and would be hugely expensive. 80% of the homes in 2050 already exist, so 80% of the problem of decarbonising heat in homes is already there; and BRE estimates there are 9 million ‘hard to treat’ homes in the UK.
2. Upgrading radiators is usually required to increase the effective surface area. This is needed because heat pumps operate at a lower flow temperature, and the heat delivered is a function of the temperature of the radiator and its surface area. The surface area can be increased by using 2 or more panels with fins sandwiched between them. This can also help reduce height and width of the radiator that would otherwise be necessary, while making the radiator somewhat deeper / fatter.
3. A GSHP has a better Coefficient of Performance (COP) in winter, while an ASHP can do better in spring and autumn. The overall Seasonal COP for a GSHP will probably be higher, but is unlikely to be more than 15% higher; we need real world studies to get a good figure here. But the cost of a GSHP using slinkies in 1.2m trenches (for those unusual cases where householders have sufficient land to achieve the area necessary) is something like double that of an ASHP.
4. Table of Typical three bedroom, solid walled, semi-detached house
Governments of all shades, and energy utilities, tend to believe that large, centralised solutions are the most cost-effective because of the economies of scale. There is a belief that local solutions will increase costs.
Ground-breaking work by an energy modelling company in the USA (Vibrant Clean Energy (VCE)) has turned this argument on its head, and this could, or should, have profound implications for any strategy to decarbonise the power grid in any country, including the UK, with renewables playing a dominant role in the future.
The present study finds that by including the co-optimization of the distribution system, the contiguous United States could spend $473 billion less on cleaning the electricity system by 95% by 2050 and add over 8 million new jobs. … The findings suggest that local solar and storage can amplify utility-scale wind and solar as well as provide economic stimulus to all regions across the contiguous US.
The study finds that wind, solar, storage and transmission can be complements to each other to help reduce the cost to decarbonize the electricity system. Transmission provides spatial diversity, storage provides temporal diversity, and the wind and solar provide the low-cost, emission-free generation.
What is true for the USA can also be true for the UK.
Now, in the UK, various groups have already published reports based on modelling of the grid to show that net zero is achievable. The Centre for Alternative Technology (CAT) produced a report ‘Zero Carbon Britain – rising to the climate emergency’ that showed how this could be achieved. They used granular weather data to help model supply and demand at national scale. Energy storage was included at utility scale (using excess energy on windy/ sunny days to produce synthetic gas that could be used to generate electricity during periods when both wind and solar were too low to meet total demand).
VCE have gone much further in the sophistication and granularity of the modelling:
Firstly, they have modelled the dynamical behaviour of the grid at all scales – at 5 minute intervals on a 3km square spatial grid over a minimum of 3 calendar years (and, for planning reserves, up to 175 years hourly at a 30km grid). There was always a suspicion with other models that even if national supply and demand appear to match up at a point in time, the grid will experience issues at particular points, especially local pressure points. VCE have addressed these weaknesses.
Secondly, the economics of how the roll-out of the capacity is achieved is key to policy. The modelling includes economic aspects to show the marginal cost of each new tranche of generating capacity; and so modelling the evolution of the network, not just an assumed end point. VCE have modelled the period between ‘now’ and future end dates to see what impact different scenarios have on the marginal and net costs.
The astonishing result that VCE have found is that local renewables with local storage – even at only 10% of the total generating capacity – make a disproportionate impact on the speed and cost of further roll out of associated utility scale renewables. This is because it creates flexibility in the grid and relieves pressure points.
VCE note that this was an emergent behaviour of the system, which the modelling revealed, and certainly not obvious to energy specialists, because it only emerges when the model reaches a sufficient level of sophistication.
The bottom line is that we should see local renewables (including community energy schemes) not as marginal additional capacity in the transition to a greening of the grid, but as a key ingredient to both speed up – and lower the cost of – the transition. We should see small and big as beautiful, working collaboratively, to accelerate the greening of the grid.
This may seem quite a technical point for those who are not students of the energy system, but it is truly remarkable and transformative, and from a policy perspective, it highlights the need for Governments to continue to promote and invest in large, utility scale renewables, but also to assist in the roll out of local renewables and associated storage.
Emergent behavior is characterized by properties and behavior that is not dependent on individual components, but rather the complex interactions and relationships between those individual components. Therefore, it cannot be fully predicted by simply observing or evaluating the individual components in isolation.
Remembering Hiroshima – the dead, the survivors and the blight that nuclear weapons have brought on this world – on this day, 75 years after the first nuclear weapon was used against a civilian population. A weapon, remember, that was developed because of a fear that Nazi Germany would develop it.
Joseph Rotblat, a scientist who I admire so much, left the project to develop a nuclear weapon when it became clear to him that Germany would not be able to develop it. Few if any others on the project possessed his moral vision and authority.
In 1981, Professor Mike Pentz, who led the formation of the OU (Open University) science department, founded Scientists Against Nuclear Arms (SANA). I was at Bristol University doing a postdoc at the time.
I went to hear him speak. What an amazing and inspiring speaker he was; I signed up on the spot. The cruise missile crisis was in full swing.
Within months it seemed I was on the National Coordinating Committee of SANA.
It kind of killed my passion for science, something I’d been in love with since a young boy. I had a lab when I was just 12.
I left my research in computational quantum chemistry.
I went into the world of industry and in my spare time spent a lot of the 80s working in the background helping to develop tools for the anti-nuclear movement.
This included a program to assess the impact of nuclear attacks, which I managed to squeeze onto an Amstrad PCW 8256 – with no hard disc and a memory of just 256K! Or 0.25MB, or 0.00025GB.
This program was given free to local authorities.
During this time I had also married the beautiful nurse who I met in Bristol, and we brought up two girls. So Bristol always has a special resonance for me, on so many levels.
Eventually I was pretty burnt out and stepped back from nuclear activism – after all, we got rid of cruise. Job done, right?
If only.
SANA evolved into SGR, Scientists for Global Responsibility, a great organisation that is still going strong and doing good work.
Nevertheless, it might explain why it took a while for me to realise there was another great elephant in the room – global warming.
This time, it was Naomi Klein, and specifically her book This Changes Everything which was the kick up *** I needed. I have a signed copy from when she spoke at the Cheltenham Book Festival.
Now I spend a lot of my time in retirement on climate change matters, but focusing my efforts on local community action.
I never lost my love for science, even if things turned out differently to my boyhood dreams.
But damn you, nuclear weapons, and damn you fossil fuels, and you, the same old, same old vested-interest apologists.
Dr Fredi Otto is the Acting Director of the Environmental Change Institute and an Associate Professor in the Global Climate Science Programme, where she leads several projects on the impacts of man-made climate change on natural and social systems, with a particular focus on Africa and India.
Her new book, Angry Weather: Heat Waves, Floods, Storms, and the New Science of Climate Change published by Greystone Books is due out on 17th September in the UK (and 2 days earlier in USA), and I for one, can’t wait to read it.
The attribution work Dr Fredi Otto has helped pioneer is extremely important and here’s why …
We need to be able to move beyond the general global trends that have tended to dominate the conversation on climate change. These demonstrate beyond doubt that human greenhouse gas emissions are the dominant factor in creating a warmer world, where extreme weather events are an expected outcome.
What has been harder is to pin a particular extreme weather event on man-made global warming.
I am hoping that this book will help me – and maybe you? – on a journey of discovery, to learn more about advances in our understanding of how to make that link (I am ordering my copy through my local bookshop The Yellow Lighted Bookshop, not Amazon, because (a) I support local businesses whenever I can and (b) YLB are just brilliant!)
In a previous era, when smoking and lung cancer cases first began to appear in the courts, the tobacco companies would use the defence that nobody could be sure if this or that particular case was due to smoking, or would have happened anyway. It was just bad luck!
No matter that the bad luck was rising exponentially amongst smokers.
The fossil fuel companies can and will use the same cynical defence.
Sir Richard Doll and collaborators did pioneering work to demonstrate the link between smoking and lung cancer in 1950, using novel statistical methods to overcome the charge that 'correlation does not mean causation'. In this case it most certainly did. Remember that, this long ago, the underlying biochemical mechanisms were not that well understood, and it was 3 years before we had even the basic structure of DNA established, in that seminal year when I was born 🙂
So, climate attribution science – the ability to pin man-made climate change on particular extreme weather events – is a complete game-changer.
The advantage here is that the underlying physical mechanisms are extremely well understood, relying on 200 years of accumulated fundamental science. No need here for any new fundamental physics.
But once again, statistics is the hurdle that must be overcome.
Because while, at a global level, the uncertainties over human causation have essentially decreased to the point where humanity's fingerprints are all over man-made global warming, as one gets to smaller and smaller scales the uncertainties mount up, for quite basic statistical reasons.
Once again, innovations are required in order to demonstrate the link at the level of a Hurricane Sandy, or the recent extreme Australian Fire season.
But imagine the implications of being able to make these connections.
We would then be in a position to hold businesses and politicians to account for their inaction, and put a price on the consequential damage, at least in the narrow sense of the quantifiable impact on property; something they at least understand [1].
As Dr Otto says:
“If governments don’t do their job and don’t do enough to put a stop to climate change, then courts can remind them of their purpose.”
So, far from being about some dry technicalities regarding climate attribution and statistical analysis, this book could become part of the tool-kit of everyone involved in action to limit the extent and severity of man-made global warming.
I really hope it does.
(c) Richard W. Erskine, July 2020
NOTES
[1] It is tragic that we seem – at least in the Anglo-Saxon culture – to put so much more weight on loss of property than loss of habitat or life even, but that bias can be turned to our advantage.
I have had a number of conversations over the last few years with friends and associates working in climate and green groups who are sceptical about the focus on electrification in decarbonising our energy. They are, for want of a better phrase, green electrification sceptics.
They will argue that only massive reductions in the consumption of energy are the way forward, while of course agreeing that we should stop using fossil fuels; they are not opposed to electrification per se.
They are neither climate deniers nor renewables deniers (those two being birds of a feather). But they do represent a significant strand of opinion that believes the UK electricity grid won’t be able to cope, within the required timescale, with the demands of transport (Electric Vehicles) and heating homes (using Heat Pumps), because of the huge amount of energy we currently use nationally in the form of gasoline and natural gas.
They would instead argue for a modal shift towards walking and cycling, and public transport and – for many homes – deep retrofit. This should be the focus they would argue, instead of trying to do the same things we do today – with all the wasted energy that involves – and try to decarbonise that.
Well, I agree with this sentiment.
Driving a few miles to a shop to get a loaf of bread when we could have walked or cycled; heating our homes with gas boilers with upstairs windows half open; and all this with no price paid for the damage done by our carbon emissions.
It’s crazy and I agree with that.
However, people do need to move around, and for some, in rural areas at least, cars are unavoidable, even with improved public services. We certainly should not need 30 million cars in the UK in 2050, or even 2030, but zero is also not the right answer. And we need to heat our homes in winter, and we are not going to apply PassivHaus levels of retrofit to the (according to BRE) 9 million 'hard to treat' homes in the UK – at least not on that timescale. We need a plan, and the numbers behind the plan must have a sound basis.
This is where I want to challenge green electrification sceptics, because I see a tendency to bolster their arguments with information that doesn’t stack up. This helps no one, because it doesn’t get us to a realistic plan we can all work towards. And we need to scale up whatever we do pretty damn quick, with solutions that we already have to hand (techno-futurism is a tactic used by the denialists to delay action, and we shouldn’t fall for it).
Electricity in 2016 was about 20,000 ktoe (Kilotonnes of oil equivalent – a unit of energy) and (natural) gas plus petroleum was about 150,000 ktoe.
So, the argument goes, we'd need to increase the electrical energy generated by at least 7 times to displace the gas and petroleum, and this doesn't sound feasible by 2050, let alone 2030 (the date by which many local authorities in the UK are committing to reach net zero in these sectors).
My Response
The basic issue here is confusing primary energy, shown on this graph, with delivered energy, and this overstates the amount of electricity that would need to be generated to displace the fossil fuels shown.
‘Primary energy equivalents’ includes not only the delivered energy, but any energy lost as part of the transformation from one form (e.g. gas) to another form (e.g. electricity) of energy.
But there are other factors to take into account when considering the feasibility of electrifying transport and heat. I have listed them here, and they fundamentally change the basis for any debate regarding the electrification of transport and heat in the next few decades:
Primary energy equivalents: For fossil fuels these shouldn’t be used as measures of the energy required in a transformed system, without appropriate adjustments.
End-use efficiency factors: An internal combustion engine is only about 30% efficient, compared to about 90% for an EV (see Note 1). A heat pump (typically 300% efficient) is at least 3 times as efficient as a gas boiler (90%), again meaning reduced demand to do the same job (see Note 2).
Modal changes: By doing more to get people out of cars (as the new Decarbonising Transport report from the UK Government calls for) – walking, cycling and more use of public transport – we can reduce the energy required for travel. Reduced consumption and electrification are not mutually exclusive.
Smoothing / lowering peak demand: On the consumption side at grid scale, there is a lot that can be done to lower and smooth demand. For EVs, smart charging means we can eliminate large peaks in demand. For buildings, off-peak water heating means fewer wind turbines to do the same job.
Energy storage / flexibility: Comes in many forms, including electrical (batteries), thermal mass (e.g. hot water tanks), pumped storage, etc. – EV cars can become part of the solution, rather than the problem, by helping to build a flexible and adaptive network at local and national scales.
These factors together mean that instead of 7 times more electrical energy per year, a future UK would need much less than this. Even if we carry on doing more or less the same things, it would be 2.7 times more, according to David MacKay (see Note 3).
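A crude sketch shows why the naive 7× figure collapses once end-use efficiency is accounted for. The 50/50 split of the ~150,000 ktoe of gas plus petroleum between heating and transport is my own simplifying assumption (it ignores industry, aviation and non-heating gas use), and the efficiency figures are the round numbers quoted above; this is cruder than, but in the same ballpark as, MacKay's estimate:

```python
# Why delivered energy, not primary energy, is what matters.
# All splits and efficiencies below are illustrative assumptions.
current_electricity = 20_000   # ktoe, UK 2016 (figure from the text)
gas_for_heat        = 75_000   # ktoe (assumed half of the 150,000 ktoe)
petrol_for_cars     = 75_000   # ktoe (assumed other half)

# An EV turns ~90% of its input into motion vs ~30% for a combustion engine,
# so each unit of petrol is displaced by only 0.30/0.90 units of electricity.
elec_for_transport = petrol_for_cars * (0.30 / 0.90)

# A heat pump at COP 3 replaces a 90%-efficient boiler, so each unit of gas
# is displaced by only 0.90/3.0 units of electricity.
elec_for_heat = gas_for_heat * (0.90 / 3.0)

total = current_electricity + elec_for_transport + elec_for_heat
print(f"Extra electricity needed: {elec_for_transport + elec_for_heat:,.0f} ktoe")
print(f"Multiple of today's generation: {total / current_electricity:.1f}x")
```

On these rough assumptions the grid needs roughly 3.4 times today's output, not 7 times; and that is before any modal shift or retrofit reduces demand further.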
If we adopted the level of modal shift and retrofit proposed in the Centre for Alternative Technology’s ZCB scenario (Zero Carbon Britain), then we could reduce annual demand for energy by 60%, including an 80% reduction in the energy required for all forms of transport (cars, buses, planes, etc.) (see Note 4).
With Covid-19, but even before, there were many questioning why someone needs to do a 100 mile round trip for a 40 minute meeting. The digitisation of many sectors of the economy can make a big dent in the need for journeys – by any means – in the future.
… can we reduce the energy we consume for heating? Yes. Can we get off fossil fuels at the same time? Yes. Not forgetting the low-hanging fruit – building-insulation and thermostat shenanigans – we should replace all our fossil-fuel heaters with electric-powered heat pumps; we can reduce the energy required to 25% of today’s levels. Of course this plan for electrification would require more electricity. But even if the extra electricity came from gas-fired power stations, that would still be a much better way to get heating than what we do today, simply setting fire to the gas. Heat pumps are future-proof, allowing us to heat buildings efficiently with electricity from any source.
Further thoughts on EVs
At this point, the Green Electrification Sceptic might say…
Ok, I see what you’re saying, but charging all the cars (that will remain at current levels for some time) is still going to need a massive increase in generating capacity, to deal with the peak load
The flaw in this argument rests on the assumption that everyone charges at the same time, but in reality the load can be spread, lowering the peak demand. Nationally, 73% of cars are garaged or parked on private property overnight, according to the RAC Foundation. Utilities are offering customers perks for signing up to win-win deals that let them do smart management of the grid. You just tell the service provider, via your charging app, that you want to be charged by 7.30am tomorrow morning, and the software decides when to schedule you. So the peak demand will be considerably lower as a result; in fact, EVs with their batteries then become part of the solution rather than the problem. Nor need the charging infrastructure be the hurdle many assume it to be, with most charging occurring at home. EVs will actually help create the flexible and adaptive grid we need in the move to renewables.
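A toy calculation shows the scale of the effect. The fleet size, energy per charge and time windows below are made-up numbers purely to illustrate load spreading:

```python
# Toy illustration of smart charging: same energy, lower peak.
# All figures are invented for illustration, not grid data.
cars = 1_000_000           # EVs needing a charge tonight (assumption)
energy_per_car = 7.0       # kWh each needs (assumption)

# Naive: everyone plugs in at 6pm and charges flat-out over 2 hours.
naive_peak_mw = cars * (energy_per_car / 2) / 1_000

# Smart: the same energy scheduled evenly across an 8-hour overnight window,
# with the software deciding who charges when.
smart_peak_mw = cars * (energy_per_car / 8) / 1_000

print(f"Naive peak: {naive_peak_mw:.0f} MW, smart peak: {smart_peak_mw:.0f} MW")
```

The total energy delivered is identical; only the peak changes, by a factor of four in this toy case. That is why smart charging matters more than headline generation capacity.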
A McKinsey report on The potential impact of electric vehicles on global energy systems concludes that the expected uptake of EVs globally is entirely manageable, assuming relatively simple measures such as the load shifting and smart charging we have discussed are deployed.
However, as a society we are still too obsessed with cars. Fetishising cars needs to end. A large EV SUV is still using a lot more resources and energy than would be needed by someone able to use regular and affordable public transport (say an EV bus), or a bike (electric or not). There is an issue of fairness at work here too, for the many people who cannot afford an EV, even a small, less resource hungry one.
Having an expensive EV car sitting mostly idle is not a great solution either, because it fails to maximise use of resources.
In the future, people imagine autonomous vehicles which would remove the need to even own a car, and instead we would have a ‘car as a service’ via an App on your phone, which could mean we need many fewer vehicles (but maximising their usage) to cover the same miles required (the cynic might say “isn’t that a taxi?” – yeh, but minus the human driver).
For cities, it is already questionable whether people need a car; many don’t bother because of the hassle.
This is not the case in the rural setting, so car ownership will not end anytime soon; but we need major investment in public transport, cycle lanes and cycle infrastructure in general – and policy measures like dynamic road pricing – to nudge people out of cars, as part of a comprehensive approach to decarbonising mobility and transport.
Further thoughts on Heat Pumps
Gas boilers and a lack of any charging for the damage caused by carbon dioxide emissions have encouraged a culture of flagrant wastage of energy in the UK. Someone with a house with a 6kW heat loss might typically have a 20kW gas boiler, so it can be heated in no time, even while windows are left open!
This is our instant gratification – ‘I want it now’ – culture.
There is no imperative to insulate the home because of artificially low gas prices (which of course will skyrocket in the future, just you wait and see).
It is the kind of attitude that means that when heat pumps are installed to replace gas boilers without any serious attempt to educate householders and monitor behaviour, the nameplate performance will be ruined: people continue to try to heat the town as well as their homes, or the heat pump is oversized, which also ends up killing its measured coefficient of performance (COP).
Let me spell this out. Heat pumps are superior in efficiency to condensing boilers, even if the heat pumps are powered by electricity from a power station burning natural gas. If you want to heat lots of buildings using natural gas, you could install condensing boilers, which are "90% efficient," or you could send the same gas to a new gas power station making electricity and install electricity-powered heat pumps in all the buildings; the second solution's efficiency would be somewhere between 140% and 185%. It's not necessary to dig big holes in the garden and install underfloor heating to get the benefits of heat pumps; the best air-source heat pumps (which require just a small external box, like an air-conditioner's) can deliver hot water to normal radiators with a coefficient of performance above 3.
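The arithmetic behind the quoted "140% and 185%" is simple: multiply the power station's efficiency by the heat pump's COP. The plant efficiencies and COPs below are my own assumptions, chosen to be consistent with the quoted range, not figures taken from MacKay's book:

```python
# Sketch: heat delivered per unit of gas burned, when the gas is burned in
# a power station and the resulting electricity drives a heat pump.
# Plant efficiencies and COPs are illustrative assumptions.
def system_efficiency(plant_eff: float, cop: float) -> float:
    """Heat delivered per unit of gas energy burned at the power station."""
    return plant_eff * cop

low  = system_efficiency(0.47, 3.0)   # pessimistic plant, modest COP
high = system_efficiency(0.53, 3.5)   # good CCGT plant, good heat pump
print(f"{low:.0%} to {high:.0%}, vs ~90% for a condensing boiler")
```

Either way, the gas-plant-plus-heat-pump route comfortably beats simply setting fire to the gas in a boiler.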
But people still seem to think it’s magic, and myths abound around heat pumps and especially Air-Source Heat Pumps (ASHPs) …
… they don’t work on older, larger homes
… they don’t perform well in cold spells
… they are really noisy
… you’ll need deep retrofit to Passivhaus levels to make it worthwhile
All untrue. But people have had bad experiences due to a combination of poor assessments, poor installation and tuning, and poor operation.
The more insidious issue with heat pumps is that people think it’s magic that you can apparently heat a house with cold water or air. The BBC’s record on reporting heat pumps is dismal (see Note 6).
Now, because only a minority of householders have a water or ground source sufficient to heat their homes, the assumption is that the great majority of homes will use air-source heat pumps (ASHPs).
The ‘Green Electrification Sceptic’ will say they understand how heat pumps work, but then repeat some of the myths around ASHPs and say that the Seasonal Coefficient Of Performance (SCOP) – the COP averaged over the year – is not the oft quoted 2.5 for ASHPs, but 2 or even lower. What I think this reflects is bad experiences based on poorly installed or operated systems. This bad experience – in some cases dating back years – is being used as a reason to reject ASHPs.
I attended an excellent webinar, hosted by Carbon Coop, from Paul Kenny, former CEO of the Tipperary Energy Agency, who conducted a pilot covering many homes (working with the Limerick Institute of Technology to assess the results). The video recording is here and his slides are here. These were all ASHP installations.
Between October 2017 and May 2018 the overall COP ranged from about 2.6 to 3.6 and averaged 3.1, pre-optimisation. During an exceptionally cold 2 week period, when external temperatures were down to -6°C, the COP was never below 2.5 and ranged from 2.5 to 3.
Key points to note:
They did necessary and sufficient retrofit but not to a Passivhaus standard.
There was no external wall insulation, for example.
They did not upgrade 2 panel radiators to 3 panels. They did pragmatic emitter upgrades.
When asked whether it was worth going for a Ground-Source Heat Pump (GSHP) because of the extra nameplate SCOP, Paul Kenny said no: if one has some extra money, it should be spent on upgrading emitters (e.g. getting those 3 panel radiators), and you can close the performance gap without the disruption of digging up an area of garden (assuming one even has that option, which many won't).
It is a very positive story of how to make ASHPs successful (and, btw, Carbon Coop are a great source of material, sharing real-world experiences of whole house retrofit).
He does caution that one needs a properly qualified assessment done, and ‘sufficient’ remedial retrofit is obviously required. But properly sized and installed, there are really no issues using the approach they have now refined. Every house is different, but the ingredients are the same.
He cautions also against oversizing a heat pump (and I think the combination of EPC (Energy Performance Certificate) and RHI (Renewable Heat Incentive) may push this outcome sometimes, by being pessimistic about the achievable SCOP), because then they may well be kicking in and out of operation, and this will kill their measured COP.
Increasingly we are seeing ASHP and PV combos (see some examples from Yorkshire Energy Systems here). While the annual peak in heat demand coincides with the minimum in solar PV output – hardly ideal – the ‘shoulder seasons’ (spring and autumn) do provide significant benefits, and some households are finding the net cost of operation competitive with gas. When gas finally attracts the level of carbon tax it deserves, ASHPs will be able to compete on a level playing field in price terms.
Final Thoughts
I support the call for reduced consumption in all its forms, and it should be encouraged as much as possible, but this is not mutually exclusive with electrifying transport and heat. On the contrary, electrification helps in this endeavour, because of increased efficiency and flexibility. But it needs to be coupled with approaches that ensure fair access and market reforms.
We need to acknowledge the issues hitherto in increasing the skills base for retrofit and renewable heat, and improving the quality of installs, but that is not a good argument for dismissing heat pumps. It’s an argument for a major push on the required training and quality systems, something the Government has lamentably failed to prioritise.
As CAT ZCB says, we need to ‘power down’ (stop wasting energy, use it more efficiently, and change some behaviours and norms), but then ‘power up’. The power-up part requires a lot of electricity from renewable capacity, and a fair amount of storage too. They have a plan we can get behind.
Currently, the UK Government does not have a coherent plan across all sectors, but whatever plan we decide to finally put some real effort into, it needs to be one that stacks up.
And for those that claim that the CAT ZCB models and assumptions are optimistic, it is worth looking at others who are independently modelling the transition, and are optimistic about our ability to decarbonise the grid in relatively short timescales (see this commentary on a Colorado study).
As the sadly departed David Mackay said, he was not biased in favour of any one solution, but was in favour of maths. We all need to be fans of maths, and be clear about our assumptions, when conceiving and debating options.
Ultimately, electricity is a great democratiser of energy. Generation is de-coupled from consumption in a way that was not (and never can be) true for fossil fuels used for cars or heating homes.
If you consume electricity in a light bulb, EV car, heat pump, fridge or lawn mower, you can take the renewable energy from any source – a wind turbine array in the North Sea, or a community energy scheme, or the solar PV on your house. All powered ultimately by the sun.
It is not surprising that those who have controlled the energy supply chains – from exploration and production to the petrol station forecourt or the gas meter at your home – are putting up a fight to retain control, with greenwashing galore and fake green gases, helped by lobby groups and big marketing budgets (as the dash for methane gets marketed as a dash for hydrogen). None of this has anything to do with finding the right solution for consumers or the planet.
What is more surprising is that greens do not always appreciate the importance of electrification to both the decarbonisation and democratisation of energy.
It’s time they did.
(c) Richard W. Erskine, 22nd July 2020
NOTES
NOTE 1 – EV efficiency compared to Internal Combustion Engine (ICE)
EVs are about 90% efficient (so for every 1 kWh of energy in its battery, an EV will use 0.9 kWh to do work), whereas the Internal Combustion Engine (ICE) is typically around 30% efficient (so for every 1 kWh of potential energy in the fuel, only 0.3 kWh will do any work). That is a relative efficiency of 3 to 1 (in both cases excluding the energy losses between the engine and moving wheel).
Another way to calculate it is to take a figure of 60 mpg for a petrol car and, using about 30 kWh per gallon, that equates to approximately 2 miles per kWh of primary energy. Whereas this source indicated a 41 kWh battery capacity for a Clio with a range of 250 miles, which is (250/41) approximately 6 miles per kWh. So, again, a relative efficiency of 3 to 1 in switching to a similar sized EV car.
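That back-of-envelope comparison can be written out in a few lines (the 60 mpg, 30 kWh/gallon, 250 mile and 41 kWh figures are those quoted above):

```python
# Petrol car: 60 mpg, with ~30 kWh of primary energy per gallon of fuel.
petrol_miles_per_kwh = 60 / 30                 # 2.0 miles per kWh

# Similar-sized EV: 250 mile range from a 41 kWh battery.
ev_miles_per_kwh = 250 / 41                    # ~6.1 miles per kWh

print(round(petrol_miles_per_kwh, 1))          # 2.0
print(round(ev_miles_per_kwh, 1))              # 6.1
print(round(ev_miles_per_kwh / petrol_miles_per_kwh, 1))  # ~3.0 relative efficiency
```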
NOTE 2 – Hydrogen
Hydrogen is not a miraculous source of energy; it’s just an energy carrier, like a rechargeable battery. And it is a rather inefficient energy carrier, with a whole bunch of practical defects.
Cars
A hydrogen fuel-cell car is about 40% efficient in its end-use of energy, whereas an EV is 90% efficient. If it is ‘green hydrogen’ created from a wind turbine through electrolysis, the overall efficiency for the hydrogen fuel-cell car is roughly 50% × 40% = 20%, whereas for the EV it is 90% (in both cases ignoring relatively minor network losses – for gas or electricity – and excluding the energy losses between the engine and moving wheel).
20% versus 90% is not a great look for hydrogen fuel-cell cars, and would mean (90/20 =) 4.5 times as many wind turbines to support the same level of green mileage by UK drivers.
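A quick sketch of the wind-to-wheels arithmetic behind that 4.5× figure, using the 50%, 40% and 90% efficiencies stated above:

```python
# Wind-to-wheels efficiency for the two pathways described in the text.
electrolysis = 0.50    # wind electricity -> green hydrogen
fuel_cell_car = 0.40   # hydrogen fuel-cell car end-use efficiency
ev = 0.90              # battery EV end-use efficiency

h2_pathway = electrolysis * fuel_cell_car   # overall hydrogen pathway
print(h2_pathway)                           # 0.2 (i.e. 20%)
print(round(ev / h2_pathway, 1))            # 4.5 times as many wind turbines needed
```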
Heat
And if hydrogen is a poor choice for cars, then providing ‘low temperature’ heat for homes is a little crazy in my view. Whatever hydrogen we do produce needs to be reserved for high temperature industrial applications.
But fossil gas is not the fuel of 2050. Hydrogen appears to be waiting in the wings to replace fossil gas in the grid. However, hydrogen is unlikely to be available in large quantities across Europe for home heating, as the available hydrogen goes first to those uses that rely on high temperature heat – which hydrogen can produce but electricity cannot. In the various 2030 and 2050 European decarbonisation scenarios, hydrogen for use in buildings is almost absent in 2030 and provides a small share of energy consumption in only some 2050 scenarios.
Importantly, projections show hydrogen will likely be significantly more expensive than a heat pump for home heating, and adapting to hydrogen will require upgrades of both the grid and home heating systems.
The availability and cost of hydrogen for domestic heat are at best uncertain. If low-income households are disproportionately reliant on gas, they will pay higher costs for infrastructure and be open to the uncertainty and price shocks of replacement fuels.
Sourcing Hydrogen
An important question is: where would the energy come from to manufacture the hydrogen? Fossil fuel companies would love that we continue to source it from methane (currently 95% of hydrogen is produced this way), but a by-product is carbon dioxide, and then you have to believe it can be successfully buried using ‘carbon capture and storage’ (CCS). Yet CCS is unproven at the scale needed, and the timescales require urgent action. So the full supply chain for hydrogen today is far from green. And then there is the cost of storing this gas, and the infrastructure.
A study done for the Climate Change Committee, Analysis on abating direct emissions from ‘hard-to-decarbonise’ homes (Element Energy & UCL, July 2020), looked at different scenarios. Interestingly, it seems that for those scenarios involving hydrogen, the (probably prohibitive) costs of CCS and the storage of hydrogen are not included in the comparative cost analysis (because of their uncertainties). Whereas the oft-stated hurdles to widespread adoption of heat pumps, such as developing the supply chain and raising skills (relatively trivial things to fix), are highlighted ad nauseam.
But these hurdles could be addressed tomorrow, with an appropriate push from Government (e.g. legislating for air-source heat pumps for all new builds and post-build energy performance certification, and no gas connection). This would force the laggardly big boys in construction to institute the training required and pump-prime the supply chain. It ain’t rocket science. The UK Treasury needs to end the short-sightedness that killed the zero carbon homes plan, and the Government should tell the UK’s largest house builder to pull their fingers out!
Other ways of producing hydrogen exist; the most talked about is electrolysis using excess energy from renewables, producing so-called ‘green hydrogen’. But efficiencies can never exceed 100% (and electrolysis is around 50% efficient), so this can never compensate for the lower efficiency of hydrogen fuel-cell cars compared with EVs.
Much of my 45-year career in industry and academia has been spent studying energy efficiency and power production and supply. I believe that hydrogen has a limited role in decarbonisation, and that businesses with a vested interest in promoting hydrogen are doing so at the expense of British consumers.
Michael Liebreich has written on the economics of hydrogen in Separating Hype from Hydrogen, both on the Supply Side and Demand Side.
He has also published a Hydrogen ‘Use Case Ladder’ showing which applications of hydrogen make sense and which don’t. Cars and Heating are in the ‘don’t make sense’ section of the ladder (see NOTE 7).
Hydrogen will play an important role in industry, and on the electricity power grid, providing a form of stored energy that addresses the need to balance generation and demand over longer periods. Michael Liebreich shared a figure – the hydrogen use ladder – showing where hydrogen can and should be used, and where it shouldn’t:
Whichever way you look at it, the hype around hydrogen for transport and heat is overblown.
Synth Gas
Nevertheless, there will need to be a role for synthetic gas – hydrogen or others – as an energy carrier and/or storage medium.
The CAT ZCB report includes a significant role for synth methane for energy storage and backup. Their argument being that they can leverage existing gas infrastructure for backup power generation, for example, using truly green synth gas (so no CCS required).
Chemical storage is an important potential complement to gravitational (pumped storage, hydraulic storage) and battery storage, because it can be inter-seasonal in scope. But each must be judged according to its qualities (cost, carbon intensity, capacity, latency, storage, transmission, etc.).
Imagine arrays of solar PV in the Sahara generating electricity; how do you get that energy to where it is needed (Africa and Europe, say)? It could be via an electricity distribution network, but it could also be by producing synthetic gas and transporting it via pipelines. If the gas is easy to liquefy (as ammonia is), other options are possible. Instead of Liquefied Natural Gas (LNG) from Qatar, we could have liquefied renewable sunshine from Australia, which could become a leading post-coal energy exporter, with the help of ammonia.
Conclusion
Ultimately, though, electricity is a great democratiser of energy, when freed from fossil fuels in its generation. Heat Pumps can get their electricity from any low carbon source and so, as David Mackay said, are future proofed.
NOTE 3 – Sustainable Energy without the hot air (2009), David Mackay
This book, available online, should be required reading for anyone who wants to discuss how to decarbonise a country’s energy supply and usage – not because it was the final answer on any scenario (nor claimed to be), but for its approach, which was to provide a toolkit for thinking about energy; to increase our energy literacy. The kilowatt-hour (kWh) is a usefully sized unit of energy employed throughout the book, and also one that appears on our utility bills. A kWh per person per day (kWh/p/d) is a measure that makes it simple to assess our average consumption and compare different options.
Mackay showed how energy consumption in the UK would drop purely through electrification (assuming we still do more or less the same things). Since fossil fuels would be displaced by electricity generated without fossil fuels, we would eliminate most of the carbon emissions, but of course the electricity generation would need to increase in the process (Mackay said that 18 kWh/p/d should rise to 48 kWh/p/d – an increase by a factor of about 2.7, or roughly 170% additional electricity capacity) – see Figure 27.1 on p.204:
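Mackay’s arithmetic can be checked in a couple of lines (18 and 48 kWh/p/d are his figures; the ‘roughly 170%’ above is a rounding of ~167%):

```python
# Mackay's electrification arithmetic (Figure 27.1 figures quoted above).
today_elec = 18    # kWh per person per day of electricity today
future_elec = 48   # kWh/p/d after electrifying transport and heat

factor = future_elec / today_elec
extra_pct = (future_elec - today_elec) / today_elec * 100

print(round(factor, 1))   # ~2.7 times today's electricity
print(round(extra_pct))   # ~167% additional capacity
```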
NOTE 4 – Centre for Alternative Technology’s scenario
Inefficiencies exist in the combustion of fossil fuels to produce useful ‘work’, but also in different end-use settings, such as electrical white goods (e.g. fridges) and lighting.
There are also reductions in demand possible by changing some of the things we do today, such as increasing the use of walking, cycling and public transport compared to car use, for example. Taking all those into account, CAT propose a 60% reduction in demand in their ZCB scenario:
How is demand reduced? For homes, it is a combination of retrofit and smart controls:
For transport it is mainly through reductions in car use and electrification of transport…
Leading to a very large reduction in transport energy demand …
NOTE 5 – Net efficiency illustration
In this quote from Mackay, he mentions a net efficiency of using gas to electrify heating, based on a figure provided on page 150. I will do a simple calculation to illustrate a net performance figure. Mackay used a figure of 53% efficiency for a gas-powered electricity generator (top of the line at the time of Mackay’s book) and an 8% transmission loss (92% transmission efficiency); and an ASHP with a COP between 3 – at the lower end of modern ASHPs – and 4. The overall efficiency would be in the range 0.53 × 0.92 × 3.0 = 1.46 to 0.53 × 0.92 × 4.0 = 1.95, that is, between 146% and 195% efficiency. Mackay uses the range 140% to 185% in the quotation. The point is that any of these figures is much greater than the 90% efficiency of sending the gas to a boiler in the home to provide heating.
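The same calculation as a tiny script (0.53, 0.92 and COPs of 3–4 are Mackay’s figures as used above):

```python
# Net efficiency of gas -> power station -> grid -> heat pump.
gen_eff = 0.53    # best-in-class gas power station (at the time of Mackay's book)
grid_eff = 0.92   # i.e. 8% transmission loss

for cop in (3.0, 4.0):
    net = gen_eff * grid_eff * cop
    print(f"COP {cop}: {net:.0%}")   # 146% and 195%, vs ~90% for a gas boiler
```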
NOTE 6 – Heat Pumps are an old idea and not magic
By the mid-19th century, heat was understood as the jostling of atoms – the ‘kinetic theory of heat’ as pioneered by Maxwell and Boltzmann. The greater the temperature above absolute zero (0 kelvin, or −273.15°C), the greater the average velocity of molecules. A sea of water at 5°C contains a huge amount of thermal energy. We should be careful not to confuse the temperature of a body with its energy content! The energy content will be a function of the temperature and volume of the body of water (the same principle applies to a body of air). With a large enough volume, the temperature becomes relatively less important; there will still be plenty of energy to harvest.
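To illustrate the temperature versus energy-content distinction, here is a rough calculation (the 1,000 m³ volume is purely illustrative; 4186 J/(kg·K) is the specific heat capacity of water) of the heat released by cooling a modest body of 5°C water by just one kelvin:

```python
# Thermal energy released by cooling 1,000 m^3 of water by 1 K.
mass_kg = 1_000 * 1000    # 1,000 m^3 of water ~ 1,000,000 kg
specific_heat = 4186      # J/(kg·K) for water
delta_t = 1               # cool it by one kelvin

energy_j = mass_kg * specific_heat * delta_t
energy_kwh = energy_j / 3.6e6   # 1 kWh = 3.6 MJ
print(round(energy_kwh))        # ~1163 kWh from a mere 1 K drop
```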
There is no magic. Heat pumps harvest the ambient heat (which can be in the air, ground or water) that ultimately derives its energy from the Sun. This is done through a process that is like a reverse fridge, but in this case moving heat from the outside (often at a relatively low temperature) to the inside (at a relatively higher temperature), with the help of a refrigerant medium and a pump and compressor. No magic is required, just a little A-level physics.
Typically, if a heat pump uses one unit of electrical energy to drive the system it produces three units of heat. This equates to a 3/1 = 3 efficiency factor, or 300%.
“It’s barely believable that this sea water has enough heat to warm anything, it’s pretty chilly at this time of year, but yet, thanks to an extraordinary technology called a heat exchanger, it’s the sea that’s going to heat this house.”
It is incredible but true that a BBC energy correspondent appears to not understand the distinction between the temperature of a body of water and its thermal energy content, and believes the technology is novel and new. This is not the only report he has made on heat pumps that demonstrates a complete misunderstanding of how they work.
The gas network lobbyists championing allegedly sustainable gas in various forms must absolutely love Roger.
BBC’s More or Less is a real gem, shining a light on numbers bandied about in the news.
Today’s episode again discussed the UK Government’s Covid-19 testing claims.
It demonstrated to me quite clearly why we cannot trust the numbers being presented by the Government (Govt).
I have listened to the extremely informative 5.5 minute segment of More or Less (MoL) covering this topic – from just after 8’30” into the programme to about 14’ in – and taken some notes that I thought were worth sharing:
On Sunday 10th May Boris Johnson said
“we must have a world beating system for testing potential victims and for tracing their contacts, so that all told, we’re testing literally hundreds of thousands of people every day”
Matt Hancock has tweeted again on Monday 11th May, claiming “100,490 tests yesterday” (i.e. Sunday)
MoL have concluded that the Govt did not reach or surpass the 100,000 target.
Matt Hancock’s figure included 28,000 samples put in the post that day, but yet to be tested.
Actual tests conducted, that produce results, are “a long way” from reaching its target of 100,000 (let alone “ramping it up”), according to MoL.
Even by its own “somewhat questionable figures”, the Govt has only reached its target twice in May, and MoL don’t think they have ever genuinely reached it.
Govt has not acknowledged that a sample in the post is not equivalent to a test completed.
Sir David Norgrove, chair of the UK’s Statistics Authority, wrote to the Health Secretary asking that he shows more clearly how targets are being defined, measured and reported, to “support trustworthiness”. Ouch!
The Govt won’t publish the actual number of completed tests (positive or negative) from postal samples. Instead they simply add the number of positive tests to the daily number of confirmed cases. Their ‘excuse’ being that they wanted to avoid double counting [my comment: as though this is not possible by other means in this day and age!].
MoL have no ideas on the actual number of postal tests being carried out, despite repeated attempts to find out.
It is not just the postal tests that are causing confusion.
Since the middle of April, the Govt’s testing data have included tests from other organisations, such as universities, and theirs are not just swab tests but antibody tests that can show who has had the virus in the past. They are doing this to look at the prevalence of the virus and to answer other research questions, such as how accurate the home testing kits are.
The Govt say that because the research tests are not for diagnostic purposes they are not included in the daily count of people who are tested. Yet, this week more than 17,500 of these tests were included in the number of tests completed!
“It’s almost as if they don’t care if the number of tests figure is consistent or indeed accurate, as long as it’s big” (Tim Harford, MoL Presenter)
This leads to another issue. The number of tests carried out is not the same as the number of people tested!
Now, of course, some individuals may be tested multiple times so we would expect the number of people tested to be lower than the number of tests [my comment: I would say clearly over a period of a week say, for medical staff, but for the general public? surely not].
But recall Boris Johnson talked of “hundreds of thousands of people every day”.
On May 10th, almost 70,000 tests were actually carried out (not counting the postal test samples put in the post that day), but the number of people actually tested was 37,000 (as MoL gleaned from Department of Health (DoH) data).
This is roughly 2 tests per person each day.
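The arithmetic behind that ‘roughly 2’:

```python
# The mismatch MoL highlighted for 10 May (approximate figures from the text).
tests_carried_out = 70_000
people_tested = 37_000

print(round(tests_carried_out / people_tested, 1))  # ~1.9 tests per person that day
```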
MoL have asked the DoH for an explanation “but they haven’t got back to us” yet.
MoL are not sure what could be the issue, but wondered if there is an error in the collating and labelling of data. They will keep trying to get an explanation.
<End of Notes>
After listening to this MoL episode, I was discussing it with my ex-nurse wife who suggested “maybe they are using two swabs”. Duh, of course.
Is this the simple explanation we need? So, I tried to find out what happens at test centres.
The Govt YouTube video on the test centre process (viewed by only about 100,000 people) doesn’t mention 2 swabs; and implies just one.
Then I found Jack Slater’s piece in the METRO (Sunday 15th March 2020):
“One swab will be put in the back of your throat, and another will be placed inside both nostrils.”
So 2 tests are done for each 1 tested person, at least at some test centres; possibly all.
My cynical thought was then: is there some creative ambiguity going on in not distinguishing ‘tests’ from ‘people’? I tend to prefer the cock-up theory of history, but who knows?
Thank you More or Less for again offering a clear interrogation of the Government’s claims on testing.
I would conclude that the Government is in a complete shambles with respect to simply counting the number of actual tests carried out per day, and also, the number of people tested (whose sample or samples have been tested on said day); and clearly distinguishing these numbers.
Unless the Government can demonstrate clarity and accuracy in its presentation of the testing numbers, how can we trust it to implement a coherent strategy to achieve a “world beating system for testing potential victims and for tracing their contacts”?
Or even, how can it execute its basic job of protecting the public?
Currently, one has to conclude that the UK Government’s Covid-19 testing figures are untrustworthy.
The ‘science’ represents the evidence, the ‘Is’, but we need values, the ‘Should’, to arrive at what’s possible, the ‘Can’, and then leadership, capabilities and capacity to turn that into action, the ‘Will’. Only when the plans are executed is it ‘Done’. The refinement loops come from ‘measure effectivity’ and ‘weigh opinion’, and there will always be a tension – sometimes a conflict – between these.
It has been a mantra repeated every day at the UK Government’s Covid-19 press briefing that they are following, or are guided by the science.
What does this mean or what should it mean?
Winston Churchill famously said that scientists should be on tap, but not on top.
This meant, of course, that politicians should be the ones on top.
Scientists can present the known facts, and even reasonable assessments of those aspects of a problem that are understood in principle or to some level, but for which there remains a range of uncertainties (due to incomplete data or immature science). As Donald Rumsfeld said, there are known knowns, known unknowns and unknown unknowns. Science navigates all three domains.
Yet, it is the values and biases, from whatever colour of leadership is in charge, that will ultimately drive a political judgment, even while it may be cognisant of the evidence. The science will constrain the range of options available to an administration that respects the science, but this may be quite a wide range of options.
For example, in the face of man-made global warming, a Government can opt for a high level of renewables, or for nuclear power, or for a radical de-growth circular economy; or something else. The science is agnostic to these political choices.
The buck really does stop with the politicians in charge to make those judgments; they are “on top”, after all.
So the repeated mantra that they are “following the science” is rather anti-Churchillian in its messaging.
If instead, Ministers said, “we have considered the scientific advice from the Chief Scientific Adviser, based on discussions of a broad range of scientific evidence and opinion represented on SAGE (Scientific Advisory Group for Emergencies), and supporting evidence, and have decided that the actions required at this stage are as follows …”, then that would be correct and honest.
And even if they could not repeat such a wordy qualification at every press conference, it would be like a proverbial health warning – available on Government websites, like on a cigarette packet – useful for anyone who feels brave enough to start smoking the daily propaganda about how brilliant the UK is in its response to Covid-19 (a response which, despite a lot of attacks on it, has not been as bad as some make out; the Chief Scientific Adviser (CSA) and Chief Medical Officer (CMO) have rightly gained a lot of credibility during the crisis).
The uncomfortable truth is that ‘following the science’ is about proaction not reaction; about listening to a risk foretold years in advance and taking timely and substantive actions – through policies, legislation, projects, etc. – to mitigate known risks and build resilience in the face of them.
Pandemics of either a flu variety or novel virus kind have been at the top of the UK’s national risk assessment for a decade. Both SARS and MERS were warnings that South Korea took seriously to increase their preparedness. The UK was also warned by its scientists to be prepared. The UK Government under different PMs has failed to take the steps required.
Listening to the science in the midst of a pandemic is good, but doing so well in advance of one, and taking appropriate action is a whole lot better. Prevention is better than cure, is a well known and telling adage.
Of course, the naysayers will come out in force. If one responds to dodgy code prior to 2000 and nothing bad happens, they will say that the Y2K bug was a sham, an example of alarmism “gone mad”; they will not acknowledge the work done to prevent the worst outcomes. Similarly, if we mothball capacity for a pandemic, then once again, expect the charge of alarmism and “why so many empty beds?”.
Our economy is very efficient when things are going well – just-in-time manufacture, highly tuned supply chains, minimal redundancy, etc. – but not so great when shocks come, and we discover that the UK cannot make PPE (personal protective equipment) for our health and care workers and we rely on cheap off-shored manufacturing, and have failed to create sufficient stocks (as advised by scientists to do so).
Following the science is not something you do on a Monday. You do it all week, and then you act on it; and you do this for risks that are possibly years or decades in the future. You also have to be honest about the value-based choices you make in arriving at decisions, and not hide behind the science.
Scientists don’t argue about the knowns: the second law of thermodynamics, or that an R value greater than 1 means exponential growth in the spread of a virus. But scientists will argue a great deal about the boundary between the known and unknown, or the barely known; it’s in their nature. Science is not monolithic. SAGE represents many sciences, not ‘the’ science.
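A toy illustration of why an R value above 1 means exponential growth (a simple generation model, ignoring depletion of susceptibles; the numbers are purely illustrative):

```python
# Each generation of infections is R times the size of the last.
def cases_after(generations: int, r: float, initial: int = 100) -> float:
    return initial * r ** generations

print(round(cases_after(10, r=1.1)))  # ~259 cases: even R=1.1 keeps growing
print(round(cases_after(10, r=0.9)))  # ~35 cases: R below 1 means decline
```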
For Covid-19 or any virus, “herd immunity” is only really relevant to the situation where a vaccine is developed and applied to the great majority of the population (typically greater than 85%), with a designed-in strong immunity response. Whereas immunity resulting from having been naturally infected is a far less certain outcome (particularly for Coronaviruses, where there is typically a weak immune response).
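For context, the textbook herd-immunity threshold is 1 − 1/R0 (a standard epidemiological approximation, not a figure from SAGE; the R0 values below are illustrative):

```python
# Fraction of the population needing immunity to push R below 1.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

for r0 in (2.5, 3.0, 6.0):
    print(f"R0={r0}: {herd_immunity_threshold(r0):.0%}")
# Higher R0 (or imperfect vaccine efficacy) pushes required coverage
# towards the >85% figure mentioned above.
```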
So, relying on uncontrolled infection as a basis for herd immunity would be naive at best. It is true that it was discussed by SAGE as a potential outcome, but not as the core strategy (as Lawrence Freedman discussed here); the goal was always to flatten the curve, even if there was great debate about the best way to achieve this.
One of the problems with the failure to be open about that debate and the weighing of factors is that it leaves room for speculation as to motives, and social media has been awash with talk of a callous Government more interested in saving the economy than in saving lives. I am no fan of this Government or its PM, but I feel this episode demonstrates the lack of trust it has with the general public, a trust that Boris Johnson failed to earn, and is now paying the price in the lack of trust in his Government’s pronouncements.
Yet I do have confidence in the CSA and CMO. They are doing a really tough job, keeping the scientific advice ‘on tap’. They cannot be held responsible for the often cack-handed communications from Ministers, and failure to be straight about PPE supplies and the like.
Some people have criticised the make up of SAGE – for example, because it has too many modellers and no immunologists and no virologists. I don’t understand the lack of immunologists.
Virologists are clearly key for the medium-long term response, but a vaccine is probably over a year away before it could be deployed. So, at the moment, containment of the spread ‘is’ the Emergency, and social distancing, hand-washing, isolation, hospitals, testing, etc. are the tools at hand, and it might be defendable that they are not currently the focus of the discussion.
Groups at Oxford University and Imperial College are being funded to help develop vaccines and to run clinical trials. Virology is not being ignored and it is rather odd to suggest otherwise. But again, transparency should be the order of the day – transparency on who is invited onto SAGE, when and why, and transparency on the evidence they receive or consider. But having a camera in there broadcasting live discussions may inhibit frank debate, so is probably not a great idea, but the Minutes do need to be published, so other experts can scrutinise the thought processes of the group.
The reason why Dominic Cummings (or any other political role) should not be sitting on SAGE, in my view – even if they make no contribution to the discussion – is that there is a risk (a certainty, probably) that he then provides a backdoor summary of the discussions to the Prime Minister, which may conflict with that provided by the CSA. It is the CSA’s job to summarise the conclusions of the discussion and debate at SAGE and provide clear advice, that the Government can then consider and act on. The political advisers and politicians will have plenty of opportunity to add their spin after receiving the scientific advice; not during its formation or communication.
Now, it seems, everyone agrees that testing and contact tracing will be key tools in ending or reducing the lock down, but of course, that means having the systems in place to implement such a strategy. We don’t yet have these.
The British Army, I understand, don’t use the term “lessons learned”, because it is so vacuous. We have “lessons learned” after every child abuse scandal and it doesn’t seem to make much of a difference.
A lesson truly learned is one that does not need that label – it is a change to the systems, processes, etc., that ensures a systemic response. This results in consistently different outcomes. It is not a bolt on to the system but a change in the system.
Covid-19 asks lots of questions not just about our clinical preparedness but the fairness of our systems to safeguard the most vulnerable.
Like a new pandemic, the threats from global warming have also been foretold by scientists for decades now, and UK politicians claim to be listening to the science, but they are similarly not acting in a way that suggests they are actually hearing the science.
As with Covid-19, man-made global warming has certainties and uncertainties. It is certain that the more carbon dioxide we put into the atmosphere the warmer the world will get, and the greater the chance of weather extremes of all kinds. But, for example, exactly how much of Greenland will melt by 2100 is an on-going research question.
Do the uncertainties prevent us taking proactive action?
No, they shouldn’t, and a true political leader would take the steps to both reduce the likely size of impacts (mitigations), and increase the ability of society to withstand the unavoidable impacts (adaptation), to increase resilience.
The models are never perfect but they provide a crucial tool in risk management, to be able to pose ‘what if’ type questions and explore the range of likely outcomes (I have written In Praise of Computer Models before).
Following the science (or more correctly, the sciences) should be a full-time job for any Government, and a wise one would do well to listen hard well in advance of having to respond to an emergency, to engage and consult on its plans, and to build trust with its populace.
Boris Johnson and his Government need to demonstrate that they have a plan, and to seek support for what they aim to do, both in terms of prevention and reaction. They need to do that not just for the Covid-19 crisis, but for the array of emerging crises that result from man-made global warming.
We need to change the system, before the worst impacts are felt.
(c) Richard W. Erskine, 2020.
FOOTNOTE – Sir Mark Walport and John Ziman – on science policy and advice
I listened to Sir Mark Walport a few years ago in a conversation about the role of Chief Scientific Adviser (a post he has held), which was very interesting.
“ON STANDING FOR SCIENCE AND WHERE SCIENCE FITS IN POLICY”, SIR MARK WALPORT, Science Soapbox,
[This episode was recorded on July 21, 2016 in front of a live audience at Caspary Auditorium at The Rockefeller University.]
He said that any policy must look at a problem through 3 different lenses:
– Evidence lens
– Deliverability lens
– Values lens
and that science can only help with the first of these.
He made the point that trust in science is very context specific: Science can say anything about the Higgs Boson and be believed, but on an issue like embryology, values kick in and there will be much less trust.
He also makes a strong distinction between ‘pollable’ questions and non-pollable questions. I will give examples.
“Does extra CO₂ in the atmosphere lead to increased global warming?” is a non-pollable question (the unequivocal answer is: yes); whereas “should the UK focus on renewables or nuclear power to decarbonise the grid?” is a pollable question (answer: Brits much prefer renewables, by a wide margin).
Scientists need a special range of skills to be able to do the advice job, above and beyond their scientific skills. John Ziman explored the differences between scientific discourse and political debate in his paper (2000) “Are debatable scientific questions debatable?”
He explains how complex most scientific questions are, with rarely a simple resolution, and conducted in a way quite different to political debate (yet no less argumentative!). The two styles sit awkwardly together.
Yet public and political discourse (especially on social media, but in newsprint, and parliament too) often expects a binary answer: yes or no, right or wrong. Shades of grey are often not tolerated, and if you don’t ‘choose a side’, expect to get caught in the crossfire.
I haven’t read the belatedly released SAGE Minutes yet but I expect there will have been lots of discussions on points where Walport’s lenses (Evidence, Deliverability, Values) sit uncomfortably alongside each other.
At some point, I imagine a fly on the wall, hearing …
“we need to do test, trace and isolate as soon as possible”
“agreed, but we need to recognise the constraint that the test capacity is limited at the moment, so we’ll have to wait till we have flattened the curve enough, to reduce the testing demand, but also build up capacity; meanwhile we cannot avoid a lockdown”
“can someone answer this – how well will the public comply and how would this change the numbers?”
“we ran some sensitivity analysis, and we need very high compliance to make it work”
“…”
Leading to a messy compromise set of ‘options’, with the scientists NOT the ones with the authority to choose between them.
The scientists didn’t choose a context where Governments had failed to take on board prior recommendations over some years, to build capacity in PPE, etc. So the advice is very context dependent.
It is highly disingenuous of politicians to say they are ‘following the science’ when that is just one element in the decision making, and where a poor starting position (e.g. the lack of prior investment in pandemic responsiveness) is neither something they influenced, nor can change.
…. o o O o o ….
Updated with Diagram and Footnote on 28th June 2020
“you need three things for paintings: the hand, the eye, and the heart. Two won’t do. A good eye and heart is not enough; neither is a good hand and eye”
David Hockney reflecting on the Chinese attitude to art
I wrote previously about my ‘awakening’: moving from a science background and finding my way to becoming an artist since my retirement.
Art may seem to be completely different to the science I grew up with, because there appear to be no rules. But the artist and scientist do have quite a lot in common, as I discussed previously:
a curiosity and playfulness in exploring the world around them;
ability to acutely observe the world;
a fascination with patterns;
not afraid of failure;
dedication to keep going;
searching for truth;
deep respect for the accumulated knowledge and tools of their ‘art’;
ability to experiment with new methods or innovative ways of using old methods.
The difference is that in science we ask specific questions, which can be articulated as hypotheses that challenge the boundaries of our knowledge. Whereas in art, the question is often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’, then, ‘How do I express this in a way that is interesting and compelling?’.
In art we do not have rules like in science, but in order to make progress, it is important to articulate guidelines, or fundamental principles if you like.
My starting point is a desire to create representational art, but including impressionistic styles, and even abstractions. I am not interested in trying to create a perfect copy of a scene, because as I often say, if that was my objective I would use a camera, not a paint brush.
This is not about being lazy and not wanting to create all that perfect detail, but rather to highlight the fact that a painting is doing a completely different job to a photograph; it is an expression not a record.
This is the second essay in a series I am writing on Becoming an artist, and I want to turn to the fundamentals.
The fundamental principles can be loosely grouped under the following headings:
Heart – learning to develop one’s creative impulse;
Eye – (sometimes expressed as Head, to provide alliteration!) learning to observe as an artist does;
Hand – learning general techniques applicable to any medium.
I cover just those principles that I have internalised particularly over the last few years as a student of Alison Vickery, my mentor on this journey.
I haven’t read Hockney’s reflections on the Chinese artist tradition, but it is curious that I independently – in my first draft of this essay – came up with Practice, Observation and Technique. It was Alison who suggested it fitted well with the Chinese approach that Hockney espouses, so I changed the headings to Heart, Eye and Hand. Maybe these really are universals that any student of painting, from any culture, will recognise on reflection.
I am always questioning things “why do you do that?” or “how do you do that?”, and then trying to find the why behind the why.
Alison has never written down this list, this is my appreciation of the lessons I have learned, and I am sure I will have missed some important elements out, or presented things differently to how a professional artist like her would articulate things.
I am sharing my journey and the ideas that have helped me, and so feel free to use and abuse these ideas in your journey.
Heart
The emotional side of painting is in many ways the most important. When you feel free to express yourself in the way you want, you are, by definition, an artist.
While you try to be someone else and to follow somebody else’s standards of what you think you should be and do, you will struggle to find your voice, and your language for expression.
Under the heading of ‘Heart’ I highlight the things that most influenced me in finding my voice, which includes aspects of expressiveness.
Loosen up
In sport you need to do a warm up, and in art it is also important to free up any tension in your mind or body. Try to start a session on a cheap piece of paper first (so you won’t stress about ‘wasting’ an expensive piece of art paper).
Even when doing a piece of work you wish to develop, you still need to be bold and work fast, initially at least, and avoid tightening up.
Start with the biggest brush you can get away with, and hold it loosely, not tightly. When you start painting, avoid fine brushes altogether; they will kill your ability to work loose.
For watercolour, try starting with a size 12 brush – a decent quality one that will hold a lot of pigment but still make a point.
Loosen up. Make mistakes, they may be happy ones.
Work the whole area
The opposite extreme would be to start painting in the bottom left and work your way up to the top right, or for Michelangelo to carve out the perfect head of the David before moving on to doing his willy! No, as with a sculptor, you must ‘chip away’ all over the subject in broad bold strokes.
As you move to less bold and more detailed strokes, still keep working all areas of the ‘canvas’.
A huge benefit of this approach is that you will begin to see things that influence how you develop the painting. Maybe you had a preconception of how you wanted to develop it, but you can now see how to make it better.
Don’t be afraid of white space
Working every area of the paper is not the same as covering every area with paint – it is ok and actually desirable to leave areas of white space (or whatever the background colour might be). You need the light to get in.
White might be used to suggest light falling on a subject; a painted tree might be dark on one side where there is shadow, and white on the other, suggesting light from the sun.
This is particularly important with a medium like watercolour, where you need to compensate for an inherent difficulty in creating good contrast, and the white space can help to achieve it.
Use sketches and studies
As part of trying to get to know the scene better, do several quick sketches or studies, maybe starting with a charcoal drawing, then a very quick watercolour. Give yourself a short time just to produce something. Use cheap paper again.
Perhaps tear off bits of paper and see how it changes your perspective on things.
Mess around. Don’t self censor. Just go for it.
If you create something that you think ‘that looks interesting’, then cut it out and paste it into an art journal, and add some written side notes “I wetted the paper and dropped in a swathe of cobalt blue, then dabbed it with kitchen towel to create a cloud”. Build up an inventory of such experiments.
Play with composition
After doing an initial sketch of a scene, you can use a ‘window’ cut out of card (with the required aspect ratio for the final painting), placed at different distances from the sketch, and use it to see how much you want to include in the final piece.
Maybe you decide that the house in the foreground is a distraction from the copse on the hillside which is what you really want to be the main focus in the composition.
Learn when to stop
Picasso once said that a finished painting is a dead painting.
It is so easy to over-work a painting, so learning where that inflexion point occurs – between improving a piece and killing it – is perhaps the most difficult skill of all.
Always err on the side of slightly underdone rather than overdone.
Eye
Painting is not photography. You are not trying to replicate what a camera would see.
You are creating an impression that speaks to you (and you hope, will speak to others, but that is a bonus). While the work is representational, that does not mean you cannot be impressionistic.
You can decide to remove the annoying road sign that is upsetting the composition; make the clouds more moody; or whatever you care to. But it is important to learn to observe. Having a good eye is as important as having a good brush!
Paint what catches your eye or interests you
It might be the shape of a tree that intrigues you, or the curve of a river, or the curious shape of a cloud, or the tree line on a brow of a hill. Whatever it is, it is a great subject for you, because you are emotionally invested in it.
Learn to be acutely observant
How much time are you spending looking at the paper and your brush strokes, and how much time observing the subject matter? As a novice it is often an 80/20 split in time, when if anything it should be a 20/80 split.
The more you look, the more you see. The brain is telling you that the grass is green, but look closely and in the evening sunlight there seems to be some blue grass in the shadows of the tree – impossible? No, trust what you see.
The light from the window makes the shoulder of the sitter look almost white, but how can that be – they are wearing a black jacket. Look again, trust what you see.
Even if you don’t particularly like drawing, it is worth having a go, because it is another way to help develop one’s observational skills.
Think tonally
It is so easy to become obsessed with finding the right colour to use, but much more important than colour is tone.
Seeing the dark patches lurking in the depth of the wood, and noting that even on the apparently uniformly yellow daffodil there are shades and shadows that help create a sense of volume; these are examples of being tonally observant.
Having a good tonal range can really bring a painting to life.
Doing charcoal studies can really help to develop a sense of tone, unencumbered by considerations of colour.
When preparing to compose a picture, establishing the tonal range of the scene or subject is one of the most important things you can do.
Hard and soft edges
Often we feel compelled to paint or draw a hard edge because our brain says ‘there is a vase there, so I will draw around it’. Look more carefully and the brightly lit side of the vase blends in with the brightly lit background, creating a soft barely discernible edge. Resist drawing what you cannot see!
Look through someone else’s eyes
Take time out to step back, get a sup of tea, and then imagine you are someone else viewing the painting for the first time.
Does it grab you? Have you resolved the different elements of the composition? Have you established a focal point that draws the viewer in?
Look out for symmetry
Humans seem to like patterns in nature and one of the most universal patterns is simple bilateral symmetry – the kind created by the reflection of a scene in a body of water (with a horizontal line of symmetry), or created by the centre line of a tree (with a vertical line of symmetry).
It can really help draw in the viewer to exploit the symmetries we see around us, in our paintings.
Background
A background may naturally present itself, as in a landscape, but in a studio, doing a still life for example, there may be a white wall behind the subject and little else. To avoid a painting looking flat, it is helpful to create a background, even where none exists. Maybe some imagined shadows or some texture on a wall will help.
Think about how a background might enhance the composition. It is so easy to get lost in a subject in a foreground, and forget how important a background can be in developing a composition.
Hand
Most, but not all, of the techniques described below are applicable to any of the painting mediums I have in mind: charcoal, pastel, watercolour, ink and acrylic.
Later essays will focus on techniques specific to each medium. There are hundreds of different techniques and ‘tricks of the trade’ out there. You will never stop learning new ones, but it is easy to get overwhelmed. I have included here the ones I feel are most important, at least to me.
Experiment with mark making
Try using different shaped brush heads, and other tools to create marks on a page.
We cannot all be Van Gogh who created his own brilliant style of mark making, but we can all just have a play.
To illustrate this, think about how you might paint a branch of a tree. You could use a classical pointed watercolour brush and carefully follow a line to mark out the branch. But you might struggle to control the thickness of the branch.
Alternatively, you could use a very wide headed flat brush to create the branch with a single dab of the brush.
Use brushes of different shapes and sizes, twigs, bunched up cloth, sponges, palette knives, or whatever; depending on the medium.
There are no rules with mark making – only that you approach it with confidence – so best to just try out as many variations as you can. Find out what works for you.
Play with negative spaces
A brightly lit vase on a table with a dark background might be approached first by painting the dark background – the vase will appear out of the darkness.
This idea can be used in different ways, even when doing a simple sketch. Wainwright’s pencil drawings of the Cumbrian hills often include sheep, brightly lit from above. So instead of outlining the back of a sheep, he drew the grassland in the background; a sheep then appears as the negative of the grassland.
Use layering / glazes
When a medium is translucent or thinly enough applied to effectively be so, one can build up multiple layers to create a desired effect.
In some cases – particularly with pastels – the painting may need to be fixed before proceeding further to avoid muddying the colours.
Surprisingly, even when using a medium as basic as charcoal, it is good to think in terms of layering.
With watercolour, glazes can help to develop depth.
Just as an old piece of furniture develops a patina, a painting can also develop a sense of complexity from multiple glazes.
Thin and thick
In any medium, it is normally best to start thin and only later to use a thicker form of the medium.
In acrylics, this is very important (in oils also); using a more diluted medium at first. But the same applies to watercolours where one starts with light washes on the wet side, and only later might use some gouache on the drier side for some highlights.
The idea applies to pastel painting also. You should use light strokes with the side of a pastel stick at first.
Minimal palette
Try when working in colour to use a minimal palette. Primary colours and white at a minimum.
It is a great discipline to learn how to make one’s own greens, browns and greys. With 2 yellows and 2 or 3 blues you can make a huge range of greens, for example. As with all rules, you may want sometimes to break this rule; a ‘sap green’ can be difficult to replicate and is useful for bright foliage.
By using a small palette it makes it easier to tie the painting together, chromatically.
One can always add a few additional hues to finish a painting.
Knocking back
Sometimes a pigment is too intense (termed saturated) for the current situation, such as on a grey day in winter. This is also something that one needs to do when trying to represent landscapes as they recede towards the horizon. If you add saturated hues where they are not really experienced in practice, the resulting painting can look a bit kitsch.
By adding a little of the complementary colour (on the opposite side of a colour wheel), it dulls the intensity of the pigment you are going to use; the technical term is de-saturation. But it doesn’t always work well; adding red to green can easily go wrong if you add more than the tiniest amount. Sometimes adding a brown like Raw Sienna is an easier way to ‘de-saturate’ with more control.
You can also, of course, reduce the hue intensity by adding white (white gouache is an essential addition to any watercolour set). If white is too milky for the scene add a tiny amount of a suitable hue to it, like purple or yellow. Experiment.
Use of resist mediums
A ‘resist’ medium is something you can place on the paper (or canvas, or board) that will not absorb the pigment being applied to the surface. This can be for a range of reasons.
A masking fluid can be used to precisely cover a shape that must remain white in the final piece, or at least, not be covered by whatever is about to be painted over the medium. The fluid must dry fully then be removed by rolling a finger over it. This is ideal, for example, for snowdrop flowers.
The other kinds of resist medium tend to be ones that are used to cover a line or area and remain in place. For example, wax or a clear oil pastel crayon. These can be used to create texture – when wanting to create some extra effects in clouds, or in some landscape or on a building. One can even use masking tape (particularly with acrylics). For example, when doing landscapes and wanting to create a bold expressive sky, you could mask the land (finished or not) and then have free rein with the brush, then repeat in reverse when the sky has dried.
Alternatively, resist might be used to suggest gaps between trees or foreground grasses, or some other effect where you don’t want the background (usually white, but not necessarily so) painted over.
Wet and dry
Particularly with watercolour but also with acrylics, the amount of water used when applying pigment can have a big impact on the picture. There is frequently a benefit to starting quite wet and allowing pigment to flow a bit. This avoids getting hard edges too early in a painting’s development. You can also just drop in other pigments and just see what happens.
You may need to use a hairdryer at some point to allow you to move onto a new wash or glaze / layer.
Later on, it may be you need to do some relatively dry work, dragging a relatively dry and lightly loaded brush – without completely covering the area – in order to deliberately generate striations. In a watercolour, this might be done with watercolour pigment added to white gouache, for example.
Dabbing, rubbing and scraping
Sometimes, it is useful to be able to partially remove medium in order to create a necessary effect.
When doing a charcoal sketch, the rubber is as important as the charcoal in building up a patina to develop the image. Also, the edge of a rubber can then allow you to create light reeds on a pond, or silver birch trees on the edge of a woodland.
In watercolour, a paper towel can be all one needs to instantly create a cloud in a sea of blue that has just been painted. Using a clean and damp (but not too wet) brush, you can lift out paint if needed; those silver birches for example, using a wide flat brush.
For acrylic, scraping an upper layer of pigment away – before it has completely dried – to reveal pigment below can be used in a number of ways, such as helping to suggest a line of trees on the ridge of a hill. You can even use sandpaper to do this.
Flicking and spraying
No one wants to paint every leaf on a tree and there is no need to. Look at a tree painted by Turner or Constable and you will see a fair number of brush strokes for foreground trees, to give the impression of detail, without excessive labour, but only broad strokes for distant trees.
Modern painters will often use an additional technique of flicking or spraying pigment to suggest the necessary complexity of the foliage. It can be repeated for different hues to create additional complexity.
Flicking of white gouache, slightly diluted can be used to help suggest the froth of a breaking wave, for example.
It is useful to have a cheap brush with quite stiff bristles (such as one might use for applying PVA in collage; if not available, an old toothbrush will also do the trick), as this allows one to do flicking by merely stroking the bristles with your forefinger (rather than using the wrist), giving much greater control.
Consider the interplay of simplicity and complexity
As we have seen with use of layering, resist and flicking techniques, there are several ways in which to develop complexity, and the human eye is intrigued by complexity.
That is why we prefer to look at a rusty corrugated tin roof to one that is pristine and uniform. Yet we also like simplicity. A perfectly rendered blue sky, a flat sea and a wide sandy beach – with just a small sailing boat in the distance – brings a sense of calm.
In developing an idea for a painting we can observe this interplay of complexity and simplicity in the world around us, and then decide how we might render it.
Consider the interplay between precision and imprecision
The painter must choose where to put effort into developing detail.
Typically, the subject is given more attention and other elements of the composition are allowed to be imprecise. A photographer, when doing a portrait amongst a landscape, will often use depth of field to make the background lose focus, and in a way so does the painter, but with greater freedom to emphasise or play with this imprecision.
It may be that one needs the woodland on the distant hill to frame the picture of the family by the river, but the trick is to be very imprecise in how it is rendered – less is often very much more. If you add detail to the background when it is not really experienced in practice, the resulting painting can look a bit kitsch.
Choice of paper or other surface
There is a bewildering array of different surfaces to paint on.
Papers can come in different weights and also levels of roughness of the surface.
Pastels require some grain on the surface to ‘take’ the pastel. Watercolour paper can be smooth or mottled, and the choice depends a great deal on how wet you want to work, and whether you find the texture a help or a hindrance.
You will learn about stretching paper, and about priming paper or board with gesso.
For any single sheet of paper, you need a board and masking tape to secure it to the board. Whether you need an easel or not depends on how you end up working. Some artists work so ‘wet’ they need to use a flat surface to work on with the ability to raise one side to cause the medium to flow; this is a long way from the classic image of an old master with the canvas on an easel.
Ensure you have some cheap cartridge paper you can experiment with, so you don’t get frozen by the thought that ‘this board is so expensive I better make this one a masterpiece!’.
It can also help to have a range of sizes, so try doing small watercolour pieces, before migrating to larger formats. It is quicker to get a result and also takes the pressure off you.
Whereas for charcoal, you generally need to work on a bigger piece of paper straight away; but a relatively low cost large format ring-bound sketch book with hard covers (around A3 size) is fine for this purpose.
Mixed media
In truth, many paintings use mixed media, although some more obviously than others.
For example, a watercolour may use a number of other media:
pencils or pens to resolve some features (but best used sparingly), such as railings;
inks to help develop greater tonal depth;
gouache to finish a piece with greater colour intensity, for flicking effects or for white highlights;
pastels to help develop a light glaze of texture – for foliage or other features – as a finish.
There are also numerous special materials that can be tried, such as liquid pencil, to create effects.
But there is no obligation to throw everything at a painting, and it can be easy to get carried away with mixing media.
A great artist like Kurt Jackson has developed his own brilliant style – a vocabulary that is special to him – and his use of mixed media feels unforced and natural.
It is always best to start simple and work on adding ingredients over time, as and when they come naturally to you, rather than merely including them to try to emulate Kurt.
Conclusion
These fundamentals are the things I have internalised from an intensive three years of learning to become an artist, with the help principally of my mentor Alison Vickery, but also some other helpers along the way.
In the following essays, I want to show how these fundamentals are reflected in sketches, studies and a few developed pieces I will share, from my endeavours.
I often forget these principles, catching myself in an act of regression, and then have to remind myself. Alison’s voice is often in my head …
‘paint what interests you’
‘don’t get too fiddly’
‘work the whole area’
‘stop right there!’
‘put down the pencil’
‘think tonally’
‘loosen up’
…
I call them “Alison’s Aphorisms”.
It takes years to internalise the fundamentals of being an artist, and even then, it is so easy to get carried away and still fall flat on one’s face.
Equally, as time goes by, nice surprises happen.
You find that you have ‘accidentally’ created something quite good, and you scratch your head and ask ‘How did I manage that?’.
Don’t be surprised, you are becoming an artist!
Gradually, the better stuff happens more frequently and the not so great becomes less frequent. The art folder gets fatter and the dustbin less full of discarded pieces.
But everything you do provides a learning moment. Keep some of the not so great paintings to remind yourself of how far you have travelled.
Keep asking questions; it worked for me when I was a scientist and as a consultant, and it is something I continue to do as an artist.
Keep experimenting, and keep asking questions.
Making mistakes is fine, because that is the only way to learn.
(c) Richard W. Erskine, 2020
The next essay in this series will be Becoming an artist: sketchbooks.
This is about my journey. Everyone’s journey will be different. I am addressing those, who like me, have spent a long time thinking about doing art, but never finding the time or courage to do it.
How many people suffer from that debilitating idea “I can’t paint*”? This is often because someone told you so, or gave your confidence such a knock, you never quite recovered enough to try again [* paint, or anything else you would like to do – learn to play an instrument, be a sculptor, do maths, play the drums, or whatever].
Schoolchildren are expected to make a choice quite early in life as to what they want to be. At face value it seems reasonable to expect a student to start to specialise at some point, but the mirror image of this is that they must ‘drop’ a whole load of stuff that is valuable in life. Little wonder that in older age people often pick up on subjects they loved but did not have an opportunity to develop when young.
I chose to specialise in science, even before I was forced to make that choice.
I’d happily freeze to death looking at the moon and stars through a small but much loved telescope, clutching my Observer’s Book of Astronomy (I think I may have had a 1st edition from 1962, when I was just 9). Geometry was my favourite subject.
A little later, but still quite young, I had a laboratory, and loved to do experiments with bits of apparatus such as a Liebig condenser, regularly causing a stink that required all the windows in the house to be opened to clear the smell.
I was never a rote learner. I always asked questions and challenged my teachers. I love the ability of small children to ask “why?” then why again, to never be afraid to ask questions. But it is also important to learn how to listen to the answers, to reflect on them and then to do work to explore things more deeply. This gives rise to more questions.
I wanted to understand the world and how it was put together, and went on to study Chemistry at university. To highlight my tendency to question things, there is a story from my final exams I want to share.
There was a question about chemical bonding I didn’t like because of the way it was framed, so I answered it just like I knew the examiner would want it answered, but then wrote “However, I want to challenge the framing of this question, and believe the question ought to have been …”.
I then wrote a second answer to my newly framed version of the question. The external examiner (Prof. S F A Kettle, I believe) was so impressed he told my mentor that he would have happily awarded me an upper first if such a thing existed. Nevertheless, I was very proud of the 1st Class Honours degree I did receive.
I stayed in academia for a while, doing a PhD at Cambridge and then a postdoc in Bristol, where I met Marilyn, who was to become my wife.
For a range of reasons, I decided to leave academia in 1982, and worked in computer-aided design for several years, but for the final 30 years of my career up to 2016 I was an information management consultant, helping large organisations to break down the information silos in their organisations, and be better custodians of their knowledge.
I enjoyed using creative ways to discuss and articulate problems. I never stopped asking questions. Clients liked my thoughtful approach, and the fact I didn’t try to ram software products down their throats (as had been their experience on the previous times somebody had promised to fix their issues).In ways that I now recognise only in retrospect, my scientific and artistic sides both found expression in the way I did consultancy.
Throughout this time, I was always questioning myself, always learning from new engagements about other ways to look at things. Even when one thinks one has mastered a skill, there will always be opportunities to explore nuances or discover new variants of a skill.
In the 63 years before I retired I had tried on a few occasions to learn to paint. Even at school there was a group of us scientists who showed artistic promise, and the art teacher allowed us access to the studio to paint just for fun, not for any examination. I also attended watercolour classes 30 years ago, but it never went anywhere.
Meanwhile, one of the favourite activities that Marilyn and I enjoyed over these years was visiting art exhibitions, and we have numerous catalogues to testify to this. I was great at looking at art, but not doing it.
There could have been many reasons for the failure of my early attempts to develop further.
I had a time-consuming and at times stressful job, involving a lot of travel abroad. Marilyn and I brought up two girls, and there were always too many projects (that, funnily enough, seems not to have changed!). In Bristol I was an early recruit to Scientists Against Nuclear Arms (SANA), and became its Secretary for a while. Writing and speaking took up a lot of my extracurricular head space (SANA later became SGR, Scientists for Global Responsibility, and is still active).
Since my retirement, I have become very active on climate change, giving talks and helping to found a group, Nailsworth Climate Action Network in my home town, which I am currently Secretary of.
Despite being busy with family – now with grandchildren – and home, garden, climate change, etc. I decided I wanted to have another go at learning to paint.
Marilyn and I have for several years tried to stop buying stuff – we have too much already – and instead buy vouchers for experiences or classes.
About 6 years ago she bought me a voucher for a set of 1-to-1 art lessons from our dear friend Di Aungier-Rose. Unlike previous art teachers I had tried, Di was very good at getting me to loosen up and not stress about what I was doing; to not obsess about colour and so on. To just have fun, and see what emerged. She imparted little nuggets of wisdom here and there, but without overloading me.
This unlocked the first door to me becoming an artist, and gave me a boost in confidence. I knew from that point on that I had an innate ability to become an artist, even while I knew it would be a long journey.
However, the ‘3 steps forward, 2 steps back’ rule seemed to hit me. I got waylaid by climate change, sorting out my pension for retirement, etc. There is always a long list of things stopping us doing what we want!
Also, I was really hankering after learning how to use watercolours, and had a lot of admiration for the work of another local artist, Alison Vickery. So, a few years ago Marilyn bought me another present: to attend a batch of classes at Alison’s weekly art class, held at Pegasus Art in Stroud.
I will talk more about what Alison has taught me in later essays in this series, but the key point here was that I started to carve out a time during the week – every week – when I wouldn’t be distracted by the other things crowding in on me. Wednesday afternoon was to be art time. So even if I didn’t manage to do any art during the rest of the week, this time was sacrosanct.
I have kept attending these classes ever since.
Maybe that is the secret – and of course a lot easier when you are retired – to find a space to do your art.
If you are very disciplined and no longer require a mentor, then it is perfectly possible to create this time and space for yourself. It may mean creating a Woman Shed or Man Shed in the garden, to get away from domestic distractions.
However it is done, you need to find your time, and your space.
You need to unlearn the “I can’t do X” gremlin in your brain.
Now it is time to loosen up; to experiment; to ask questions; and to rediscover the joy of learning something new.
How do people respond to ‘signals’ regarding their health and well-being?
Some people will refuse to respond, such as these smokers I saw outside a hospital a few days ago (where I was visiting my daughter, thankfully now discharged after a nasty infection; not coronavirus).
There is a large sign ‘Strictly No Smoking’, that is routinely ignored.
And what of people who read Richard Littlejohn and others, for years in the Daily Mail, Daily Telegraph, The Spectator, etc., railing against the ‘nanny state’ or ‘elf and safety’ ?
Large swathes of people are effectively inoculated against alarm, and will not respond to signals, even if a megaphone was put to their ear.
These are the super-spreaders of denial and complacency.
I am not talking here of professional dissemblers in the climate realm who make their living trying to undermine the scientific consensus. Those who write opinion pieces claiming, wrongly:
more CO2 is good for us because plants will flourish (Matt Ridley);
or claiming ocean acidification is non-existent (James Delingpole);
or that it’s the sun’s fault (Piers Corbyn);
or that we are about to enter an ice age (Daily Mail and Daily Telegraph every 6 months for the last 10 years).
Like stories of Lord Lucan sightings, these lazy opinion formers simply dust off the old rubbish to serve it up again, and again. Year in year out. It pays the mortgage I suppose. And when they tell people what they want to hear – that we can carry on regardless – there is no shortage of chortling readers. Ha ha ha. How very funny, poking fun at the experts.
No, I am not talking about these dissemblers, but rather, the mass of those who have been reading this rubbish for 30 years and are now impervious to evidence and scornful of experts.
And there is an epidemic of such people, who believe
no need to be alarmed, staying calm and carrying on regardless
It is not just health or climate change, but is applied universally. For example, the Millennium Bug was apparently overblown according to these people (having seen the code that needed fixing, I can assure you, it wasn’t).
However, those who deal with addressing threats are in a no-win situation: if they act and prevent the worst happening, then people – who are largely unaware of what is being done behind the scenes – will say ‘you see, it wasn’t a problem’. If they didn’t act, then guess who would get the blame.
Yet when people do raise the alarm, such as when parents wrote letters complaining of the risks of the vast colliery tip adjacent to the Welsh town of Aberfan, they are often brushed off. There, the result was a disaster that lives on in our memory (see Note).
Now we have the Covid-19 virus.
It is no surprise that there have been many saying that people are being unnecessarily alarmed; and the message is the same – we should ‘Keep Calm and Carry On’.
It’s just like seasonal flu, don’t worry. It will disappear soon enough.
These are often the same people who rail against ‘climate alarmism’.
Man-made global heating will be orders of magnitude worse than Covid-19, across every aspect of society – food security, sea-level rise, eco-system collapse, mass migration, heat stress, etc. – and over a longer timescale but with increasing frequency of episodic shocks, of increasing intensity.
Unlike Covid-19, there will be no herd immunity to climate change.
But we have the ability to halt its worst impacts, if we act with urgency.
We cannot quarantine the super-spreaders of denial and complacency, but we can confront them and reject their message.
I wonder, as the mood seems to be changing, and experts are now back in fashion it seems, could this be a turning point for action on climate change?
Can we all now listen to the experts on climate change?
Can we Keep Calm, but Take Action?
(c) Richard W. Erskine, 2020
Note
There was a collapse of part of the massive colliery spoil tip at 0915 on 21st October 1966. The main building hit was Pantglas Junior School, where lessons had just begun. Five teachers and 109 children were killed in the school.
One example of the numerous correspondence raising concerns prior to this was a petition from parents of children at The Grove school, raising the issue of flooding undermining the tip. This was passed up through the bureaucracy, but a combination of the Borough Council and National Coal Board failed to act. As the official report noted, in unusually strong words:
“As we shall hereafter see to make clear, our strong and unanimous view is that the Aberfan disaster could and should have been prevented. … the Report which follows tells not of wickedness but of ignorance, ineptitude and a failure in communications. Ignorance on the part of those charged at all levels with the siting, control and daily management of tips; bungling ineptitude on the part of those who had the duty of supervising and directing them; and failure on the part of those having knowledge of the factors which affect tip safety to communicate that knowledge and to see that it was applied” (bullet 18., page 13)
1966-67 (553) Report of the tribunal appointed to inquire into the disaster at Aberfan on October 21st, 1966
Professor Katharine Hayhoe, a leading climate scientist, and hugely influential communicator, is often asked:
What is the first thing I should do about climate change?
Her answer is simple:
Talk about it!
How on Earth can that reduce our carbon footprint you may ask?
Meanwhile, it is a common phenomenon when climate groups start that the first thought is often ‘we need to build a solar PV array on the edge of town’.
I am not saying don’t do that, but there are big benefits to talking about it, and not rushing to build.
Firstly, if people are not fully on board with the idea that urgent action is needed to address global warming, then some talking will really help change hearts and minds.
Secondly, there are many different ways we can reduce our carbon footprint, and we need to push forward on all fronts. Don’t let the enthusiasm for one project crowd out ideas for other things that need to be discussed, and weighed up.
Thirdly, if we focus solely on technological solutions like electric cars, we potentially exclude a lot of people who are put off by technology, or cannot afford to invest in them; and would like a reliable bus service to be a priority!
We need to build a much bigger tent where we discuss topics like consumption, waste, heating, public transport, energy efficiency and local food. Topics that will draw in as wide a population as possible.
Finally, by developing a wide perspective on all different approaches and potential initiatives, the group will be in a better position to call on community support for emerging projects.
Some will argue: but why is the challenge of addressing dangerous global warming being placed on the shoulders of householders and local communities?
Surely, Government and big business have the resources and power to make it happen?
I reject the implied binary thinking here.
In fact, Government, big business, pension funds, County Councils, District Councils, Parish Councils, local businesses, householders – you and me – can all make a difference and influence what happens.
Ok, so there are some things that only Governments and big business can do. But ultimately, every product and service is – directly or indirectly – created for us.
We have agency – we can decide:
what we do,
how we do it.
and how often we do it.
We can choose to car share twice a week; or opt for that staycation; or reduce our meat consumption. Every family is different, but we make lots of choices, intentionally or not; and every choice matters.
We started NailsworthCAN in 2016 around the time of the Paris Agreement. Our focus was always on practical action rather than protest. But action comes in many forms: engaging, influencing, networking, capacity building, constructing.
We have spent a lot of time developing the conversation with different groups in the community: with the Town Council, Church, Schools, Rotary, Transition Stroud, etc., and with our previous and current MP. We act sometimes to lead, sometimes as a catalyst, and sometimes simply to provide support to others. Hence the use of the word ‘network’.
We have run stalls, organised talks on diverse topics, and identified a range of projects. We created and distributed a Carbon Pledges sheet. We have met and talked with hundreds of local people, and we have recruited members with a fantastic range of skills and knowledge.
We have run workshops to gather ideas on local projects that people are interested in, across a range of topics –
Food and agriculture;
Mobility and transport;
Buildings and their environment;
Energy generation;
Waste;
Nature and the Environment;
Health and Wellbeing.
We have worked with the Town Council to help develop an outline plan across these areas.
One specific initiative is to conduct a survey of hospitality venues in town to assess current practice on energy use, waste, etc., and identify ‘wins’ for these venues, the town and the planet.
Another initiative is to develop a 5-year tree planting plan on council land.
And another is a community-led domestic retrofit scheme.
And yes, we have a few renewable energy generation schemes in the pipeline.
Each of the climate groups I have met has its own personality, way of organising, and methods for coordinating their efforts with their respective Parish councils.
Each has had ideas on how to push forward on different fronts, and all can learn from each other.
The great evolutionary biologist E. O. Wilson – when interviewed on BBC Radio 4’s ‘The Life Scientific’ – said:
“Humanity has Palaeolithic emotions, medieval institutions and god-like power, … and that is a dangerous combination”.
But I would respond by saying we also have the capacity to overcome our destructive power, and work collectively to reveal the positive side of our humanity.
Don’t be critical if you start with talking, then move to small actions.
Just don’t stop at small actions.
Small actions can provide learnings and help us move to larger ones.
Share and celebrate success, as we do on social and printed media.
Small conversations can be the foundation for bigger ones, resulting in significant actions, and system change. Ultimately, this is all about system change; business as usual will not get us to where we need to be.
Remember, it is a marathon not a sprint, and like a marathon, we need to help each other stay the course.
I wish Minchinhampton every success as it starts its conversation.
Thank you.
…. o o O o o ….
Richard W. Erskine, Secretary of NailsworthCAN
Invited talk at the launch of Minchinhampton Climate Action Network.
That’s a belief I am finding increasingly common, but it really isn’t what the science is telling us.
The science is saying that things are very serious and every year we fail to “bend the curve down” as Greta Thunberg puts it, the worse the outcomes. We know from the IPCC (Intergovernmental Panel on Climate Change) 1.5oC Special Report that 2oC is significantly, perhaps surprisingly, worse than 1.5oC.
That is not a reason for a dystopian view that all is lost if we fail to get to net zero within 12 (or is it now 11?) years.
The science is not that certain. The IPCC said that by 2030 global net emissions must fall by 45% versus 2010 levels to achieve 1.5oC, and reach net zero by 2050.
That is not to say we should not have highly ambitious targets, because the sooner we peak the atmospheric concentration of CO2, the sooner we peak the global warming (see Note 1).
Because it is such a huge challenge to decarbonise every sector of our economies, we should have started 30 years ago, and now we have to move very fast; whatever date you put on it. So, if I question some of the dystopian memes out there it is certainly not to question the need for urgent action.
Feedbacks and Tipping Points
I think what lies at the root of the dystopian message is a belief that tipping points – and there are quite a number in the Earth system – are like dominoes, and if one goes over, then all the rest follow. At a meeting I went to that included policy experts, XR, scientists, and others, I got into a chat about feedbacks and tipping points.
The person I spoke to was basically 100% convinced that if we did not get to net zero after ’12 years’ we would set off feedbacks and tipping points. It really would be game over. I want to summarise my side of the conversation:
I appreciate your concern about tipping points; they are real and need to be taken into account.
It is complicated and there are cases that can runaway (take Venus), but there is often a response that limits a particular feedback.
For example, extra CO2 causes warming, which due to the Clausius–Clapeyron relation means that additional water vapour (the gaseous form of water, not clouds) is added to the atmosphere (7% extra for every 1oC of warming). Since H2O is also a strong greenhouse gas, this causes more warming.
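The roughly 7% figure can be illustrated with the Magnus approximation to the Clausius–Clapeyron relation (the formula and constants here are a standard textbook approximation, my choice for illustration, not from the original conversation). Around typical surface temperatures it gives roughly 6–7% more water-holding capacity per degree:

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation to saturation vapour pressure over water, in Pa."""
    return 611.2 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Fractional increase in the atmosphere's water-holding capacity per 1C
# of warming, around a typical surface temperature of 15C:
t = 15.0
increase = saturation_vapour_pressure(t + 1) / saturation_vapour_pressure(t) - 1
print(f"{increase:.1%} more water vapour capacity per 1C of warming")
```

The exact percentage varies slightly with the starting temperature, which is why the figure is usually quoted as “about 7%”.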
This is a crucial ‘fast feedback’ included in climate models. It means that the expected 3oC of warming from doubling CO2 in the atmosphere is actually 1oC from the CO2 and 2oC extra from the H2O feedback (see Note 2).
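The split of 3oC into 1oC direct plus 2oC of feedback can be pictured as a convergent geometric series: each round of warming adds a fixed fraction (a gain, here 2/3, an illustrative figure consistent with the numbers above) of the previous round. Provided the gain stays below 1, the series converges rather than running away:

```python
def total_warming(direct, gain, steps=100):
    """Sum the feedback series: direct * (1 + g + g^2 + ...)."""
    total, increment = 0.0, direct
    for _ in range(steps):
        total += increment
        increment *= gain  # each round of feedback adds a fraction of the last
    return total

# 1C of direct CO2 warming with a water-vapour feedback gain of 2/3
# converges towards 1 / (1 - 2/3) = 3C:
print(total_warming(1.0, 2/3))
```

A gain of 1 or more would make the series diverge; that, not the mere existence of feedback, is what a true runaway requires.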
Ok, so why doesn’t this warming carry on as a runaway (there is plenty of water in the ocean)?
The reason is Stefan’s Law (or ‘Planck Response’).
A body at temperature T emits energy at a rate proportional to T to the power 4. So the loss of heat accelerates, and at some point this halts the feedback process (see Note 3).
A way to think about this is a plastic container with a hole at the bottom (say 7mm wide). Pour water from a tap into the container at a constant rate, say half a litre per minute. What happens? The water level rises until it reaches a height at which it stays steady. At that point the pressure at the base of the container has increased enough that the rate of flow of water out of the bottom equals the rate of flow in. They are in balance, or ‘equilibrium’.
If I now plug the 7mm hole and drill a 6mm one instead (yes I did this for a talk!), then with the same flow rate coming in, the level of water rises, because it requires more pressure at the base to drive water out at the rate required, to bring the system back into balance (when the level of water stops rising).
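The container experiment can be sketched numerically. Assuming Torricelli’s law for the outflow (with an illustrative discharge coefficient of 0.6 – my assumption, since the real value depends on the hole’s geometry), the equilibrium level scales as one over the hole diameter to the fourth power:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def equilibrium_level(inflow_m3s, hole_diameter_m, discharge_coeff=0.6):
    """Water level at which outflow (Torricelli's law) balances inflow.

    Outflow = Cd * A * sqrt(2 g h)  =>  h = (Q / (Cd * A))^2 / (2 g)
    """
    area = math.pi * (hole_diameter_m / 2) ** 2
    return (inflow_m3s / (discharge_coeff * area)) ** 2 / (2 * G)

inflow = 0.5e-3 / 60  # half a litre per minute, in m^3/s

h7 = equilibrium_level(inflow, 0.007)  # level with the 7mm hole
h6 = equilibrium_level(inflow, 0.006)  # level with the 6mm hole
print(f"7mm hole: {h7*100:.2f} cm; 6mm hole: {h6*100:.2f} cm")
```

Narrowing the hole from 7mm to 6mm raises the settled level by a factor of (7/6)^4, about 1.85, just as a more opaque atmosphere raises the planet’s settled temperature.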
In both cases the same amount of energy leaves the system as enters it, but in the latter case extra energy has been trapped in the system.
This is a very good analogy for what happens with the Greenhouse Effect (see Note 4): the level of water is analogous to the trapped energy (which means a hotter planet), and the world warms even though the rate at which energy comes in (from the Sun) is constant. We can explain the Greenhouse Effect via this analogy simply:
The increased heat trapping power of the atmosphere with an increased concentration of CO2 restricts the exiting (infra-red) radiation to space – this is analogous to the reduced hole size in the container – and so …
The temperature of the Earth rises in order to force out radiation at the correct rate to balance the incoming energy – this is analogous to the increased level of water in the container.
This demonstrates that the planet must stabilise the flow of energy out so that it equals the energy in, but with extra energy being captured in the process (see Note 5).
The main point is that feedbacks do not inevitably mean there is a runaway.
Professor Pierrehumbert wrote a paper reviewing the possibility of a runaway in the sense of heading for a Venus scenario, and it seems unlikely: “it is estimated that triggering a runaway under modern conditions would require CO2 in excess of 30,000 ppm”.
Even in more complex cases, such as melting sea ice and ice sheets, the feedbacks do not imply inevitable runaway, because in each case there is often a compensating effect that means a new equilibrium is reached.
But there is not one possible end state for a particular level of warming, there are numerous ones, and we know from the climate record that flips from one state to another can happen quite fast (the ocean conveyor belt transports huge amounts of heat around the planet and this is often implicated in these rapid transitions).
So, this is not to say that the new equilibrium reached is a good place to end up. Far from it. I agree it is serious, and the level of CO2 in the atmosphere is now unprecedented in over 3 million years. We are warming at an unprecedented rate, thousands of times faster than the Earth has seen in that period.
It is very scary and we don’t need to say a runaway is inevitable to make it even more scary!
Arguments that a feedback will trigger another, and so on, ad infinitum, may sound plausible but are not science, however confident and high profile the speaker may be. It does the XR cause no good to simply repeat wild speculation that has no scientific foundation, merely on the basis of a freewheeling use of the ‘precautionary principle’.
I hope this clarifies my point, which was not to minimise the urgency for action – far from it – I am 100% behind urgent action.
However, I think that sometimes it is important to be scientifically pedantic on the question of feedbacks and runaway. The situation is scary enough.
I really worry about the dystopian message for our collective mental health, and that this might freeze people and even limit action amongst the wider public who are not activists (but need to participate in our collective actions).
We need a message of hope, and this is it:
The sooner we can peak the atmospheric concentration of CO2 (by stopping emissions), the sooner we can halt warming, and
the lower that peak in the atmospheric concentration, the lower the level of warming.
We can make a difference!
We have to act to make hope meaningful, because being alarmed, and frozen in the headlights, and unable to act, is not a recipe for hope.
However, being duly alarmed and having hope are not mutually exclusive, if we recognise we have agency. We can all make a contribution, to agitate for, or implement, a plan of actions and the actions that follow.
(c) Richard W. Erskine, 2019
NOTES
(1) The IPCC 1.5C Special Report (p.64) talks about ‘committed warming’ in the oceans that is often assumed to mean that the Earth will continue to warm even when we stop CO2 emissions due to thermal inertia of heated oceans. Surprisingly for many, this is not the case. The IPCC reiterate what is a long known effect, regarding what they term the Zero Emissions Commitment:
“The ZEC from past CO2 emissions is small because the continued warming effect from ocean thermal inertia is approximately balanced by declining radiative forcing due to CO2 uptake by the ocean … Thus, although present-day CO2-induced warming is irreversible on millennial time scales … past CO2 emissions do not commit substantial further warming”
(2) This excludes clouds, and the effect of clouds at lower and higher levels can, for this simple example, be regarded as cancelling each other out in terms of warming and cooling. The water vapour in the atmosphere referred to here is not condensed into droplets but is a gas, transparent to the human eye, yet like carbon dioxide a strong absorber of infra-red. Because carbon dioxide is a non-condensing gas, but water does condense, it is the concentration of carbon dioxide that is the ‘control knob’ when it comes to their combined warming effect. In 1905, T.C. Chamberlin, writing to Charles Abbott, eloquently explained the feedback role of water vapour, and the controlling power of carbon dioxide:
“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”
(3) Strictly, it is a ‘black body’ – that absorbs (and emits) energy at all frequencies – that obeys Stefan’s Law. When using the law, we express T in Kelvin units. To a reasonable approximation, we can treat the Earth as a black body for a back of the envelope calculation, and we find that without carbon dioxide in the atmosphere, the Earth – at its distance from the sun – would be 258K, or -15oC on average, a frozen world. That would be 30oC colder than our current, or pre-industrial, average of 15oC.
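The back-of-the-envelope figure in Note 3 can be checked in a few lines (a sketch assuming the standard Stefan–Boltzmann constant and solar constant; the answer depends on how much reflected sunlight one includes):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0    # solar constant at Earth's distance, W m^-2

def effective_temperature(albedo):
    """Temperature at which blackbody emission balances absorbed sunlight.

    Absorbed = S * (1 - a) / 4 (a sphere intercepts a disc of sunlight);
    emitted = sigma * T^4; equate the two and solve for T.
    """
    return ((SOLAR * (1 - albedo)) / (4 * SIGMA)) ** 0.25

print(f"No reflection: {effective_temperature(0.0):.0f} K")
print(f"Albedo 0.3:    {effective_temperature(0.3):.0f} K")
```

With no reflection this gives about 278 K, and with Earth’s albedo of roughly 0.3 about 255 K; the back-of-the-envelope figure quoted above sits within this range.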
(4) John Tyndall originated this analogy in his memoir Contributions to Molecular Physics in the Domain of Radiant Heat, published in 1872, although he used the example of a stream and a dam that is raised; my exposition is essentially based on his precedent.
(5) One other aspect of this re-established equilibrium is that the so-called ‘Top of Atmosphere’ (TOA) – where the energy out, in the form of infra-red, balances the energy in – is at a higher altitude. The more carbon dioxide we add, the higher this TOA. Professor Pierrehumbert explains it in this Youtube exposition, from the film Thin Ice, where he pulls in a few other aspects of the warming process as it works on planet Earth (e.g. convection).
I keep hearing this meme that goes along the lines of “a Google search will use X amount of energy”, where X is often stated in a form of a scary number.
I think numbers are important.
According to one source a Google search is about 0.0003 kWh of energy, whereas a 3kW kettle running for one minute uses 3 x (1/60) = 1/20 = 0.05 kWh, which is over 160 times as much (another piece uses an equivalent figure – Note 1).
On the UK grid, with a carbon intensity of approximately 300 gCO2/kWh (and falling) that would equate to 0.09 gCO2 or roughly 0.1 gCO2 per search. On a more carbon intensive grid it could be double this, so giving 0.2 gCO2 per search, which is the figure Google provided in response to The Sunday Times article by MIT graduate Alex Wissner-Gross (cited here), who had estimated 7 gCO2 per search.
If the average Brit does the equivalent of 100 searches a day, that would be:
100 x 0.0003 kWh = 0.03 kWh, whereas according to Prof. Mackay, our total energy use (including all forms) is 125 kWh per person per day in UK, over 4,000 times more.
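The arithmetic above can be checked in a few lines (all figures as cited in the text; the 100 searches per day is the illustrative number used above):

```python
search_kwh = 0.0003          # energy per Google search (cited figure)
kettle_kwh = 3.0 * (1 / 60)  # 3 kW kettle running for one minute

searches_per_day = 100
daily_search_kwh = searches_per_day * search_kwh
uk_daily_kwh_per_person = 125  # Prof. MacKay's all-forms figure for the UK

print(f"One kettle-minute vs one search: {kettle_kwh / search_kwh:.0f}x")
print(f"100 searches/day = {daily_search_kwh:.2f} kWh; total personal use is "
      f"{uk_daily_kwh_per_person / daily_search_kwh:.0f}x larger")
```

The ratios come out at roughly 170 and 4,000 respectively, matching the rounded figures in the text.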
But that is not to say that the total energy used by Google is trivial.
According to a Statista article, Google used over 10 terawatt-hours (TWh) globally in 2018 (10 TWh = 10,000,000,000 kWh), a huge number, yes.
But the IEA reports that the world used 23,000 TWh of electricity in 2018. So Google would represent about 0.04% of the world’s electricity on that basis, a not insignificant number, but hardly a priority when compared to electricity generation, transport, heating, food and forests. Of course, the internet is more than simply searches – we have data analysis, routers, databases, web sites, and much more. Forbes published findings from …
A new report from the Department of Energy’s Lawrence Berkeley National Laboratory figures that those data centers use an enormous amount of energy — some 70 billion kilowatt hours per year. That amounts to 1.8% of total American electricity consumption.
Other estimates indicate a rising percentage now in the low few percentage points, rivalling aviation. So I do not trivialise the impact of the internet overall as one ‘sector’ that needs to address its carbon footprint.
However, the question naturally arises, regarding the internet as a whole:
how much energy does it save, not travelling to a library, using remote conferencing, Facebooking family across the world rather than flying, etc., compared to the energy it uses?
If in future it enables us to have smarter transport systems, smart grids, smart heating, and so on, it could radically increase the efficiency of our energy use across all sectors. Of course, we would want it used in that way, rather than as a ‘trivial’ additional form of energy usage (e.g. hosting of virtual reality game).
It is by no means clear that the ‘balance sheet’ makes the internet a foe rather than friend to the planet.
Used wisely, the internet can be a great friend: if it stops us using planes, stops us over-heating our homes, helps optimise public transport use, and so forth. This is not techno-fetishism, but the wise use of technology alongside the behavioural changes needed to find climate solutions. Technology alone is not the solution; solutions must be people-centred.
Currently, the internet’s own energy use is a sideshow when compared to the other things we do.
Stay focused people.
Time is short.
(c) Richard W. Erskine, 2019
Note 1
I have discovered that messing about with ‘units’ can cause confusion. So here is an explainer. The cited article uses a figure of 0.3 Watt hours, or 0.3 Wh for short. The more commonly used unit of energy consumption is kilo Watt hours or kWh. As 1000 Wh = 1 kWh, so it remains true if we divide both sides by 1000: 1 Wh = 0.001 kWh. And one small step means 0.1 Wh = 0.0001 kWh. Hence, 0.3 Wh = 0.0003 kWh. If you don’t spot the ‘k’ things do get mighty confusing!
Boris Johnson is no longer PM of the UK, but fusion disinformation continues unabated.
The UK Government’s Spring 2022 Public Attitudes Survey includes questions on fusion energy, as though it were in any way relevant to our current energy needs, or even those of the near future; this is weird. 16% claim to know ‘a lot’ or ‘a fair amount’ about fusion energy, yet 48% ‘strongly support’ or ‘support’ it. It seems that supporting things one doesn’t have a clue about is the new politics!
Needless to say, the same survey shows overwhelming support for renewables: 85% support or strongly support. 79% agree that ‘It’s important that renewable energy developments provide direct benefit to the communities in which they are located’. This is another aspect of renewables that contrasts so greatly with fusion: they can be deployed at local, regional and national scale, and so are not in the hands of a few. They are also understood by the populace. Energy democracy is key to a just transition in energy policy and implementation.
I have not changed the essay as I still stand by every word, but I have added two rather good and accessible resources produced by two physicists who have critiqued the hype on fusion energy: Michael de Podesta and Sabine Hossenfelder, whose twitter handles are @Protons4B and @skdh respectively, and very much worth a follow. They blow a hole in the fusion hype.
———————————————————————-
I mean it, it is the future.
Or rather, to be accurate, it could be the future.
In the core of the sun, the energy production is very slow, thankfully, so the beast lasts a long time. You need about 10,000,000,000,000,000,000,000,000,000,000 collisions between hydrogen nuclei before you get 1 that successfully fuses, and releases all that energy.
Beating those odds in a man-made magnetic plasma container (such as a Tokamak) is proving to be something that will be done by tomorrow, plus 50 years (and repeat).
Boris Johnson obviously believes that the way to show a flourish of leadership is to channel dreams of technical wizardry that goes well beyond the briefings from those experts in the know.
But who believes in experts in magneto-hydrodynamics? Stop over complicating the story you naysayer PhDs. Positive mental attitude will confound physics! Get back in your box experts!
*CUT TO REAL WORLD*
Man-made fusion energy as an answer to the man-made climate emergency by 2040 is not just ignorant, it is a deliberate and cynical attempt to delay action now. It is a form of techno-fetishism that deniers love. Boris Johnson spends a lot of time with these people.
We have relevant solutions available today, and just need to get on with them.
We do indeed have a functionally infinite fusion energy generator available to humanity, and it is free.
It’s called ‘The Sun’ (an astronomical entity, not a rag masquerading as a newspaper).
If man-made fusion energy is commercialised it *MAY BE* relevant to a world *POST* resolving the climate crisis, but is definitely not part, or even maybe part, of that resolution.
Please politicians – left, right and centre – stop playing games and take the climate emergency seriously.
It may surprise you that while Boris’s cult following will swallow anything (almost literally), the rest, and particularly the rising youth, will not.
But I am prepared to compromise. A deal is possible.
Fusion is indeed the future …
… it is the energy from the Sun!
And you might be surprised to hear that it gives rise to …
direct Photovoltaic (PV) capture of that energy,
and indirect forms of capture (e.g. wind energy).
Problem solved.
As to man-made fusion, the jury is out (and a distraction for now), and we don’t have time to wait for the verdict.
Resources
A brilliant video talk by Dr Sabine Hossenfelder exposes the key dishonesty in all the reports of fusion energy success: the failure to distinguish between the energy gain of the plasma itself (fusion energy out versus heating energy put in) and the overall, end-to-end energy payback achievable from an electricity generation plant powered by a fusion reactor. See it here https://youtu.be/LJ4W1g-6JiY
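Hossenfelder's point can be sketched with some simple bookkeeping. The numbers below are hypothetical, chosen only to illustrate the arithmetic, not measured values from any real reactor: even a plasma that returns more fusion energy than was deposited in it can leave the plant as a whole a net consumer of electricity, once the inefficiency of the heating systems and of converting fusion heat back into electricity are included.

```python
# Illustrative sketch: plasma gain (Q_plasma) vs end-to-end plant gain.
# All numbers are hypothetical, chosen only to show how the bookkeeping works.

def plant_gain(q_plasma, heater_efficiency, thermal_to_electric):
    """End-to-end electricity out per unit of electricity drawn from the grid.

    q_plasma           : fusion power out / heating power delivered to plasma
    heater_efficiency  : heating power delivered / electricity drawn by heaters
    thermal_to_electric: electricity generated / fusion heat captured
    """
    return q_plasma * heater_efficiency * thermal_to_electric

# A headline "Q = 1.5" (more fusion energy out of the plasma than went in)...
q_end_to_end = plant_gain(q_plasma=1.5,
                          heater_efficiency=0.4,    # losses in lasers/RF heating
                          thermal_to_electric=0.4)  # steam-cycle conversion
print(q_end_to_end)  # 0.24 -- the plant consumes ~4x the electricity it makes
```

So a press release celebrating "energy gain" in the plasma can coexist with a plant that, end to end, is nowhere near breaking even.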
A much loved Radio 4 science programme, ‘The Curious Cases of Rutherford & Fry’, has done some great episodes. I particularly liked one on the properties of water and its role in biological processes. However, it flopped badly on fusion energy, channelling the same old hype that journalists report in hushed and unquestioning tones. Dr Michael de Podesta has written a strong critique of this episode on his blog Protons for Breakfast: ‘Fusion is a failure’, 21st September 2022. See https://protonsforbreakfast.wordpress.com/2022/09/21/fusion-is-a-failure/
(c) Richard W. Erskine. 2019 (Resources added Sept. 2022)
Chris Wilde, Managing Director of Yorkshire Energy Systems (YES), gave a talk Renewable Technologies: Facts, Fiction and Current Developments on 5th September 2019 at The Arkell Centre in Nailsworth, hosted by Nailsworth Climate Action Town (NCAT). The focus was on domestic renewables in UK.
Chris exploded many myths and misunderstandings that even some supporters of renewables believe in. The audience included an influential range of people: from the national political level to district and parish councillors, from Transition Stroud, local climate groups, Severn Wye Energy Agency, and local renewable energy businesses. It was an excellent talk and very well received.
I will be sharing a fuller record of the talk, but to briefly summarise his words that accompanied the pictures used in the talk, using my notes …
Whereas 5 years ago, or even 6 months ago, the majority of householders installing renewables were doing it simply for financial reasons, rather than to reduce their carbon footprint, that has now changed, and about half of those now doing it are motivated by concerns about global warming. Greta Thunberg and Extinction Rebellion can take a lot of credit for raising awareness.
Chris showed an aerial view of a large 110 kW (kilowatt, a unit of ‘power’) solar PV system YES did for a company close to Wembley Stadium. What is shocking is that there are huge areas of commercial roof space without solar surrounding this installation. As Chris said, it shouldn’t be a question of seeking permission to have solar – particularly on new homes or new commercial buildings – it should be required that they do have solar, and it is much cheaper to do it at build time than to retrofit later (“solar” will be used as shorthand for solar photovoltaic (PV) in the text below):
Solar Myths
Myth 1 – Solar is ugly. Leaving aside the point that saving the planet might be seen as more important than the aesthetics of roof lines, the fact is solar panels have been getting slicker and more aesthetic. It is now possible to replace tiles completely with in-roof panels.
Myth 2 – You can only have 4kW on your installation. No, you can have up to 4kW per phase before needing permission from the grid (kW here means kWp, the peak kW power achievable).
Myth 3 – You cannot have solar without a south-facing roof. Actually, the variation in output from west- or east-facing panels, versus south-facing ones, can be as little as 15%, and in fact having east- and west-facing panels can be better for households needing more energy in the morning and afternoon. On flat roofs, you can pack east and west panels more tightly (because less spacing is then required to deal with shadowing effects), and this completely compensates for not being south-facing.
Myth 4 – We don’t have a roof that is not shaded, so pointless. Ok, but there are other options, such as ground mounted arrays, or a tracking system like Heliomotion (which has a UK base in Stroud). Chris also showed arrays mounted high enough for sheep to graze under; and there is even a trend now to place solar on top of parking bays. There are simply so many ways of having solar fitted, there are no excuses for not doing it!
Myth 5 – The Feed-In Tariff (FIT) has ended, so it cannot be made to work financially. This is wrong on several levels.
Firstly, the sun’s energy is free.
Secondly, the price of solar panels has dropped while their performance has increased (output increased from 250W to 350W over 5 years).
Thirdly, it is true that FIT gave householders 40p per kWh (kilowatt-hour, a unit of ‘energy’) for all energy generated, whether exported to the grid or not, plus an extra 3p per kWh for the 50% of generation that is assumed to be exported to the grid. However, while there are now no FIT payments, utility companies will have to pay for what you export under the new Smart Export Guarantee (Octopus are already offering 5.5p per kWh even before the scheme comes in).
Fourthly, with a low cost ‘solar diversion switch’ any excess solar energy can be used to heat hot water, avoiding the need to export it to the grid (and by the way, this simple device has essentially killed the ‘solar thermal’ market).
Fifthly, systems that were costing between £3,000 and £4,000 per kW are now down to £1,000 per kW. So, in short, payback on a solar system is still possible within 6-7 years, even without the FIT subsidy.
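These points can be pulled together into a rough payback calculator. A sketch using the figures above (about £1,000 per kW installed, grid electricity at 13p/kWh, export at 5.5p/kWh); the UK yield of roughly 900 kWh per kWp per year and the 50% self-use fraction are my own illustrative assumptions, and both vary considerably by site and household:

```python
# Rough solar payback calculator. Prices are from the talk; the yield and
# self-use fraction are illustrative assumptions, not quoted figures.

def payback_years(kw, cost_per_kw=1000.0, yield_kwh_per_kwp=900.0,
                  self_use_fraction=0.5, import_price=0.13, export_price=0.055):
    capital = kw * cost_per_kw
    generated = kw * yield_kwh_per_kwp
    annual_saving = (generated * self_use_fraction * import_price +
                     generated * (1 - self_use_fraction) * export_price)
    return capital / annual_saving

print(round(payback_years(4.0), 1))  # simple payback for a 4 kW system
```

With these deliberately cautious assumptions, the simple payback comes out at around 12 years; reaching the 6-7 years quoted implies a higher self-consumption fraction (for example via a solar diversion switch) and/or rising electricity prices.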
Finally, the reduction in bureaucracy with the loss of FIT means that it actually might, paradoxically, accelerate uptake of solar.
Heat Pump Myths
Chris started by explaining how heat pumps work, which seems miraculous to many people but is the product of 17th-century physics: if you compress a gas, it gets hotter. A heat pump works by transferring heat from the air (or ground) via a fluid (a refrigerant) that is compressed and then releases its heat inside the building. For each unit of energy used by the pump, 3 to 4 units of energy are extracted from the air in the form of heat. The two main categories of heat pump are Air Sourced Heat Pumps (ASHP) and Ground Sourced Heat Pumps (GSHP). The efficiency of a heat pump will vary with external temperature, but overall is quoted as a seasonally averaged figure.
Assume you had an ASHP with a 3.5 efficiency factor. If you have a heating requirement of 18,000 kWh for your home, this could be met using 18,000/3.5 = 5,143 kWh of electricity. Mains gas is currently 3p per kWh and mains electricity is 13p per kWh, so to heat the house with gas would cost 18,000 x £0.03 = £540 per year, whereas to do it with this ASHP would cost 5,143 x £0.13 = £669; still a bit more than gas, because gas is currently ridiculously cheap, but there are a few things to consider:
when a crisis occurs in the Middle East for example, gas prices can rise, and don’t have to swing much to wipe out the current distorted advantage of cheap gas;
a tax on carbon including gas, will come sooner or later to reflect the damage that carbon dioxide emissions are doing;
even if today some electricity is coming from fossil fuel plants, increasingly the grid is being ‘greened up’ (see www.carbonintensity.org to look at how much the grid has already greened);
as you will see below, if you add solar to a heat pump the maths flips, because you can use the free solar electricity to help drive the heat pump and even if that is not all year round, 24-7, it has made a huge difference;
finally, if you cannot add solar to your heat pump for some reason, many people are prepared to pay an extra £100 or so per year to save the planet (that is clear from the recent boost in heat pump installations YES have been seeing).
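The gas-versus-heat-pump arithmetic above is easy to reproduce. A minimal sketch using the talk's figures (18,000 kWh heat demand, seasonal efficiency of 3.5, gas at 3p/kWh, electricity at 13p/kWh):

```python
# Annual heating running cost: gas boiler vs air-source heat pump (ASHP),
# using the illustrative prices quoted in the talk.

HEAT_DEMAND_KWH = 18_000   # annual space + water heating requirement
GAS_PRICE = 0.03           # £ per kWh
ELEC_PRICE = 0.13          # £ per kWh
ASHP_SCOP = 3.5            # seasonal efficiency: heat out per electricity in

gas_cost = HEAT_DEMAND_KWH * GAS_PRICE        # £540 per year
ashp_elec_kwh = HEAT_DEMAND_KWH / ASHP_SCOP   # ~5,143 kWh of electricity
ashp_cost = ashp_elec_kwh * ELEC_PRICE        # ~£669 per year

print(f"gas: £{gas_cost:.0f}, heat pump: £{ashp_cost:.0f}")
```

Changing the price inputs shows how sensitive the comparison is: a modest rise in gas prices, or a carbon tax on gas, flips the result in the heat pump's favour.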
One other key point: heating a house using a heat pump requires sufficiently large radiators, because it operates at a flow temperature of 45-50°C rather than the 70°C typical of a gas boiler. At 45-50°C the radiators still heat the house to the required temperature (typically 21°C), but do so with a larger surface area of ‘emitter’ (effectively a slightly fatter radiator). Depending on how old the heating system in a house is, some radiators may need to be upgraded, but rarely all of them; better still, under-floor heating can be used, increasing the area even more.
Myth 6 – It cannot work when it is cold outside. Yes it can, as described. It is basic physics at work, and no magic is involved!
Myth 7 – They are more expensive than a gas boiler, so are unaffordable. Heat pumps are more expensive to fit but the Renewable Heat Incentive (RHI) was designed precisely to deal with this. It is paid to the householder over 7 years (and commercially over 20 years), reducing running costs and overall, paying off half to two-thirds of the cost of the installation. To qualify for RHI, the key requirement is roof insulation, and if you have cavity walls, then cavity wall insulation.
Myth 8 – They cannot work in old leaky houses. Untrue. Chris presented an example of an old rectory with 290 square metre floor area, that had good roof insulation but with walls that could not be clad, and overall it was a high heat loss building. It cost £3,500 per year using an oil boiler to heat it. Using a brilliantly effective combination of a 10kW solar array and 6 under lawn ‘slinkies’ to feed a GSHP, the heating bill dropped to £1,500 per year.
That is despite the heating system being set to ‘on’ all the time (though of course, with a thermostat, it runs only when the temperature drops below the required level). The 80-year-old grandmother loves visiting the house now because “it is always so cosy”. Chris is not saying, from this experience, that insulation is unimportant – it is crucial to get good insulation – but where it is not up to modern standards, don’t let that be a reason for not installing renewable heat: that is, a heat pump with or without solar, but preferably with, because the solar reduces the amount of electricity drawn from the grid and swings the maths in favour of heat pumps (versus gas).
Chris gave another example of a bungalow (177 square metres of floor area) that was costing £1,551 per year to heat. With just a 4 kW roof-mounted solar system and a 14 kW ASHP, the bill came down to £903 – £168 more in savings than they had expected. Why? Chris believes this is down to behavioural change. Traditional gas systems can heat a house fast, so people turn the system up when cold and down when hot, creating a see-saw effect; with heat pump systems, people can just leave it on and be comfy at a sensible temperature of their preference. Increasingly, Chris is persuading householders to refrain from fiddling with the heat controls and to allow the system to work as pre-programmed, providing consistent, comfortable, but not hotter-than-required heating. This changes behaviour and actually creates a perception of a cosier home and reduced bills; what is not to like?
The caveat is that we need more skilled fitters who do not put in the wrong sized radiators, or pipe work, and of course householders who don’t leave doors open (trying to heat your local town is not a sensible approach!).
Renewable technologies like solar and heat pumps are not rocket science, but a basic knowledge is required and vendors are very good at providing training. Along with persuading householders to take the plunge we also need to transfer trade skill sets, to acquire the knowledge and experience to help increase adoption. If your plumber says they don’t know anything about heat pumps, encourage them to take a course – to unlearn some old ways and learn some new ways – and they might be in the vanguard of the change to renewable heat in your neighbourhood.
Chris also mentioned that he has found an issue related to Energy Performance Certificates (EPCs). The question Chris is asking Government is this:
Why is it government policy to encourage the installation of heat pumps through the Renewable Heat Incentive scheme, yet EPCs never recommend them, and even discourage them by predicting higher running costs for heat pumps than even old oil boilers, contrary to the research carried out by the government in 2013 on which the RHI was based? Does the left hand not know what the right hand is doing?
Chris covered a number of other points and new developments such as thermal storage, but I hope this summary does justice to what was an excellent and inspiring talk.
We have a climate emergency – we need to start behaving like we actually believe it!
So let’s get to work, and make it happen! There is no excuse for not doing so.
This summary of Chris Wilde’s talk is based on my notes, so will be incomplete, as Chris is a brilliant speaker who doesn’t need a script or use bullet points. So, if any errors have crept in, naturally they are mine. Richard Erskine, 7th Sept. 2019. Any comments please provide via my blog.
Heat Pumps, whether Air-Sourced or Ground-Sourced, can and should be making a major contribution to decarbonising heating in the UK. Heating (both space heating and water heating) is a major contributor to our carbon footprint.
Heat pumps are now incredibly efficient – for every 1 unit of electrical energy you put in, you can get at least 3 units back in the form of heat energy (the pump compresses a refrigerant, which causes its temperature to rise; centuries-old physics at work here). The process works sufficiently well even in UK winters.
The pumps are now relatively quiet (think microwave level of noise). They can deliver good payback (even more so if there was a cost on carbon). They even work with older properties (countering another one of the many myths surrounding heat pumps).
I even heard Paul Lewis on BBC’s ‘Money Box’ (Radio 4) – clearly getting confused between heat pumps and geothermal energy – saying ‘oh, but you need to be in a certain part of the country to use them’ (or words to that effect).
We clearly need much more education out there to raise awareness of the potential of heat pumps.
When combined with solar (to provide some of the electricity), they are even better.
So why is the take-up of heat pumps still too slow? Why is the Government not pushing them like crazy (it is an emergency, right?)? Why are households, when replacing old boilers, still opting for gas?
When we had the AIDS crisis in the 1980s, the UK Government undertook a major health awareness campaign, as did other countries, and it largely succeeded. In an emergency, Governments tend to act in a way that ‘signals’ it is an emergency.
The UK Government is sending no such signals. Bland assurances about the commitment to reach net zero by 2050 are not a substitute for action. In the arena of heat, where is the massive programme to up-skill plumbers and others? Where is the education programme to demystify heat pumps and promote their adoption?
And where is the joined up thinking?
This article below from Yorkshire Energy Systems, based on their extensive research and practical experience, suggests one reason – that EPCs (Energy Performance Certificates) issued for homes and including recommended solutions – are biased against heat pumps.
The mismatch between what the Government is saying (that heat pumps are part of the decarbonisation solution) and what EPCs are advising suggests a clear lack of joined up thinking.
… and no sign that the Government really believes that urgent action is required.
Crossrail, prior to the announcement of delays and overspend, was being lauded as an exemplar of an on-time, on-budget complex project; a real feather in the cap for British engineering. There were documentaries celebrating the amazing care with which the tunnelling was done to avoid damage at the surface, using precise monitoring and accurately positioned webs of hydraulic grouting to stabilise the ground beneath buildings. Even big data was used to help interpret signals received from a 3D array of monitoring stations, to help actively manage operations during tunnelling and construction. A truly awesome example of advanced engineering, on an epic scale.
The post-mortem has not yet been done on why the delays came so suddenly upon the project, although the finger is being pointed not at the physical construction, but the digital one. To operate the rail service there must be advanced control systems in place, and to ensure these operate safely, a huge number of tests need to be carried out ‘virtually’ in the first instance, to ensure safety is not compromised.
Software is something that the senior management of traditional engineering companies are uncomfortable with; in the old days you could hit a machine with a hammer, but not a virtual machine. They knew intuitively if someone told them nonsense within their chosen engineering discipline; for example, if a junior engineer planned to pour 1000 cubic metres of cement into a hole and believed it would be set in the morning. But if told that testing of a software sub-system will take 15 days, they wouldn’t have a clue as to whether this was realistic or not; they might even ask “can we push to get this done in 10 days?”.
In the world of software, when budgets and timelines press, the most dangerous word used in projects is ‘hope’. “We hope to be finished by the end of the month”; “we hope to have that bug fixed soon”; and so on. Testing is often the first victim of pressurised plans. Junior staff say “we hope to finish”, but by the time the message rises up through the management hierarchy to Board level, there is a confident “we will be finished” inserted into the PowerPoint. Anyone asking tough questions might be seen as slowing the project down when progress needs to be demonstrated.
You can blame the poor (software) engineer, but the real fault lies with the incurious senior management who seem to request an answer they want, rather than try to understand the reality on the ground.
The investigations of the Boeing 737 Max tragedy are also unresolved, but of course, everyone is focusing on the narrow question of the technical design issue related to a critical new feature. There is a much bigger issue at work here.
Arguably, Airbus pursued the ‘fly by wire’ approach much earlier than Boeing, whose culture has tended to resist over-automation of piloting. Active controls to overcome adverse events have now become part of the design of many modern aircraft, but the issue with the Boeing 737 Max seems to have been that this came along without much in the way of training; and the interaction between the automated controls and the human controls is at the heart of the problem. Was there also a lack of realistic human-centric testing to assess the safety of the combined automated/human control systems? We will no doubt learn this in due course.
Electronics is of course not new to the aerospace industry, but programmable software has grown in importance, and it seems that growing complexity – and the consequent growth in testing complexity – has perhaps overtaken the abilities of traditional engineering management systems. This is extending to almost every product or project, small and large, as the internet of everything emerges.
This takes me to a scribbled diagram I found in an old notebook – made on a train back in 2014, travelling to London, while I debated the issue of product complexity with a project director for a major engineering project. I have turned this into the Figure below.
There are two aspects of complexity identified for products:
Firstly, the ‘design complexity’, which can be thought of as the number of components making up the product, but also the configurability and connectivity of those components. If printed on paper, think of how high the pile would be that identified every component, with a description of its configuration and connections. This applies to the physical aspects but also to the software; and all the implied test cases. There is a rapid escalation in complexity as we move from car to airliner to military platform.
Secondly, the ‘production automation complexity’, which represents the level of automation involved in delivering the required products. Cars as they have become, are seen as having the highest level of production automation complexity.
You can order a specific build of car, with desired ‘extras’, and colour, and then later see it travelling down the assembly line with over 50% of the tasks completely automated; the resulting product with potentially a nearly unique selection of options chosen by you. It is at the pinnacle of production automation complexity but it also has a significant level of design complexity, albeit well short of others shown in the figure.
An aircraft carrier, by contrast, will in each case be significantly different from any other in existence (even when originally conceived as a copy of an existing model) – with changes being made even during its construction – so it does not score so high on ‘production automation complexity’. But in terms of ‘design complexity’ it is extremely high (there are only about 20 aircraft carriers in operation globally, and half of these are in the US Navy, which perhaps underlines the point).
As we add more software and greater automation, the complexity grows, and arguably, the physical frame of the product is the least complex part of the design or production process.
I wonder is there a gap between the actual complexity of the final products and an engineering culture that is still heavily weighted towards the physical elements – bonnet of a car, hull of a ship, turbine of a jet engine – and is this gap widening as the software elements grow in scope and ambition?
Government Ministers, like senior managers, will be happy being photographed next to the wing of a new model of airliner – and talk earnestly about workers riveting steel – but what may be more pivotal to success is some software sub-system buried deep in millions of lines of ‘code’; no photo opportunities here.
As we move from traditional linear ‘deterministic’ programming to non-deterministic algorithms – other questions arise about the increasing role of software.
Given incomplete, ambiguous or contradictory inputs the software must make a choice about how to act in real time. It may have to take a virtual vote between independently written algorithms. It cannot necessarily rely on supplementary data from external sources (“no, you are definitely nose diving not stalling!”), for system security reasons if not external data bandwidth reasons.
And so we continue to add further responsibility, onto the shoulders of the non-physical elements of the system.
Are Crossrail and the 737 Max representative of a widening gap, reflected in an inability of existing management structures to manage the complexity and associated risks of the software embedded in complex engineering products and projects?
Rather like the myth that carrots helped RAF pilots see at night during WWII – a story so good that even today it is repeated and believed – the idea that changes in the Sun’s output are responsible for recent climate change is a similarly attractive myth, which keeps on being repeated.
The BBC had to apologise for Quentin Letts’ execrable hatchet job on the Met Office in 2015, which also gave airtime to Piers Corbyn.
The truth is that we know with a confidence unsurpassed in many fields of science what is causing global warming; it’s not the sun, it’s not volcanoes; it’s not contrails. The IPCC’s 5th Assessment Report (2013) was clear that greenhouse gases (principally carbon dioxide) resulting from human activities are the overwhelming driver of global warming (see Figure 8.15)
So you might expect Boris Johnson as a leading politician, to reference the IPCC (Intergovernmental Panel on Climate Change), which gathers, analyses and synthesises the published work of thousands of scientists with relevant expertise on behalf of the nations of the world.
Instead, he has referred to the “great physicist and meteorologist Piers Corbyn” (It’s snowing, and it really feels like the start of a mini ice age, Boris Johnson, Daily Telegraph, 20th January 2013). Piers Corbyn has no expertise in climate science and theories like his have been completely debunked in a paper published in the Proceedings of The Royal Society:
… the long-term changes in solar outputs, which have been postulated as drivers of climate change, have been in the direction opposite to that required to explain, or even contribute to, the observed rise in Earth’s global mean air surface temperature (GMAST) …
What is alarming is that in the face of this strong scientific evidence, some Internet sources with otherwise good reputations for accurate reporting can still give credence to ideas that are of no scientific merit. These are then readily relayed by other irresponsible parts of the media, and the public gain a fully incorrect impression of the status of the scientific debate.
So, for Boris Johnson to call himself an “empiricist” is, frankly, laughable.
He has also cozied up to neoliberal ‘think tanks’ implacably opposed to action on global warming.
I think we can safely say that hitherto he has firmly placed himself in the DENIAL bucket (in the illustration below).
He shares this perspective with other hard Brexiteers in the new Cabinet, who are itching to deregulate the UK economy, such as Jacob Rees-Mogg, and see action on global warming as a constraint on unregulated markets.
In his acceptance speech on becoming Prime Minister, Boris Johnson never mentioned climate change. But since then he has reiterated Theresa May’s Government’s commitment to net zero by 2050, and
Responding to concerns expressed by Shadow Treasury Minister Anneliese Dodds that he had not focused sufficiently on climate change in the initial statements outlining his priorities as Prime Minister, Johnson replied: “The House will know that we place the climate change agenda at the absolute core of what we are doing.”
He said: “This party believes in the private sector-generated technology which will make that target attainable and deliver hundreds of thousands of jobs. That is the approach we should follow.” …
Predicting that the UK will “no longer” be contributing to climate change by 2050, Johnson said: “We will have led the world in delivering that net-zero target. We will be the home of electric vehicles—cars and even planes—powered by British-made battery technology, which is being developed right here, right now.”
By imagining that industry alone (without any stated plans for an escalating tax on carbon), can somehow address the huge transformation required, on the timescale required, without concerted effort at every level of Government (top down and bottom up), and civil society, he remains disconnected from reality, let alone science.
Moving from DENIAL to COMPLACENCY is an advance for Boris – assuming for the moment this is not another flip-flopping of positions that he is famed for – but it is hardly the sign of the climate leadership required. We need a leadership that respects the science, and understands the policy implications and prescriptions required.
They need to, because great words need to be turned into a plan of action, and every year we delay will make the transition more painful (it is already going to be painful enough, but they are not telling you that, are they?).
That will not be enough to meet the public’s concerns over the climate emergency, and increasingly, the public will be expecting leadership that has moved from COMPLACENCY to the URGENCY position.
Many see GREEN RADICALISM as now an unavoidable response to the COMPLACENCY in Whitehall.
If Boris Johnson fails to jettison his neoliberal friends and the crank science that is part of their tool-kit – they are trying, and have so far succeeded, to put the brakes on meaningful and urgent action – the longer-term political fall-out will make Brexit look like a tea party.
(c) Richard W. Erskine, essaysconcerning.com, July 2019
These past two weeks have been such a momentous time for climate change in the UK it is hard to take in. My takes:
On 21st April, Polly Higgins, the lawyer who has spent a decade working towards establishing ecocide as a crime under international law, sadly died. At a meeting at Hawkwood Centre, Stroud, I heard the inspiring Gail Bradbrook speak of how Polly had given her strength in the formation of Extinction Rebellion.
On 30th April, Extinction Rebellion met with the Environment Secretary Michael Gove, a small step but one that reflects the pressure that their actions (widely supported in the country) are having. Clare Farrell said the meeting “.. was less shit than I thought it would be, but only mildly”, but it’s a start.
On 1st May, the UK’s Parliament has declared a climate emergency
These are turbulent times. Emotions are stirring. Expectations are high. There is hope, but also fear.
The debate is now raging amongst advocates for climate action about whether the report from the Committee on Climate Change (CCC) is adequate.
Let’s step back a moment.
The IPCC introduced the idea of a ‘carbon budget’, and this is typically expressed in a form such as (see Note):
“we have an X% chance of avoiding a global mean surface temperature rise of Y degrees centigrade if our emissions pathway keeps carbon emissions below Z billion tonnes”
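The budget framing lends itself to very simple arithmetic. A sketch, where both figures are round illustrative numbers I have assumed for the example (not official IPCC values): a remaining budget of 420 billion tonnes of CO2 against global emissions of roughly 42 billion tonnes per year.

```python
# Carbon budget arithmetic: how long until a remaining budget Z is used up
# at the current emissions rate? Figures are illustrative round numbers.

REMAINING_BUDGET_GTCO2 = 420.0   # assumed remaining budget (illustrative)
ANNUAL_EMISSIONS_GTCO2 = 42.0    # roughly today's global CO2 emissions

years_left = REMAINING_BUDGET_GTCO2 / ANNUAL_EMISSIONS_GTCO2
print(years_left)  # 10.0 -- about a decade, if emissions stay flat
```

The brutal simplicity of this division is the point: whatever the exact budget, flat emissions burn through it at a fixed, calculable rate.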
The IPCC Special 1.5C Report, looked at how soon we might get to 1.5C and the impacts of this compared to 2C. As Carbon Brief summarised it:
At current rates, human-caused warming is adding around 0.2C to global average temperatures every decade. This is the result of both “past and ongoing emissions”, the report notes.
If this rate continues, the report projects that global average warming “is likely to reach 1.5C between 2030 and 2052”
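The “2030 to 2052” range comes from simple linear extrapolation. Taking roughly 1.0C of warming by 2017 (my approximation of the report's reference level) and the central rate of 0.2C per decade gives a mid-range estimate; the faster and slower plausible rates give the earlier and later ends of the range. A sketch of the central case:

```python
# Linear extrapolation of warming to 1.5C, per the Carbon Brief summary.
# The ~1.0C starting level in 2017 is my approximation, for illustration.

START_YEAR = 2017
WARMING_SO_FAR = 1.0     # degrees C above pre-industrial (approx.)
RATE_PER_DECADE = 0.2    # current human-caused warming rate

years_to_1p5 = (1.5 - WARMING_SO_FAR) / (RATE_PER_DECADE / 10.0)
print(START_YEAR + years_to_1p5)  # ~2042, mid-range of "2030 to 2052"
```

Varying the rate between about 0.1C and 0.3C per decade stretches or compresses that 25-year horizon, which is where the uncertainty range comes from.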
Perhaps the most shocking and surprising aspect of this report was the difference in impacts between 1.5C and the hitherto international goal of 2C. The New York Times provided the most compelling, graphic summary of the change in impacts. Here are a few examples:
The percentage of the world’s population exposed to extreme heat jumps from 14% to 37%
Loss of insect species jumps from 6% to 18%
Coral reefs suffer “very frequent mass mortalities” in a 1.5C world, but “mostly disappear” in a 2C world.
So, in short, 1.5C is definitely worth fighting for.
In view of the potential to avoid these losses, it is not unreasonable for Extinction Rebellion and others to frame this as “we’ve got 12 years”. The IPCC says it could be as early as 12 years, but it might be as late as 34 years. What would the Precautionary Principle say?
Well, 12 years of course.
But the time needed to move from our current worldwide emissions to net zero is a steep cliff. You’ve all seen the graph.
It seems impossibly steep. It was a difficult but relatively gentle incline if we’d started 30 years ago. Even starting in 2000 was not so bad. Every year since, the descent has become steeper. It is now a precipice.
It is not unreasonable to suggest it is impossibly steep.
It is not unreasonable to suggest we blew it; we messed up.
We have a near impossible task to prevent 1.5C.
I’m angry about this. You should be too.
I am not angry with some scientists or some committee for telling me so. That’s like being angry with a doctor who says you need to lose weight. Who is to blame: the messenger? Maybe I should have listened when they told me 10 years back.
So if the CCC has come to the view that the UK at least can get to net zero by 2050 that is an advance – the original goal in the Act was an 80% reduction by 2050 and they are saying we can do better, we can make it a 100% reduction.
Is it adequate?
Well, how can it ever be adequate in the fundamental sense of preventing human-induced impacts from carbon emissions? They are already with us. Some thresholds are already crossed. Some locked-in additional warming is unavoidable.
Odds on, we will lose the Great Barrier Reef. Let’s not put that burden on a committee to do the impossible. We are all to blame for creating the precipice.
That makes me sad, furious, mournful, terrified, angry.
There is a saying that the best time to have started serious efforts to decarbonise the economy was 30 years ago, but the next best time is today.
Unfortunately, the CCC does not have access to a time machine.
Everyone is angry.
Some are angry at the CCC for not guaranteeing we stay below 1.5C, or even making it the central goal.
Extinction Rebellion tweeted:
The advice of @theCCCuk to the UK government is a betrayal of current & future generations made all the more shocking coming just hours after UK MPs passed a motion to declare an environment & climate emergency.
It is I think the target of 2050 that has angered activists. It should be remembered that 2050 was baked into the Climate Change Act (2008). It should be no surprise it features in the CCC’s latest report. The CCC is a statutory body. If we don’t like their terms of reference then it’s easy: we vote in a Government that will revise the 2008 Act. We haven’t yet achieved that.
Professor Julia Steinberger is no delayist (quite the opposite, she’s as radical as they come), and she has tweeted back as follows:
Ok, everyone, enough. I do need to get some work done around here.
(2) there is a lot of good stuff & hard work making the numbers work there.
(3) Figuring out what it means for various sectors, work, finance, education, training, our daily lives & cities & local authorities and so on is going to take some thinking through.
(4) If you want a faster target, fine! I do too! Can you do it without being horrid to the authors and researchers who’ve worked like maniacs to try to get this much figured out? THEY WANT TO BE ON YOUR SIDE!
(5) So read it, share it, reflect on it, and try to figure out what & how we can do a lot faster, and what & how we can accelerate the slower stuff.
Treat the CCC report as in reality an ambitious plan – it really is – in the face of the precipice, but also believe we can do better.
These two ideas are not mutually exclusive.
Maybe we do not believe that people can make the consumption changes that would make more ambitious goals possible, goals that politicians might struggle to deliver.
Yet communities might decide – to hell with it – we can do this. Yes, we can do better.
Some are scornful of Extinction Rebellion for asking the impossible, but they are right to press for better. However, can we stop the in-fighting that has undermined many important fights against dark forces in the past? Let’s not make that mistake again.
Can we all be a little more forgiving of each other, faced with our terrible situation?
We are between a rock and a hard place.
We should study the CCC report. Take it to our climate meetings in our towns, and halls, and discuss it.
How can we help deliver this?
How can we do even better?
I for one will be taking the CCC report to the next meeting of the climate action group I help run.
I’m still mournful.
I’m still angry.
But I am also a problem solver who wants to make a difference.
Good work CCC.
Good work XR.
We are all in this together.
… and we don’t have a time machine, so we look forward.
Let not the best be the enemy of the good.
Let not the good be a reason for not striving for better, even while the best is a ship that has long sailed.
You pick an X and Y, and the IPCC will tell you how much we can emit (Z). The ‘X%’ is translated into precisely defined usages of terms such as ‘unlikely’, ‘likely’, ‘very likely’, etc. When the IPCC says something is ‘likely’, it means it has a greater than 66% chance of happening.
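These calibrated terms can be captured in a small lookup table. The sketch below uses the standard IPCC probability thresholds; the dictionary and function names are my own:

```python
# IPCC calibrated likelihood terms mapped to probability ranges (per cent),
# following the standard published thresholds.
LIKELIHOOD = {
    "virtually certain": (99, 100),
    "very likely": (90, 100),
    "likely": (66, 100),
    "about as likely as not": (33, 66),
    "unlikely": (0, 33),
    "very unlikely": (0, 10),
    "exceptionally unlikely": (0, 1),
}

def describe(term):
    # Turn a calibrated term into a plain-English probability statement.
    lo, hi = LIKELIHOOD[term]
    return f"'{term}' means a {lo}-{hi}% chance"

print(describe("likely"))  # 'likely' means a 66-100% chance
```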
“CO2 is a powerful warming gas but there’s not a lot of it in the atmosphere – for every million particles of air, there are 410 of CO2.
The gas is helping to drive temperatures up around the world, but the comparatively low concentration means it is difficult to design efficient machines to remove it.
But a Canadian company, Carbon Engineering, believes it has found a solution.
Air is exposed to a chemical solution that concentrates the CO2. Further refinements mean the gas can be purified into a form that can be stored or utilised as a liquid fuel.”
The ‘magic bullet’ in the title is of course clickbait, because anyone who has spent any time looking at all the ways we need to reduce emissions or to draw down CO2 from the atmosphere will know that we need a wide range of solutions. There is no single ‘magic bullet’.
To remove the excess CO₂, sufficient at least to keep below 2°C …
“essentially we need to build an industry that’s 3 to 4 times the size of the current oil & gas industry just to clean up our waste” (2nd April 2019)
The issue is one of both scale and timing. We need big interventions and we need them fast (or fast enough).
It would take time and considerable resources to scale up NETs, which are mostly still in their development phase, so the immediate focus needs to be on other strategies – energy in the home, reduced consumption, rolling out renewables, changing diets, etc. – for which the solutions are ready and waiting and just need a massive push from Governments, industry and civil society.
Glen Peters stresses that the first priority is emissions reductions, rather than capture, although capture will be needed in due course either using natural methods, or technological ones, or some combination.
There are big questions hanging over NETs such as BECCS (Bio-Energy with Carbon Capture and Storage), which would require between 1 and 5 ‘Indias’ of land area to make the contribution needed. The continuing fertility of soils to grow plants for BECCS and competition for land-use for agriculture, are just two of the concerns raised.
The technology highlighted in the BBC piece is DAC (Direct Air Capture) which could – powered by renewables – have great potential and avoids land-use competition, but is energy intensive. As with BECCS, DAC used in sequestration mode would still need to overcome hurdles, such as the geological ones related to safely burying CO₂ in perpetuity (my emphasis).
My concerns with Carbon Engineering’s proposed application of DAC – for fuel to be used in transport – are as follows.
Firstly, road, rail, and even shipping are being electrified, making fuel redundant. There is the competing hydrogen economy that would use fuel, but a non-carbon-based one. Either way, this will rapidly decarbonise these parts of transport. Since transport currently accounts for 25% of global emissions overall, this is a highly significant ‘quick win’ for the planet (within 2 or at most 3 decades).
Commercial Aviation is 13% of transport’s carbon emissions, but is less easy to electrify – at the scale of airliners travelling long-distance – because of the current energy density and weight of batteries (this could change in the future, as Professor Clare Grey explained during an episode of The Life Scientific).
Aviation is therefore just above 3% of global emissions (13% of 25%) from all sectors (albeit probably an increasing percentage). A development-stage technology focused on just 3% of global emissions can hardly be framed as a ‘magic bullet’ for the climate crisis.
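The arithmetic behind that 3% figure is simple enough to spell out (variable names are illustrative):

```python
# Back-of-envelope: aviation's share of global emissions,
# using the figures quoted above.
transport_share = 0.25        # transport is ~25% of global emissions
aviation_of_transport = 0.13  # aviation is ~13% of transport's emissions

aviation_global = transport_share * aviation_of_transport
print(f"{aviation_global:.2%}")  # 3.25%
```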
Secondly, in terms of Government financing, would we focus it on decarbonising road, or decarbonising aviation? I suggest the former not the latter if it came down to a choice.
DAC may be great to invest some money in, as development-phase technology, but the big bucks needed immediately, to make a huge dent in emissions, are in areas such as the road sector.
It is not a binary choice of course, but the issue with financing is timing and scale again. The many solutions we forge ahead with now must meet the test that they are proven (not futurism/delayism solutions like nuclear fusion), can be scaled fast, and will contribute significantly to carbon reductions while also helping to transition society in positive ways (as, for example, the solutions in Project Drawdown offer, with numerous ‘co-benefits’).
Finally, it is worth stressing that the focus for Carbon Engineering (and hence the BBC report) is on the capture of carbon dioxide, to be converted into hydrocarbons as fuel, for burning. This effectively recycles atmospheric carbon. It neither adds to, nor takes away, carbon dioxide through this cycle.
This therefore makes zero change to CO₂ in the atmosphere. It might be whimsically called Carbon Capture and re-Emission technology (CCE)!
So I think it was wrong of the BBC piece to give the impression that the goal was ‘Carbon Capture and Storage’ (CCS), whose aim is to draw down CO₂.
If you say “I am cutting down on smoking” and it turns out that from 7,300 cigarettes per year over the last 10 years you have managed to reduce your consumption by 25 cigarettes per year over the last 4 years and now are at 7,200 per year, then yes, it is true, you are cutting down.
But are you being honest?
In fact, it is fair to say that, far from telling the truth, you are in a sense lying, or at least ‘dissembling’.
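The smoker’s arithmetic, spelled out (a toy calculation using the figures above):

```python
# The smoker's claim in numbers: a 25-per-year cut from 7,300 cigarettes
# is a reduction of about a third of one per cent per year.
start = 7300        # cigarettes per year (20 a day)
cut_per_year = 25
years = 4

now = start - cut_per_year * years      # where we are after 4 years
annual_reduction = cut_per_year / start # fractional cut per year

print(now, f"{annual_reduction:.2%}")  # 7200 0.34%
```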
That is what BP is doing with its latest massive ‘Possibilities Everywhere’ public relations and media advertising campaign, which was “jointly created by Ogilvy New York and Purple Strategies, with the films directed by Diego Contreras through Reset (US) and Academy (UK). The global media agency is Mindshare.”, as Campaign reports.
In a YouTube video on the initiative, Lightsource BP craftily suggests it is seriously investing in solar energy, but don’t worry folks, if the sun goes in we have plenty of gas as backup.
They want it both ways: claiming to be supporting renewables while continuing to push ahead with investments in fossil fuel discovery and production.
The on-going investments in upstream oil & gas development run into many billions of dollars annually, which rather dwarfs – by a factor of over 250 – the measly £300 million that Lightsource will be getting over three years.
This is not a serious push for renewables.
If they were serious they would have actual renewable energy generation (arising from their ‘investments’) as one of their Key Performance Indicators (KPIs) in their Annual Report. They don’t, because they don’t actually care, and they don’t expect their investors to care.
No, this is what BP cares about (from the same BP Annual Report) …
…. the value of their fossil fuel reserves. The more the better, because that has a huge influence on the share price.
In the Annual Report referenced above, BP states:
“Today, oil and gas account for almost 60% of all energy used. Even in a scenario that is consistent with the Paris goals of limiting warming to less than 2°C, oil and gas could provide around 40% of all energy used by 2040. So it’s essential that action is taken to reduce emissions from their production and use.
In a low carbon world, gas offers a much cleaner alternative to coal for power generation and a valuable back-up for renewables, for example when the sun and wind aren’t available. Gas also provides heat for industry and homes and fuel for trucks and ships.”
How do we decode this?
Well, what BP sees in a collapse of coal is a massive opportunity to grow oil & gas, but especially gas; they are not the only oil & gas company spotting the opportunity.
So they are not pushing energy storage for renewables; no, they are using intermittency as a messaging ploy to position gas as “a backup”. So while a fall from 60% to 40% might look like a fall in profits, for BP’s gas investments it is a growth business, and less renewables means more growth in that gas business. So don’t get too big for your boots, renewables – if we own you we can keep you in your place. Maybe you can rule when we have dug the last hole, but don’t expect that any time soon.
No amount of tinkering with emissions from production facilities or more efficient end-use consumption will avoid the conclusion that the “transition” they talk of must be a whole lot more urgent than the – dare I use the metaphor – glacial pace which BP are demonstrating.
Maybe BP should take seriously 3 key learning points:
Firstly, we have run out of time to keep playing these games. Your fossil fuel industry has to be placed on an aggressive de-growth plan, not the growth one you envisage, if you take seriously the implications of the IPCC’s 1.5C Special Report.
Secondly, far from your not-so-subtle digs at renewables, it is possible to construct an energy regime based on renewables (that does address intermittency issues); try reading reports like Zero Carbon Britain: Rethinking the Future from the Centre for Alternative Technology.
Thirdly, your investors will not thank you if you continue to ignore the serious risks from a ‘carbon bubble’. Claiming a value for BP assets based on unburnable fossil fuels will catch you out sooner or later, and your shareholders, pensioners and many others won’t thank you for your complacency.
Dissembling in respect of your commitment to the transition – which you intend to drag out for as long as possible it seems – will fool no one, and certainly not a public increasingly concerned about the impacts of global warming (and, by the way, also the impacts of plastics – another of your gifts to Mother Earth).
We are out of time.
By investing seriously and urgently in solutions that demonstrate a real commitment to the transition, and in planning to leave a whole lotta reserves in the ground, you can earn the trust of the public.
Change your KPIs to show you have read and understood the science on global warming.
Then you can build a PR campaign that demonstrates honesty and earns trust.
How the world feeds itself while becoming carbon neutral within a few decades (see Note 1), while also protecting biodiversity and respecting other planetary boundaries, is a hugely complex issue.
It is not helped by simplistic arguments on any side of the debate.
Food is much more complex than say, electricity generation or transport, because it brings together so many different interlocking threads, not least our different cultures and trading practices around the world; it cannot be glibly addressed through some technical silver bullet or indeed any single prescription.
Although it seems perfectly possible to have rewilding without conflating this with meat production for human consumption, Knepp Castle Estate clearly see these twinned in their overall vision for the Estate.
Knepp Castle Estate have done some wonderful work in their experiment to rewild the Estate’s farm and this has yielded some great results in promoting biodiversity on the farm.
It is therefore disappointing that Isabella Tree – who runs the Estate with her husband Sir Charles Burrell – decided that the way to counter what she believes are simplistic “exhortations” in favour of veganism is to use strawman arguments, which I will come to in a moment.
The article ends with a statement I think can be defended (even if I disagree with it):
“There’s no question we should all be eating far less meat, and calls for an end to high-carbon, polluting, unethical, intensive forms of grain-fed meat production are commendable. But if your concerns as a vegan are the environment, animal welfare and your own health, then it’s no longer possible to pretend that these are all met simply by giving up meat and dairy.”
The key words here are “simply by”, because of course any diet raises a lot of questions about how food is produced, processed and transported. We all agree it is complicated. We can all agree that a goat farmer in the Himalayas cannot simply adopt the practices of a farmer in England’s green pastures. We need to respect cultural and geographic diversity.
Except her last sentence does not address crop production methods, but simply asserts:
“Counterintuitive as it may seem, adding the occasional organic, pasture-fed steak to your diet could be the right way to square the circle.”
The problem is that to feed the UK or feed the world, we need to know what this means in quantitative terms, and there is really no indication of what a balanced omnivorous diet would look like or how to scale up the Knepp Castle Estate experiment, even for the UK.
We need alternatives, for sure, but any changes will take a long time to make a dent on a global scale. The world could simply follow the example of India with its relatively low level of meat consumption, but any proposed system must be able to scale effectively.
“… shifting the crop calories used for feed and other uses to direct human consumption could potentially feed an additional ∼4 billion people.”
Emily Cassidy et al, Environ. Res. Lett. 8 (2013) 034015
And if our goal is to address climate disruption as well as sustainable agriculture, the land will be in demand for other purposes: crops for human consumption; re-forestation; bio-energy crops; renewable energy assets; etc.
Meat production, whether intensive or in a rewilded context, cannot wish away the basic fact that it is a relatively inefficient way of using land to produce calories.
The UK currently imports over 40% of its food, and on top of that imports soy and other crops for feed for livestock. Of the land we have in the UK, about 50% is given over to grassland for livestock, as illustrated in this Figure from the Zero Carbon Britain Report: Rethinking The Future:
The Centre for Alternative Technology’s alternative, set out in the same report, is aimed at getting the UK to zero carbon; balancing all the sectors that are involved, including food production, but recognising we need to fit everything required into the available land. They arrive at a radically different distribution of land-use:
In their scenario, livestock are not eliminated but are radically reduced.
What is most disappointing about Isabella Tree’s piece in the Guardian is that she feels the need to use Strawman Arguments to support her case (which immediately suggests it has some holes in it):
Strawman argument #1
“Rather than being seduced by exhortations to eat more products made from industrially grown soya, maize and grains, we should be encouraging sustainable forms of meat and dairy production based on traditional rotational systems, permanent pasture and conservation grazing.”
My Response: Well, since most of those crops are grown for animal consumption, that is another reason to release that land to grow sustainable crops (in soil-carbon-caring ways); for forests; for bio-energy crops; for human habitation; etc. The net result of low-intensity meat production is that we would need to massively reduce meat production.
Strawman argument #2
“In the vegan equation, by contrast, the carbon cost of ploughing is rarely considered … up to 70% of the carbon in our cultivated soils has been lost to the atmosphere”
My Response: Untrue. Why do we have the permaculture movement, low-till systems, etc.? And to stress again, the majority of cropland in the UK and US today is used to feed livestock. If we want to improve soil carbon there are many ways of doing it.
I could go on.
She implies that the proposed method of farming will make a big impact on soil carbon sequestration, and there is no doubt that soil plays a hugely important role in carbon sequestration, but this is an area which is very complex. It is reassuring that the article does not make outlandish claims (such as those made by Savory, see Note 2), but again, there is a lack of any estimates as to the extent to which the proposed farming practices would mitigate increases in greenhouse gas emissions. Plausibility arguments won’t cut it I’m afraid.
For those interested in exploring all the questions touched on so far and more besides – with the benefit of some science to back up claims – they could not do better than look at a few of the excellent food research organisations in Oxford.
Isabella Tree acknowledges that we need to reduce meat consumption. No doubt she would agree that the sky-rocketing consumption of meat in China and globally is unsustainable. Here is the current picture:
And as Godfray et al. state in the paper from which I took this Figure:
“It is difficult to envisage how the world could supply a population of 10 billion or more people with the quantity of meat currently consumed in most high-income countries without substantial negative effects on environmental sustainability. “
Godfray et al., Science 361, 243 (2018), 20th July 2018
Yes, it is much more complicated than simply choosing one’s diet, and we must all take care to consider the processes and pathways by which we get our food and how land is used – whether we eat meat or not.
But for many, veganism remains an increasingly obvious option to make an immediate dent in one’s carbon footprint, and it remains a perfectly justifiable choice, whether from an environmental, ethical or scientific standpoint.
It is by no means clear that even as a portion of our weekly diet, rewilded meat will be the solution to the world’s environmental and sustainability challenges, or at least on the timescales required. Veganism can make an immediate impact.
In fact, without a whole lot more vegans on this planet, it is difficult to see how those who want to remain meat eaters can carry on doing so with a clear conscience, given the current (as opposed to, wished for) farming practices.
In the future, meat eaters may have to pay a lot more to eat meat and even then give a big nod of thanks to vegans for making a space for them to do so.
If Isabella Tree’s article was entitled “If you want to save the world, veganism isn’t the whole answer: Intensively farmed meat and dairy are a blight along with the fields of soya and maize they depend on. But there is a case for low levels of meat consumption.” … it would have been less catchy but at least defensible.
Knepp Castle Estate are doing great work showing how to promote biodiversity on their farm, but as a model for feeding the world and preventing dangerous climate disruption, by 2050 or earlier … they have failed to make a convincing case that they have a credible plan.
(c) Richard W. Erskine, 2019
NOTES
Note 1
On our current emissions trajectory, the world “is likely to reach 1.5C between 2030 and 2052”. If we are to avoid a global mean surface temperature rise of 1.5C, net global CO2 emissions need to fall by about 45% from 2010 levels by 2030 and reach “net zero” by around 2050. See Carbon Brief’s ‘In-depth Q&A: The IPCC’s special report on climate change at 1.5C’ for more details. The IPCC’s 1.5C report made it clear that the difference between a 1.5C world and a 2C world was very significant, and so every year counts. The sooner we can peak the atmospheric concentration of greenhouse gases (especially CO2, being long-lived) in the atmosphere, the better.
Note 2
Savory suggested that over a period of 3 or 4 decades you can draw down the whole of the anthropogenic amount that has accumulated (which is nearly 2000 gigatonnes of carbon dioxide), whereas a realistic assessment (e.g. www.drawdown.org) suggests that a figure of 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020-2050 timeframe.
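The gap between the two claims is easy to quantify, using the figures above:

```python
# How far apart are the two claims? Savory's implied drawdown versus
# a realistic assessment (Project Drawdown), figures as quoted above.
savory_claim_gt = 2000  # GtCO2: roughly all accumulated anthropogenic CO2
realistic_gt = 14       # GtCO2 over 2020-2050 (drawdown.org estimate)

ratio = savory_claim_gt / realistic_gt
print(round(ratio))  # 143, i.e. more than 100 times larger
```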
FCRN explored Savory’s methods and claims, and found that despite decades of trying, he has not demonstrated that his methods work. Savory’s case is very weak, and he ends up (in his exchanges with FCRN) almost discounting science, saying his methods are not susceptible to scientific investigation.
In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn’t hold up either. Track down Gattinger et al’s work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4GtC/year (nowhere near annual anthropogenic emissions of 10GtC), and if it cannot keep up with annual emissions, forget soaking up the many decades of historical emissions (the 50% of these that persists for a very long time in the atmosphere), which some are claiming is possible.
We had a seasonal pub lunch with neighbours, and my christmas cracker included the question:
“How many gifts would you have if you received all the gifts in the song ‘The Twelve Days of Christmas’?”
The song indicates the following gifts I will receive on each day:
on the 1st day I’ll receive a partridge in a pear tree;
on the 2nd day, 2 turtle doves and a partridge in a pear tree;
on the 3rd day, 3 French hens, 2 turtle doves and a partridge in a pear tree;
etc.
So, by the twelfth day I will have received a total of:
12 x 1 partridges (each in a pear tree);
11 x 2 turtle doves;
10 x 3 French hens;
… etc; (until we get to)
1 x 12 drummers drumming.
So, the total number of gifts is:
(12×1) + (11×2) + (10×3) + … + (1×12)
which my abstemious wife very rapidly computed is 364 gifts.
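The cracker arithmetic can be checked in a line of Python:

```python
# Total gifts over the twelve days: (12*1) + (11*2) + ... + (1*12),
# i.e. on day i you receive i copies of gift i, repeated on every later day.
total = sum((12 - i + 1) * i for i in range(1, 13))
print(total)  # 364
```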
By which time and after a few glasses of wine, I was of course wanting a more general result, so I declared:
“What about the number of gifts on the Nth day of Christmas?”
My wife mumbled “here we go!”, and by then my pen and paper napkin were at the ready …
Assuming the general gifts were denoted g1, g2, g3, …, gN, then we’d end up with…
N x 1 of gift g1
(N-1) x 2 of gift g2
(N-2) x 3 of gift g3
… etc. until
1 x N of gift gN
Let’s call the total number of gifts arrived at as G(N). So as an example, we already know that G(12) = 364
In mathematical notation I can write this in a different way (see Note 1), and solve the equation to show that …
G(N) = (1/6) * N * (N+1) * (N+2)
Testing this equation for case of N=12 I get
G(12) = (1/6) * 12 * 13 *14
= 2 *13 * 14
= 364
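The closed form can be checked against the direct sum for the first few N (a quick sanity check; the function names are mine):

```python
# Check the closed form G(N) = N(N+1)(N+2)/6 against the direct sum.
def gifts_direct(n):
    # G(n) = n*1 + (n-1)*2 + ... + 1*n
    return sum((n - i + 1) * i for i in range(1, n + 1))

def gifts_formula(n):
    # n(n+1)(n+2) is a product of three consecutive integers,
    # so it is always divisible by 6.
    return n * (n + 1) * (n + 2) // 6

for n in range(1, 21):
    assert gifts_direct(n) == gifts_formula(n)

print(gifts_formula(12))  # 364
```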
Job done!
When I got home I wondered if there was a geometrical way of deriving this result, rather like the trick that Gauss used as a young boy when the teacher asked the class to add the whole numbers from 1 to 100 (see Note 2).
I rather like the visual proof which I can show for N=4 as:
which generally (for N rather than 4), expressed algebraically, becomes
N∑[i] = (N² – N)/2 + N
= (N²/2) – N/2 + N
= (N²/2) + N/2
= (1/2) * N * (N+1)
My question to myself was can we do a similar visual trick with the Nth Days of Christmas sum? (I say we, but without the genius Gauss to assist me!).
We have to go three dimensional now to build a picture of the number. The child’s blocks I could find were too few in number and we don’t have sugar cubes, but we do have veggie stock cubes! So, I created the following …
The left hand portion represents 3×1 + 2×2 + 1×3 which is G(3)
The same number of blocks is in the right portion (in mirror image).
In the middle I have added 1+2+3+4 which is the familiar 4∑[i]
Put these all together and the picture is as follows:
which is clearly 1² + 2² + 3² + 4², which is the familiar 4∑[i²]
That’s a nice pictorial solution of a kind.
So in algebraic terms that gives
2 G(3) + 4∑[i] = 4∑[i²]
This gives me an algebraic solution that is not any simpler than the original solution I made on the napkin. The stock cubes give me:
G(N-1) = (1/2) * ( N∑[i²] – N∑[i] )
which can be solved (Note 3) to give
G(N) = (1/6) * N * (N+1) * (N+2)
as before.
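The stock-cube identity can also be checked numerically (function names are mine):

```python
# Numerical check of the stock-cube identity:
# 2 * G(N-1) + sum(1..N) = sum of squares(1..N),
# equivalently G(N-1) = (sum of squares - sum) / 2.
def G(n):
    # G(n) = n*1 + (n-1)*2 + ... + 1*n
    return sum((n - i + 1) * i for i in range(1, n + 1))

for n in range(2, 21):
    squares = sum(i * i for i in range(1, n + 1))
    triangle = sum(range(1, n + 1))
    assert 2 * G(n - 1) + triangle == squares

print(G(3))  # 10, the number of cubes in each mirrored portion
```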
However, I felt I had failed in my quest to avoid algebra or at least a much simpler algebraic resolution. Ultimately I couldn’t find one, but the visualization is at least a great way to play with the number relationships.
At least I will be very quick with the answer if ever I am asked
“How many gifts would you have if you received all the gifts in the general song ‘The N Days of Christmas’?”
“Oh, that’s easy, it’s one sixth of N, times N plus one, times N plus two.”
Richard W. Erskine, 30th December 2018
Note 1
G(N) can be written as the following sum:
G(N) = N∑[(N – i + 1) * (i)]
where N∑[] is shorthand for “sum of expression […] for i ranging from 1 to N”
Expanding the expression, I get
G(N) = ( (N+1) * N∑[i] ) – N∑[i²]
Now, there are well known results that give us, firstly
N∑[i] = (1/2) * N * (N+1)
and secondly,
N∑[i²] = (1/6) * N * (N+1) * (2N + 1)
So, combining these I get,
G(N) = ((N+1) * (1/2) * N * (N+1) ) – ((1/6) * N * (N+1) * (2N + 1))
Taking out a common factor (1/6) * N * (N+1), this becomes
G(N) = (1/6) * N * (N+1) * { 3*(N+1) – (2N + 1) }
Simplifying { 3*(N+1) – (2N + 1) } I get {N+2}, so
G(N) = (1/6) * N * (N+1) * (N+2)
Note 2
Gauss as a boy spotted a short-cut, which can be seen in the following picture:
The sum 1+2+3+4 is represented by the shaded blocks, and the unshaded blocks are the same sum in reverse order. So, we can see that the two copies together fill a 4 × 5 rectangle, and hence 1+2+3+4 = (4 × 5)/2 = 10; in general, N∑[i] = (1/2) * N * (N+1).
Royal Dutch Shell plc, or Shell for short, have issued a statement, under pressure from institutional investors, on how they will contribute to achieving the Paris climate change commitments. They state:
“Shell fully supports the Paris Agreement and believes that society has the scientific and technical knowledge to achieve a world where global warming is limited to well below 2°C.” (Ref. 1)
That sounds pretty unequivocal, and Shell are spending a lot of money aiming to persuade us that they are indeed serious. Let’s take a look at their claims.
Reuters reported in March 2018 that:
“Shell, the world’s top trader of liquefied natural gas, currently produces around 3.7 million barrels of oil equivalent per day, of which roughly half is natural gas.” (Ref. 2)
That’s 1.35 billion barrels per year, of which 50% is natural gas. This equates to about 0.5 billion tonnes of CO2 equivalent per year, or 0.5 GtCO2e/yr for short, from the end-use emissions from their products (Note 1).
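The conversion from barrels to end-use CO₂e can be sketched as follows; the blended factor of roughly 0.4 tonnes of CO₂e per barrel of oil equivalent is my own round-number assumption for illustration (oil and gas have different factors, as Note 1 discusses):

```python
# Rough conversion of Shell's output to end-use CO2e.
# The ~0.4 tCO2e per barrel-of-oil-equivalent is an assumed blended
# factor for illustration only; oil and gas differ (see Note 1).
boe_per_day = 3.7e6
boe_per_year = boe_per_day * 365  # ~1.35 billion boe per year
tco2e_per_boe = 0.4               # assumed blended emission factor

gt_co2e = boe_per_year * tco2e_per_boe / 1e9  # convert tonnes to Gt
print(f"{boe_per_year/1e9:.2f} billion boe -> ~{gt_co2e:.1f} GtCO2e/yr")
```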
Shell is claiming to take a lead on emissions reductions, but take a look at Shell’s own statement of ‘direct emissions’ (those resulting from running their own facilities):
“The direct greenhouse gas (GHG) emissions from facilities we operate were 73 million tonnes on a CO2-equivalent basis in 2017, … The indirect GHG emissions from the energy we purchased (electricity, heat and steam) were 12 million tonnes on a CO2-equivalent basis in 2017” (Ref. 7)
So Shell are focusing on these production-based emissions totalling 85 million tonnes of CO2e in 2017, or 0.085 GtCO2e.
From the above figures we see that their production-related emissions of CO2e are nearly 15% of the net CO2e resulting from production and end-use (see Note 2). This is by no means a trivial part of their net emissions, but it is no surprise that their marketing focuses on their production methods, not the 85% coming from end-use of their products, which they clearly cannot mitigate except by not producing them in the first place.
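The 15% figure follows directly from the two emissions numbers:

```python
# Shell's production-related emissions as a share of the total
# (production + end-use), using the figures quoted above.
production = 0.085  # GtCO2e/yr (direct + purchased energy, 2017)
end_use = 0.5       # GtCO2e/yr (end-use of products, estimated)

share = production / (production + end_use)
print(f"{share:.1%}")  # 14.5%
```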
This explains why Shell, in the disclaimer to their statement to institutional investors, state:
“Also, in this statement we may refer to “Net Carbon Footprint” or “NCF, which includes Shell’s carbon emissions from the production of our energy products, our suppliers’ carbon emissions in supplying energy for that production and our customers’ carbon emissions associated with their use of the energy products we sell. Shell only controls its own emissions but, to support society in achieving the Paris Agreement goals, we aim to help and influence such suppliers and consumers to likewise lower their emissions. The use of the terminology “Net Carbon Footprint” is for convenience only and not intended to suggest these emissions are those of Shell or its subsidiaries.” (Ref. 1)
The key words are “Shell only controls its own emissions”; it merely offers to “support society” in meeting Paris Agreement goals.
Off the agenda of this statement is any suggestion of keeping fossil fuels in the ground or an acknowledgement of the devastating implications of the IPCC’s 1.5°C special report.
They plan to boost natural gas extraction, tripling it by 2050 according to the Reuters report. Citing measures such as the use of CCS (Carbon Capture and Storage), they state an aspiration to halve the ‘Net Carbon Footprint’ (which includes end-use emissions) by 2050. This may seem an ambitious and welcome commitment for a fossil fuel major, but it fails to acknowledge the urgency with which we must decarbonise energy, and relies on the same magical thinking about the massive scaling of CCS technologies by 2050 that many policy-makers are prone to.
This is a self-administered license to carry on extracting and selling fossil fuels.
Shell have been flooding the media with reports of how they are reducing carbon emissions, and they fund events such as New Scientist Live 2018, where they had a large stand in the middle of the exhibition hall; right next to BP’s stand offering, you guessed it, the same soothing words on emissions reductions. They will be back for more next year (Ref. 8).
If Shell are the trail-blazers amongst the fossil fuel majors, then what to expect from the laggards? 40% reduction by 2050, or 30%, or 20%, or less? What is the ambition of the industry as a whole, which last year was responsible for nearly 37 GtCO2 overall (Ref. 5)?
But they, like the other carbon majors and all the minors, are collectively in denial about the challenge we face, and the urgency required: we must get to net zero emissions by 2050 or earlier, not merely a 50% cut.
Shell can, as they admit, only seriously impact the 15% (production emissions), not the 85% (end-use emissions), of their fossil fuel cake, when the real issue is that we need a radically shrinking cake, not the growing one we have today.
I regard the focus on production emissions as a distraction tactic, trying to deflect the discussion away from the calls to ‘keep it in the ground’.
Unfortunately for Shell and other gas majors, the science is showing we have run out of time.
‘Keep it in the ground, keep it in the ground’, the protestors cried at COP24.
They at least, will not be distracted by the latest greenwash from Shell and the others.
o o O o o
(c) Richard W. Erskine, 19th December 2018
Notes
Note 1: This assumes 0.43 metric tonnes of CO2 per barrel of oil (Ref. 3), and uses 75% of this value for natural gas (Ref. 4). The figure of 0.5 GtCO2e for Shell aligns with the figure shown in the CDP Carbon Majors Report 2017 (Ref. 6). Note also that 5,800 cubic feet of natural gas is equal (on an energy basis) to a Barrel of Oil Equivalent https://en.wikipedia.org/wiki/Barrel_of_oil_equivalent
Note 2: Add the 0.5 GtCO2e from the end-use burning of their products and we see that this 0.085 GtCO2e is nearly 15% of the total for which Shell is ultimately responsible. Note also that CO2e, or CO2 equivalent, includes the CO2 resulting from combustion as well as any leakages of methane, with the methane contribution converted into the equivalent amount (by its warming potential) of CO2.
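For readers who want to check the arithmetic in the notes above, here is a minimal sketch in Python. It simply re-applies the stated assumptions (0.43 tCO2 per barrel of oil, 75% of that for gas on a barrel-of-oil-equivalent basis, and a 50/50 oil/gas split of the 1.35 billion barrels); the variable names are mine, for illustration only.

```python
# Rough check of the Shell figures quoted above (Notes 1 and 2).
barrels_per_year = 1.35e9          # total output, barrels of oil equivalent
oil_share, gas_share = 0.5, 0.5    # half of the output is natural gas
t_co2_per_barrel_oil = 0.43        # tonnes CO2 per barrel of oil (Ref. 3)
gas_factor = 0.75                  # gas emits ~75% of oil's CO2 per BOE (Ref. 4)

# End-use emissions from burning the products, in GtCO2e per year
end_use_gt = barrels_per_year * t_co2_per_barrel_oil * (
    oil_share + gas_share * gas_factor) / 1e9

production_gt = 0.085              # Shell's stated direct + indirect emissions (Ref. 7)
production_share = production_gt / (production_gt + end_use_gt)
```

Running this gives an end-use figure of about 0.51 GtCO2e/yr and a production share of just over 14%, matching the “about 0.5 GtCO2e” and “nearly 15%” quoted in the text.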
“there are some fundamental difficulties with story-telling from data. Classic narratives have an emotional hit to the reader, they reveal a clear causal path, and have a neat conclusion. But science and statistics are not like that – they can seem impersonal, they don’t have a clear chain of causation, and their results are often ambiguous.”
In other words, when we convey facts through narrative we often seek certainty whereas paradoxically, scientists are the ones often having to grapple with uncertainty and ambiguity. In our popular imaginations, we might think that the reverse was true.
Yet on global warming, the irreducible uncertainty is increasingly concerning the when, not the if, of serious impacts. By debating the when (and trying to put a date on it), we are in danger of losing sight of the grindingly unavoidable fact that if a tsunami is heading your way, and you are on the beach, the exact ‘when’ is somewhat academic; the imperative is to run like heck to high ground!
So the questions being raised – how bad could it get and how soon – fit on a spectrum of possibilities. Our responses also fit on a spectrum, concerning how much work we are prepared to put in to limit the impacts.
How much urgency are we prepared to put in to limiting the impacts or adapting to them, and will this be fair to everyone? Will it be fair to the energy poor of the UK, to rural Indians, to the flora and fauna already experiencing catastrophic losses (and set to escalate)?
If we miss the 1.5°C goal, can we limit it to 1.75°C? And if not 1.75°C, then maybe 2°C? And if not 2°C, then can we limit it to 2.5°C? The impacts are not ‘linearly related’ to temperature rise. There is an escalating level of impacts that ensue in areas such as heat stress, species loss, sea-level rise, crop yields, and more, and there are ‘tipping points’ that can create nasty surprises at multiple stages on this rising, jagged curve.
The context for this latest report was the Paris Agreement – arising from the 21st Conference Of the Parties (COP) to the UN Framework Convention on Climate Change (UNFCCC) – held in Paris in December 2015. Hitherto, the UNFCCC had discussed policy aimed at ‘avoiding dangerous climate change’, which was deemed to be a 2°C GMST rise. The UNFCCC in Paris was basing policy in part on the scientific input of the 5th Assessment Report by the IPCC (Intergovernmental Panel on Climate Change) published in 2013/2014. However, low-lying countries and those prone to the worst impacts of climate change requested that there be an investigation of the feasibility of limiting the GMST rise to a more ambitious 1.5°C, and also of the benefits (in terms of reduced impacts) of 1.5°C as compared to 2°C.
“Human activities are estimated to have caused approx. 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C. Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate. (high confidence)”
So, strictly speaking, it is ‘likely’ (meaning at least a 66% chance; for those that read footnotes) that we have between 12 and 34 years before we cross the 1.5°C threshold, unless we do something to improve on this prognosis.
So is 1.5°C more special than all the other thresholds laid out before us? Well, it is an upcoming and fast approaching threshold, and so, theoretically avoidable.
“The first is that limiting warming to 1.5°C brings a lot of benefits compared with limiting it to two degrees. It really reduces the impacts of climate change in very important ways,” said Prof Jim Skea, who co-chairs the IPCC.
“The second is the unprecedented nature of the changes that are required if we are to limit warming to 1.5°C – changes to energy systems, changes to the way we manage land, changes to the way we move around with transportation.”
The 1.5°C report found – to the surprise of many – that there were significant benefits to keeping GMST to this lower level. For example, by 2100:
14% of the world’s population will be exposed to extreme heat (as experienced in southeastern Europe) at least once every 5 years in a 1.5°C world, but this rises to 37% in a 2°C world;
Some sea-ice will remain in the Arctic in most summers in a 1.5°C world, but ice-free summers are 10 times more likely in a 2°C world;
Urban populations exposed to water scarcity would increase from 250 million in a 1.5°C world to 411 million in a 2°C world;
Species loss for insects, plants and vertebrates would increase from 6, 8 and 4% to 18, 16 and 8%, respectively;
Coral reefs would suffer frequent mass mortalities in a 1.5°C world, but would mostly disappear in a 2°C world;
Crop yields would be lower in a 2°C world, especially in sub-Saharan Africa, Southeast Asia, and Central and South America.
We see how there is a definite ‘non-linearity’ occurring in some of the examples.
For some natural systems that have been able to recover from naturally occurring extremes in climate in the past, the future is a marked change. The effect of repeated, closely spaced extreme conditions means that they then fail to recover. Like a boxer that has been floored, they may get up once, or even twice, but at some point they stay down.
So coral reefs are hurting badly today (in a 1°C world), and will manage to cling on in a 1.5°C world, but will disappear in a 2°C world. This is a graph where the line falls off the cliff; no comforting linearity here.
A key concept introduced in the 5th Assessment Report was the ‘carbon budget’ – the maximum amount of cumulative carbon emissions allowable to stay within a target GMST rise.
The news which, if not good, is at least something of a relief, is that the so-called ‘committed warming’ due to emissions to-date (all that heat locked up in the oceans that will continue to drive increases in atmospheric temperature / GMST rise until the system reaches equilibrium again) is less than 1.5°C; although changes (e.g. to sea-level rise) will continue for centuries to millennia.
The not so good news (or rather extremely challenging news) is that 2030 emissions must reduce by 45% versus 2010 emissions to achieve 1.5°C, and get to zero by 2050 (note also that for 2°C, 2030 emissions would need to reduce by 20% versus 2010, and get to zero by 2075).
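To get a feel for what those milestones imply, here is a back-of-envelope sketch of my own (not from the report): it assumes, unrealistically, straight-line reductions between the milestone years, with everything expressed as a fraction of the 2010 emissions level. The actual IPCC pathways are not linear.

```python
# Percentage points of the 2010 emissions level to be cut each year,
# assuming straight-line reductions between two milestone years.
def annual_linear_cut(start_year, start_frac, end_year, end_frac):
    return (start_frac - end_frac) / (end_year - start_year) * 100

# 1.5°C pathway: down to 55% of 2010 by 2030, zero by 2050
cut_15C = annual_linear_cut(2030, 0.55, 2050, 0.0)   # 2.75 points per year
# 2°C pathway: down to 80% of 2010 by 2030, zero by 2075
cut_20C = annual_linear_cut(2030, 0.80, 2075, 0.0)   # ~1.8 points per year
```

Even on this crude linear view, the 1.5°C pathway demands cuts of nearly 3% of the 2010 level every single year after 2030, in every sector, worldwide; which is the sense in which the transitions are unprecedented in scale.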
The IPCC make it clear that achieving 1.5°C would require rapid and far-reaching transitions in energy, land, urban and infrastructure (transport and buildings), and industry.
These system transitions are unprecedented in terms of scale, but not necessarily in terms of speed (we have all seen how quickly cars replaced horse-drawn carriages in New York City), and imply deep emissions reductions in all sectors, a wide portfolio of mitigation options and upscaling of investments.
However, the 1.5°C goal cannot be achieved solely by decarbonising sectors; this must be supplemented by technologies that will take ‘carbon’ out of the atmosphere and effectively bury it. The 1.5°C pathways explored by the IPCC assume between 100 and 1000 billion tonnes of carbon dioxide removal (CDR) over the 21st Century.
The most popular form of CDR currently being investigated is BECCS, which stands for Bio-energy with Carbon Capture and Storage. It works by growing plants/ trees that will capture carbon, and these are then burned to produce energy; but rather than re-release the carbon into the atmosphere, this is extracted from the exhaust gas stream, then buried in deep geological structures where CO₂ will remain in a condensed state.
The required transition can be summarised in a graphic (produced by Glen Peters from the CICERO Institute), consistent with the median scenario from the IPCC 1.5°C report:
(Credit: @Peters_Glen, 15th October 2018, On Twitter)
This graph illustrates that human activities would (by 2100) need to move from the current situation – a net source of 40 billion tonnes of carbon dioxide (40GtCO₂) – to becoming a net sink of -15GtCO₂. Coal would be eliminated, and oil would almost be; gas would be uncertain, but any that was used would need to be combined with CDR; land-use would need to move from being a net producer of greenhouse gas emissions to a net sink; and CDR/BECCS would have to be massively scaled up.
Many question the feasibility of such a large roll-out of CDR, requiring perhaps 2 or 3 times the area of India for the energy crops required (while at the same time, there are many other pressures on the land, not least, feeding a population that could grow/ stabilize at about 10 billion).
The graph above shows a complete change in the net CO₂ but this is like turning around a very large tanker on a sixpence.
Personally, I would conclude that the very tough challenge of keeping warming below 2°C has just got even tougher. The scale and range of changes needed requires something like a Marshall Plan for the whole world to stay below 1.5°C.
This is why many commentators, such as Professor Kevin Anderson at the Tyndall Centre in Manchester, say that the only way we can square the circle is through a massive reduction in consumption (particularly amongst the high emitters). He has noted that if the top 10% of emitters reduced their emissions to the average European level, that would equate to a 33% reduction in global emissions!
We have run out of time to decarbonise all sectors fast enough, so are creating ‘magical thinking’ to imagine a scale of CDR that would allow us to continue to consume at the current rate (in the high emitting countries).
So the picture is mixed. Yes, the 1.5°C target is extremely important and worthwhile because it brings so many benefits as compared to 2°C. However, the already very challenging goal of decarbonising the world’s economy (both the established countries and those seeking justice and development), is made even harder.
Irrespective of whether you are optimistic or pessimistic by nature, the fact is that we have to some extent already left it too late to avoid serious impacts. Whatever level of tolerance to risk we choose for ourselves and our families, by failing to seriously engage in action now, we are in effect making choices for our neighbours, for those communities and ecosystems that may not have the resources to adapt as well as we can.
Whether we imagine there are 0 years left to take action, or 12, or 34, or any other number that lies within the reasonable range between bad and catastrophic, we are really out of time. 15 years is a flick of the fingers in terms of transforming all sectors in all countries.
We should not hold up “12 years” as some magical number that is a binary switch between “we’ll be OK” and “it’s Armageddon”, but rather as yet another milestone on the slow, and somewhat slippery, path towards a very dangerous future.
Where each of us – as individuals, communities, countries or at whatever scale we wish to frame it – have started to take meaningful action, we should celebrate that and strive for wider and deeper change. Where this is not happening, we have to say that the sooner this process starts the better.
As Twitter just reminded me…
“The best time to start was 20 years ago, the second best time is now”
Let’s not wait another 12 years to act on the scale required.
Most people are naturally conservative with a small ‘c’ – they really find it very difficult to change.
For nearly 30 years after leaving academia, I spent a lot of time helping organizations be better at managing their enterprise information and retaining knowledge. Many skills are required to help such aspirations to be realised, and not merely technical ones; as I discussed in The Zeitgeist of the Coder.
As always, techies would run around thinking they had a silver bullet that would make people adopt new practices, simply by installing the software and with a few hours of training. Time and time again I would find that an organization that claimed to have made a big change hadn’t. They had changed very little because no real effort had been put into the changes in behaviour that are required to ensure that the claimed outcomes of an enterprise system rollout would actually transpire.
Old habits die hard.
In my large tool-box of diagrams I used when consulting is the following figure, which originates from the field of ‘business change management’ (I have been using it for many years; long before I became active in climate change).
The denial (or change) curve is just a name for the grey path from Denial to Commitment, with each stage described as follows:
Denial – people do not believe that the change is needed or will really happen, focusing on business as usual and not engaging their own feelings.
Resistance – people now know change is coming and engage their feelings of anxiety, anger and rejection. The focus is personal resistance, not the wider organization. It can be disruptive and even counter to one’s interests.
Exploration – a switch occurs whereby people recognize need for personal change and start to explore new ways of working.
Commitment – people gain mastery of new ways of working and the focus moves from the personal to the organization – and participating in helping to make the change a success.
The denial referred to here is the regular kind of denial we are all prone to, and is well known to psychologists. In the business context I worked in, it was an illusion to imagine you could get staff to jump en masse from Denial to Commitment.
Denial is more characterised by folded arms and non-communication than by argument or engagement; denial of this kind is a shutting out of the possibility of change, not arguing against it.
Resistance is different. The resentment and anger that comes with Resistance is almost a necessary part of the journey; finding reasons why the change won’t work, and using active measures to frustrate implementation. This can be loud and angry.
Only when the benefits of at least the promise of change start to become appreciated, does Exploration begin, and while there will still be arguments, they move from destructive ones (“It can’t work”) to constructive alternatives (“I don’t see how it can work, but show me”).
Commitment follows, and those that have made this journey are much better at helping others tread in their footsteps than an external consultant. Personal journeys get transformed into communal ones. There is a tipping point, when enough people are reinforcing the positives so that everyone wants to join the party.
Of course, getting action on global warming and decarbonising our economy is much tougher than getting a large organization to adopt new practices, but maybe there is a lesson here.
For one thing, there is nothing wrong in using the word ‘denial’. Some claim that this is being conflated with ‘holocaust denial’ and is therefore an outrageous slur on ‘contrarians’ (it is always contrarians that make this claim and merely, I would argue, because they wish to deflect criticism and adopt a posture of victimhood, whereas they are the aggressors).
In any case, I am not so much interested in the tiny percentage of politically motivated contrarians, or ‘ideological denialists’ as I would prefer to call them, even though some are in highly influential positions.
We have to find ways to work around them, rather than give them too much of our time (although it has to be said, I am frustrated at how much airtime these people get on Twitter; and maybe that is a problem with Twitter itself as a platform). Their attention seeking behaviour is self-reinforcing and will not be a path to change (ask a psychologist), and it is wasting time that should be focused on the conversations that really matter.
The great majority of people fit more easily into the standard psychological meaning of the word denial; they are blocking their eyes and ears hoping it will go away.
La la la … la la la.
We are all in denial in some sense and to some level; otherwise we’d be seeing a rebellion, wouldn’t we?
It is surely unrealistic to expect it will be a smooth and easy (psychological) journey, from Denial to Commitment, and that we can convince people with a few graphs. We can show all the data in the world – on the efficiency of heat pumps; the health benefits of low meat and EV buses; the falling costs of renewables; the ecological impacts underway; or whatever – but until this becomes internalised as the way we think and act, every day and in every way, it will not lead to measurable outcomes.
We have to pass through The Denial Curve – the pain and anger of the ‘loss’, for what we assumed was forever. That high consumption, limitless travelling, throwaway culture, and our infinite planet, with a mode of consumption that we have somehow slipped into sometime in the 60s or 70s. We have to shake ourselves out of a kind of consumerist trance.
We need to make so many changes at so many levels that the change is bound to create huge anxiety and then, of course, Resistance.
If people are resentful at feeling trapped between a rock and a hard place – between the terrifying consequences of inaction and the trapped-in-the-headlights ‘conservative’ preference for inaction – then that is entirely natural. Yet this is exactly the ‘tension’ that needs to be explored, and ultimately, needs to be resolved in each of us, and in our communities.
I sense that there is already a growing number of people who have moved from cross-armed denial, to resentment and hence Resistance. This is to be expected, and we should expect it to get a lot louder. We’ll need strong leaders and ‘counsellors’ amongst us, to help guide people on the journey; despite the noises off.
I would argue therefore that we should embrace The Denial Curve, rather than get stuck in the loop of raging against denial per se (that’s what the ‘ideological denialists’ want!).
If we can reassure people and find ways to help them transition from Resistance to Exploration, then that is the hardest part. Commitment will follow.
Those pesky Europeans, imposing their values on us – you know, a belief in the rule of law and all that. How dare they!
Freed from this prison, the UK can forge a new future with the world, based on emerging economies, and Britain’s long-established record for gun-boat diplomacy.
What can possibly go wrong?
China, for example, has emerged as a powerhouse likely to overtake the USA as the largest economy in the world by 2050. Ok, so they execute more people than the rest of the world put together and lock up millions – anyone who does not support the ruling dictatorship – for ‘re-education’.
Sunlit uplands.
Hey ho, at least they want to buy our stuff, so what’s not to like?
Ok, so they actually want to steal our stuff – the stuff they have not already hacked using their superior mathematicians (check out the Olympiad results) – but it will then be handed over legally. As the mafia discovered, the easiest way to rob a bank is to own one. And the easiest way to steal from, say, Rolls Royce is to own it. Expect it: a matter of when, not if.
Sunlit uplands.
And what about weapons? Well Saudi Arabia is a great client, and the advantage of having an on-going war – namely pulverising Yemen back to the dark ages and murdering children without challenge (let alone journalists) – is that there is such a great repeat-business order book. BAE Systems shareholders are smiling all the way to the bank.
Sunlit uplands.
There seems no end to powerful and anti-democratic forces who want our stuff.
Let’s cut ourselves off from the cultures that we spent hundreds of years wrestling with – in war and peace – and ultimately worked together with to create a platform for peace, diversity and sharing, of hope, and collaboration. Be it the scientific endeavours, or the regulations that allow safe medicines across Europe, or the protection of consumer rights in telecommunications or even swimming on beaches (without going through the motions).
Who exactly are our friends?
Those in Russia, China and Saudi Arabia that have a long history of suppressing freedoms, or those in the EU that have a long history of non-conformism and defence of freedoms, even in the face of despots (Diderot eat your heart out)?
As we confront issues such as the unrestricted power of Google and Facebook, or the issue of man-made Global Warming, do we trust the USA, China, India or Russia to act as our friends, and in our interests? Not bloody likely.
Oh, but <keep chanting in the dark> ‘sunlit uplands’ (you know it makes you feel better, you just have to believe and everything will work out – even Chris Grayling’s 50 mile tailbacks for lorries – no pain, no gain).
Is there another path?
We could work with Europeans to push for change and to continue the tradition of European enlightenment and rebellion against elites, towards a better world, however flawed – including radical reform of the EU.
Inside the tent, we have a chance to make the changes, but outside it we merely become prey to deals with those that neither share our values, nor value them. Those who turn our creativity into death and destruction. Is that what we want?
How would you choose in this turbulent world?
Will the UK ultimately find itself a slave to China, where we will have to attend re-education camps in Milton Keynes; where we will have to unlearn the Glorious Revolution (such a dangerous idea)?
Fanciful?
No less so than the sunlit uplands of post-Brexit Britain we have been promised in the false prospectus of Boris Johnson or Jacob Rees-Mogg; a race-to-the-bottom future of vestigial government, and the power of moneyed elites, who want to frame our future in 19th Century terms.
How about a 21st Century world where we are leaders in Europe, using our talents in genomics, engineering, and yes, regulation (we Brits are geniuses at that), to build a better, safer world. Where we transition our industries to confront climate change, mental health, ecocide, the digital economy and other great challenges; as Brits and as Europeans.
I attended an inspiring talk by Chris Packham in Stroud at the launch of Stroud Nature’s season of events. Chris was there to show his photographs but naturally ranged over many topics close to his heart.
These are just two stats in a long list that attest to this catastrophe.
Chris talked about how brilliant amateur naturalists are in the UK – better than in any other country – in the recording of flora and fauna. They are amateur only in the sense that they do not get paid, but highly professional in the quality of their work. That is why we know about the drop in species numbers in such comprehensive detail. It appears that this love of data is not a new phenomenon.
I have been a lover of butterflies since very young. I came into possession of a family heirloom when I was just 7 years old which gave a complete record of the natural history of butterflies and moths in Great Britain in the 1870s. Part of what made this book so glorious was the intimate accounts of amateur scientists who meticulously recorded sightings and corresponded through letters and journals.
The Brits it seems are crazy about nature, and have this ability to record and document. We love our tick boxes and lists, and documenting things. It’s part of our culture.
I remember once doing a consultancy for a German car manufacturer who got a little irritated by our British team’s insistence on recording all meetings and then reminding the client of agreed points later, when they tried to change the requirements late in the project: “you Brits do love to write things down, don’t you!”.
Yes we do.
But there is a puzzling contradiction here. We love nature, we love recording data, but somehow have allowed species to be harmed, and have failed to stop this? Is this a naive trust in institutions to act on our behalf, or lack of knowledge in the wider population as to the scale of the loss?
I heard it said once (but struggle to find the appropriate reference) that the Normans were delighted after conquering Britain in 1066 to find that unlike most of Europe, the British had a highly organised administration and people paid their dues. Has anything changed?
But we have our limits. Thatcher’s poll tax demonstrated her lack of understanding of the British character. We will riot when pushed too hard – and I don’t know what you think, but by god they frighten me (as someone might have said). Mind you, I can imagine British rioters forming an orderly queue to collect their Molotov Cocktails. Queue jumping is the ultimate sin. Rules must be obeyed.
I have a friend in the finance sector, and we were having a chat about regulations. I asked if it was true in his sector that Brussels ‘dictated’ unreasonable regulations. “Not at all”, he said. “For one thing, Brits are the rule writers par excellence, and the Brits will often gold-plate a regulation from Brussels.”
Now, I am sure some will argue that yes, we Brits are rule followers and love a good rule, but would prefer it if it is always our rules, and solely our rules. Great idea except that it is a total illusion to imagine that we can trade in high value goods and services without agreeing on rules with other countries.
In sectors like Chemicals and Pharmaceuticals where the UK excels, there are not only European regulations (concerning safety, licensing, event reporting, etc. – all very reasonable and obvious regulations by the way) but International ones. In Pharma, the ICH.org has Harmonization in its title for a reason, and is increasingly global in nature.
Innovation should be about developing the best medicines, not reinventing protocols for drug trials or the design of a drug dossier used for multi-country licensing applications. One can develop an economy on a level playing field.
The complete freedom the hard-right Brexiteers dream of rather highlights their complete lack of knowledge of how the world works.
Do we really think we can tear up regulations such as REACH and still trade in Chemicals, in Europe or even elsewhere?
And are we really going to tear up the Bathing Water Directive?
Maybe Jacob Rees-Mogg fancies going to the beach and rediscovering the delights of going through the motions, but I suspect the Great British Public might well riot at the suggestion, or at least, get very cross.
My wife and I were on our annual week-end trip to Cambridge to meet up with my old Darwinian friend Chris and his wife, for the usual round of reminiscing, punting and all that. On the Saturday (12th May) we decided to go to Kettle’s Yard to see the house and its exhibition and take in a light lunch.
As we were about to get our (free) tickets for the house visit, we saw people in T-shirts publicising a Gurdon Institute special event in partnership with Kettle’s Yard that we had been unaware of:
Experiments in Art & Science
A new collaboration between three contemporary artists
This immediately grabbed our attention and we changed tack, and went to the presentation and discussion panel, intrigued to learn more about the project.
The Gurdon Institute do research exploring the relationship between human disease and development, through all stages of life. They use the tools of molecular biology, including model systems that share a lot of their genetic make-up with humans. There were fascinating insights into how the environment can influence creatures, in ways that force us to relax Crick’s famous ‘Central Dogma’. But I am jumping into the science of what I saw, and the purpose of this essay is to explore the relationship between art and science.
I was interested to learn if this project was about making the science more accessible – to draw in those who may be overwhelmed by the complexities of scientific methods – and to provide at least some insight into the work of scientists. Or maybe something deeper, that might be more of an equal partnership between art and science, in a two-way exchange of insights.
I was particularly intrigued by Rachel’s exploration of the memory of trauma, and the deep past revealed in the behaviour of worms, and their role as custodians of nature; of Turing’s morphogenesis, fractals and the emergence of self-similarity at many scales. A heady mix of ideas in the early stages of seeking expression.
David’s exploratory animations of moving through neural networks were also captivating.
As the scientists there noted, the purpose of the art may not be so much as to precisely articulate new questions, but rather to help them to stand back and see their science through fresh eyes, and maybe find unexpected connections.
In our modern world it has almost become an article of faith that science and art occupy two entirely distinct ways of seeing the world, but there was a time, as my friend Chris pointed out, when this distinction would not have been recognised.
Even within a particular department – be it mathematics or molecular biology – the division and sub-division of specialities makes it harder and harder for scientists to comprehend even what is happening in the next room. The funding of science demands a kind of determinism in the production of results which promotes this specialisation. It is a worrying trend because it is anathema to playfulness and inter-disciplinary collaboration.
This makes the Wellcome Trust’s support for the Gurdon Institute and for this Science-Art collaboration all the more refreshing.
Some mathematicians have noted that even within the arcane worlds of number theory, group theory and the rest, it will only be through the combining of mathematical disciplines that some of the long-standing unresolved questions of mathematics will be solved.
In areas such as climate change it was recognised in the late 1950s that we needed to bring together a diverse range of disciplines to get to grips with the causes and consequences of man-made global warming: meteorologists, atmospheric chemists, glaciologists, marine biologists, and so many more.
Complex questions such as land-use and human civilisation show how we must broaden this even further, to embrace geography, culture and even history, if we are really to understand how to frame solutions to climate change.
In many ways those (in my days) unloved disciplines such as geography, show their true colours as great integrators of knowledge – from human geography to history, from glaciology to food production – and we begin to understand that a little humility is no bad thing when we come to try to understand complex problems. Inter-disciplinary working is not just a fad; it could be the key to unlock complex problems that no single discipline can resolve.
Leonardo da Vinci was both artist and scientist. Ok, so not a scientist in the modern sense that David Wootton explores in his book The Invention of Science, ushered in by the Enlightenment, but surely a scientist in the sense of his ability to forensically observe the world and try to make sense of it. His art was part of his method in exploring the world: be it the sinews of the human body or birds in flight, art and science were indivisible.
Since my retirement I have started to take up painting seriously. At school I chose science over art, and over the years I dabbled in painting but never quite made progress. Now, under the watchful eye of a great teacher, Alison Vickery, I feel I am beginning to find a voice. What she tells me hasn’t really changed, but I am finally hearing her. ‘Observe the scene, more than look at the paper’; ‘Experiment and don’t be afraid of accidents, because often they are happy ones’; the list of helpful aphorisms never leaves me.
A palette knife loaded with pigment scraped across a surface can give just the right level of variegation if not too wet and not too dry; there is a kind of science to it. The effect is to produce a kind of complexity that the human eye seems to be drawn to: imperfect symmetries of the kind we find alluring in nature, even while in mathematics we seek perfection.
Scientists and artists share many attributes.
At the meeting hosted by Kettle’s Yard, there was a discussion on what was common between artists and scientists. My list adapts what was said on the day:
a curiosity and playfulness in exploring the world around them;
ability to acutely observe the world;
a fascination with patterns;
not afraid of failure;
dedication to keep going;
searching for truth;
deep respect for the accumulated knowledge and tools of their ‘art’;
ability to experiment with new methods or innovative ways of using old methods.
How then are art and science different?
Well, of course, the key reason is that they are asking different questions and seeking different kinds of answers.
In art, the question is often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’, and ‘How do I express this in a way that is interesting and compelling?’. If I see a tree, I see the sinews of the trunk and branches, and how the dappled light reveals fragmentary hints as to the form of the tree. I observe the patterns of dark and light in the canopy. A true rendering of colour is of secondary interest (this is not a photograph), except in as much as it helps reveal the complexity of the tree: making different greens by playing with mixtures of two yellows and two blues offers an infinity of greens, which is much more interesting than having tubes of green paint (I hardly ever buy green).
Artists do not have definite answers to unambiguous questions. It is OK for me to argue that J M W Turner was the greatest painter of all time, even while my friend vehemently disagrees. When I look at a painting (or sculpture, or film) and feel an emotional response, there is no need to explain it; even though we often seem obliged to put words to emotions, we know these are mere approximations.
In science (or experimental science at least), we ask specific questions, which can be articulated as a hypothesis that challenges the boundaries of our knowledge. We can then design experiments to test the hypothesis, and if we are successful (in the 1% of times that maybe we are lucky), we will have advanced the knowledge of our subject. Most times this is an incremental learning, building on a body of knowledge. Other times, we may need to break something down before building it up again (but unlike the caricature of science often seen on TV, science is rarely about tearing down a whole field of knowledge, and starting from scratch).
When I see the tree, I ask, why are the leaves of Copper Beech trees deep purple in colour rather than green? Are the energy levels in the chlorophyll molecule somehow changed to produce a different colour or is a different molecule involved?
In science, the objective is to find definite answers to definite questions. That is not to say that the definite answer is in itself a complete answer to all the questions we have. When Schrödinger asked the question ‘What is Life?’ the role and structure of DNA were not known, but there were questions that he could ask and find answers to. This is the wonder of science; this stepping-stone quality.
I may find the answer as to why the Copper Beech tree’s leaves are not green, but what of the interesting question of why leaves change colour in autumn and how they change, not from one state (green) to another (brown), but through a complex process that reveals variegations of colour as Autumn unfolds? And what of a forest? How does a mature forest evolve from an immature one; how do pioneer trees give way to a complex ecology of varyingly aged trees and species over time? A leaf begs a question, and a forest may end up being the answer to a bigger question. Maybe we find that art, literature and science are in fact happy bedfellows after all.
As Feynman said, I can be both fascinated by something in the natural world (such as a rainbow) while at the same time seeking a scientific understanding of the phenomenon.
Nevertheless, it seems that while artists and scientists have so much in common, their framings struggle to align, and that in a way is a good thing.
There is great work done in the illustration of scientific ideas, in textbooks and increasingly in scientific papers. I saw a recent paper on the impact of changes to the stratospheric polar vortex on climate, which was beautifully illustrated. But this is illustration, intended to help articulate those definite questions and answers. It is not art.
So what is the purpose of bringing artists into laboratories to inspire them; to get their response to the work being done there?
The answer, as they say, is on the tin (of this Gurdon Institute collaborative project): It is an experiment.
The hypothesis is that if you take three talented and curious young artists and show them some leading edge science that touches on diverse subjects, good things happen. Art happens.
Based on the short preview of the work being done which I attended, good things are already happening and I am excited to see how the collaboration evolves.
Here are some questions that the discussion inspired in my mind:
How do we understand the patterns in form in the ways that Turing wrote about, based on the latest research? Can we explore ‘emergence of form’ as a topic that is interesting, artistically and scientifically?
In the world of RNA epigenetics, can what was previously dismissed as ‘junk DNA’ play a part in the life of creatures, even humans, in the environment they live in? Can we explore the deep history of our shared genotype, even given our divergent phenotypes? Will the worm teach us how to live better with our environment?
Our identity is formed by memory, and as we get older we begin to lose our ability to make new memories, while older ones often, but not always, stay fast. Surely here there is a rich vein for exploring the artistic and scientific responses to diseases like Alzheimer’s?
Scientists are dedicated and passionate about their work, like artists. A joint curiosity drives this new collaborative Gurdon Institute project.
The big question for me is this: can art reveal to scientists new questions, or new framings of old questions, that will advance the science in novel ways? Can unexpected connections be revealed or collaborations be inspired?
I certainly hope so.
P.S. the others in my troop did get to do the house visit after all, and it was wonderful, I hear. I missed it because I was too busy chatting to the scientists and artists after the panel discussion; and I am so grateful to have spent time with them.
Normally, as with 9/11, a conspiracy theory involves convoluted chains of reasoning so tortuous that it can take a while to determine how the conjuring trick was done: where the lie was implanted. But often, the anatomy of a conspiracy theory takes the following basic form:
Part 1 is a plausible but flawed technical claim that aims to refute an official account, and provides the starting point for Part 2, which is a multi-threaded stream of whataboutery. To connect Parts 1 and 2, a sleight of hand is performed. This is the anatomy of a basic conspiracy theory.
I have been thinking about this because a relative of mine asked me for my opinion about a video that turns out to be a good case study in this form of conspiracy theory. It was a video posted by a Dr Chris Busby relating to the nerve gas used to poison the Skripals:
So, against my better judgment, I sat through the video.
Dr Busby, who initially comes across as quite affable, proceeds to outline his experience at length. He says he was employed at the Wellcome Research Laboratories in Beckenham (see Note 1), where he worked, in his words,
“… on the physical chemistry of pharmaceutical compounds or small organic compounds”, and he used “spectroscopic and other methods to determine the structure of these substances, as they were made by the chemists”.
I have no reason to doubt his background, but equally have not attempted to verify it either; in any case, this is immaterial because I judge people on their arguments not their qualifications.
I want to pass over Busby’s first claim – that a state actor was not necessarily involved because (in his view):
“any synthetic organic chemist could knock up something like that without a lot of difficulty”
… which is questionable, but is not the main focus of this post. I do have a few observations on this subsidiary claim in Note 2.
He explains correctly that a mass spectrum (let’s abbreviate this as ‘spectrum’ in what follows) is a pattern of the masses of the ionised fragments created when a substance passes through the instrument. This pattern is characteristic of the molecule under investigation.
So a spectrum “identifies a material”. So far, so good.
He now makes his plausible but flawed technical claim. I don’t want to call it a lie, because I will assume Dr Busby made it in good faith, but it does undermine his claim to be an ‘expert’. It is contained in the following statement:
“… but in order to do that, you need to have a sample of the material, you need to have synthesized the material”
In brief we can summarise the claim as follows: In order for you to identify a substance, you need to have synthesised it.
Curiously, later in the video he says that the USA manufactured the A-234 strain that is allegedly involved (see Note 3) and put the spectrum on the NIST database, but then later took it down.
It does not occur to Dr Busby that Porton Down could have taken a copy of data from NIST before it was removed and used that as the reference spectrum, thereby blowing a huge hole in Busby’s chain of logic (also, see Note 4).
But there is a more fundamental reason why the claim is erroneous even if the data had never existed.
One of the whole points of having a technique like mass spectrometry is precisely to help researchers determine the structures of unknown substances, particularly in trace quantities where other structural techniques cannot be used (see Note 5).
To show you why the claim is erroneous, here is an example of a chemistry lecturer taking his students through the process of analysing the spectrum of a substance, in order to establish its structure (Credit: Identify a reasonable structure for the pictured mass spectrum of an unknown sample, Professor Heath’s Chemistry Channel, 6th October 2016).
This method uses knowledge of chemistry, logic and arithmetic to ‘reverse engineer’ the chemical structure, based on the masses of the fragments:
Now it is true that with a library of spectra for known substances, the analysis is greatly accelerated, because we can then compare a sample’s spectrum with those in the library. This might be called ‘routine diagnostic mass spectrometry’.
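As a sketch of what such a library comparison involves, here is a minimal Python illustration. The spectra, peak values and compound names are invented for the purpose (real search software is far more sophisticated), but the cosine similarity score used here is a standard measure in spectral matching:

```python
import math

# Hypothetical mini-library of reference spectra: each maps a fragment
# m/z value to a relative intensity (all values are illustrative only).
LIBRARY = {
    "toluene":      {91: 100, 92: 62, 65: 12, 39: 10},
    "benzaldehyde": {77: 90, 106: 100, 105: 95, 51: 30},
}

def cosine_score(a, b):
    """Cosine similarity between two sparse spectra (dicts of m/z -> intensity)."""
    peaks = set(a) | set(b)
    dot = sum(a.get(p, 0) * b.get(p, 0) for p in peaks)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def best_match(sample):
    """Return the library entry whose reference spectrum best matches the sample."""
    return max(LIBRARY, key=lambda name: cosine_score(sample, LIBRARY[name]))

unknown = {91: 98, 92: 60, 65: 14, 39: 9}  # closely resembles the toluene entry
print(best_match(unknown))  # prints: toluene
```

The point of the sketch is simply that the library route is a shortcut for *known* substances; when no library entry matches, the analyst falls back on reasoning from the fragment masses themselves, as in the lecture example above.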
He talked about having done a lot of work on pharmaceuticals that had been synthesised “in Spain or in India”; clearly here the mode of application would have been the comparison of known molecules manufactured by Wellcome with samples retrieved from other sources – possibly in attempts to break a patent – samples that give away their origin through their impurities (see Note 6).
It then struck me that he must have spent so much time doing this routine diagnostic mass spectrometry that he now presents it as the only way in which the technique can be used to identify a substance.
He seems to have forgotten the more general use of the method by scientists.
This flawed assumption leads to the scientific and logical chain of reasoning used by Dr Busby in this video.
The sleight of hand arrives when he uses the phrase ‘false flag’ at 6’55” into a 10’19” video.
The chain of logic has been constructed to lead the viewer to this point. Dr Busby was in effect saying ‘to test for the agent, you need to have made it; if you can make it, maybe it got out; and maybe the UK (or US) was responsible for using it!’.
This is an outrageous claim but he avoids directly accusing the UK or US Governments; and this is the sleight of hand. He leaves the viewer to fill in the gap.
This then paves the way for Part 2 of his conspiracy theory which now begins in earnest on the video. He cranks up the rhetoric and offers up an anti-American diatribe, full of conspiracy ideation.
He concludes the video as follows:
“There’s no way there’s any proof that that material that poisoned the Skripals came from Russia. That’s the take home message”
On the contrary, the message I took away is that it is sad that an ex-scientist is bending and abusing scientific knowledge to concoct conspiracy theories, to advance his political dogma, and helping to magnify the Kremlin’s whataboutery.
Now, Dr Busby might well respond by saying “but you haven’t proved the Russians did it!”. No, but I would reply ‘you haven’t proved that they didn’t, and as things stand, it is clear that they are the prime suspect’; ask any police inspector how they would assess the situation.
My purpose here was not to prove anything, but to discuss the anatomy of conspiracy theories in general, and debunk this one in particular.
But I do want to highlight one additional point: those who are apologists for the Russian state will demand 100% proof that the Russians did it, yet are lazily accepting of weak arguments – including Dr Busby’s video – that attempt to point the finger at the UK or US Governments. This is, at the very least, a double standard.
By all means present your political views and theories on world politics, Dr Busby – the UK is a country where we can express our opinions freely – but please don’t dress them up with flawed scientific reasoning masquerading as scientific expertise.
Hunting down a plausible but flawed technical claim is not always as easy as in the case study above, but remember the anatomy, because it is usually easy to spot the sleight of hand that then connects with the main body of a conspiracy theory.
We all need to be inoculated against this kind of conspiracy ideation, and I hope my dissection of this example is helpful to people.
Note 1: The Wellcome Research Laboratories in Beckenham closed in 1995, when the merged company Glaxo Wellcome was formed; after further mergers this became part of the current leading global pharmaceutical company, GSK.
Note 2: Busby’s first claim is that the nerve agent identified by Porton Down is a simple organic compound and therefore easy for a chemist to synthesise. Gary Aitkenhead, the chief executive of the government’s Defence Science and Technology Laboratory (DSTL) said on Sky News (here reported in The Guardian)
“It’s a military-grade nerve agent, which requires extremely sophisticated methods in order to create – something that’s probably only within the capabilities of a state actor.”
But the difficulty of synthesising a molecule is not simply based on the number of atoms in the molecule, but rather on the synthetic pathway and, in the case of a nerve agent, the practical difficulties involved in making the stuff in a safe environment, then preparing it in some ‘weaponized’ formulation.
Vil Mirzayanov, a chemist who worked on Novichok, has said that this process is extremely difficult. Dr Busby thinks he knows better, but not being a synthetic chemist (remember, he had chemists making the samples he analysed), he cannot claim expertise on the ease or difficulty of nerve agent synthesis.
The UK position is that the extremely pure nature of the samples found in Salisbury point to a state actor. Most of us, and I would include Dr Busby, without experience of the synthesis of the nerve agent in question and its formulation as a weapon, cannot really comment with authority on this question.
Simply saying it is a simple molecule really doesn’t stand up as an argument.
Note 3: While the Russian Ambassador to the UK claims that the strain is A-234, neither the UK Government, nor Porton Down, nor the OPCW have stated which strain was used, and so the question regarding what strain or strains the USA might or might not have synthesized, is pure speculation.
Note 4: He says that if the USA synthesised it (the strain of nerve agent assumed to have been used), then it is possible that Porton Down did so as well. I am not arguing this point either way. The point of this post is to challenge what Dr Busby presents as an unassailable chain of logic, but which is nothing of the sort.
Note 5: There are many other techniques used in general for structural work, but not all are applicable in every situation. For large complex biological molecules, X-Ray Crystallography has been very successful, and more recently CryoEM has matured to the point where it is taking over this role. Neither could have been used in the case of trace quantities of a nerve agent.
Note 6: He also talks about impurities that can show up in a spectrum and using these as a way to identify a laboratory of origin (in relation to his pharmaceuticals experience), but this is a separate argument, which is irrelevant if the sample is of high purity, which is what OPCW confirmed in relation to the nerve gas found in Salisbury.
The Information Commissioner’s Office (ICO) won’t find any retained data at Cambridge Analytica (CA) gleaned from Facebook users. They might even find proof it was deleted in a timely manner.
So, would that mean CA did not provide an assist to the Trump campaign? No.
Because the analysis of all that data would have been used to provide knowledge and insight into which buttons to push in the minds of voters, and crucially, in which States this would be most effective.
At that point you can delete all the source Facebook data.
The knowledge and insight would have powered a broad spectrum campaign using good old fashioned media channels and social media. At this point, it is not micro-targeting, but throwing mud knowing it will stick where it matters.
Maybe the focus on micro-targeting is a smokescreen, because if the ICO don’t find retained data, then CA can say “see, we are innocent of all charges of interference”, when in fact the truth could be quite the opposite.
It is important the ICO, Select Committees in the UK Parliament and, when they get their act together, committees on Capitol Hill, ask the right questions, and do not succumb to smokescreens.
We all know that there is no such thing as a free lunch, don’t we?
Except when we get the next offer of a free lunch. It’ll be different this time, because they are so nice and well, what could go wrong?
The Facebook offer was always the offer of a free lunch. No need to pay anything for your account; just share and share alike.
In fact, the encouragement to be as open and sharing as possible was made easier by the byzantine complexity of the access controls (intended to allow people to be more private). It never occurred to Facebook that humans have complex lives, where one’s family friends are a non-overlapping set from one’s tennis club friends, or one’s ‘stop the fracking’ friends!
No, there is a binary reductionism to the happy-clappy ‘the world is my friend’ dogma of social media, of which Facebook is the prime archetype.
Of course, the business model was always to monetise our connectivity. We view a few pages on artist materials, and suddenly we are deluged by adverts for artist materials. Basic stuff you might say, and often it is; small-minded big data. But it feels like, and is, an intrusion. Facebook wants to take business away from WPP and the rest, and uses the social desire to connect as the vehicle for gaining a better insight into our lives than traditional marketing can achieve. Why did Facebook not make this clear to people from the start?
The joke was always that marketing companies know that 50% of their spending is wasted but don’t know which parts make up that 50%.
Facebook will now say that they know.
Don’t get me wrong, I love Facebook, because it reunited me with a long lost ‘other’ family. That is another story but I am eternally grateful to Facebook for making that connection. It also provides the town I live in the ability to connect over local issues. It can be a force for good.
But the most egregious issue that Facebook is now facing (and seem in denial about) is that the bill for the lunch is now proving to be exceptionally high indeed.
If Facebook data effectively helped Cambridge Analytica give the Trump and Brexit campaigns even a marginal assist – as is now alleged – that could have been crucial, as both were won by narrow margins.
We cannot go back to a pre-digital world.
We need trust in institutions and in what will happen to our data, and not just the snaps we took of the new kitten playing on the sofa. We want the benefits that combining genomics and clinical data will bring in revolutionising medicine. We want to develop ground-up social enterprises to address issues like climate change. We need to be able to move beyond primitive cloudscum fileshares or private storage devices to a truly trusted, long-term repository for personal data; guaranteed to a level no less than a National Archive.
There are many reasons we need community governed, rigorously audited and regulated data, to help in many aspects of our personal lives, social enterprises, and as safe places for retention of knowledge and cultural assets in the digital world.
Even without the Cambridge Analytica scandal, the geek-driven models of Facebook, Google and the rest betray a level of naivety and lack of insight into this challenge which is breathtaking.
Call it Web 4.0 or choose a higher number if you like.
But what this episode proves is that the current generation of social media is barely a rough draft on what society needs in the digital world of the 21st Century.
If you were to rank the most important Figures from the IPCC Fifth Assessment Report, I would not be surprised if the following one (SPM.10) emerged as a strong candidate for the number one slot:
This is how the Figure appears in the main report, on page 28 (in the Summary for Policymakers) of The Physical Basis Report (see References: IPCC, 2013). The Synthesis Report includes a similar figure with additional annotations.
Many have used it in talks because of its fundamental importance (for example, Sir David King in his Walker Institute Annual Lecture (10th June 2015), ahead of COP21 in Paris). I have followed this lead, and am sure that I am not alone.
This Figure shows an approximately linear [1] relationship between the cumulative carbon dioxide we emit [2] and the rise in global average surface temperature [3] up to 2100. It was crucial to discussions on carbon budgets held in Paris and the goal of stabilising the climate.
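The near-linearity is often summarised as a single coefficient, the Transient Climate Response to cumulative Emissions (TCRE), for which AR5 gives a likely range of roughly 0.8 to 2.5°C per 1000 GtC. The back-of-envelope sketch below uses an assumed mid-range value of my own choosing, not an IPCC-endorsed number:

```python
# Rough check of the near-linear relation in Figure SPM.10 using TCRE.
# The TCRE value is an assumed mid-range figure (AR5 likely range is
# roughly 0.8-2.5 degC per 1000 GtC of cumulative carbon emissions).

TCRE_PER_GTC = 1.6e-3    # degC of warming per GtC of cumulative emissions
GTC_PER_GTCO2 = 12 / 44  # convert a mass of CO2 to a mass of carbon

def warming(cumulative_gtco2):
    """Approximate warming (degC) above the 1861-1880 baseline."""
    return TCRE_PER_GTC * GTC_PER_GTCO2 * cumulative_gtco2

# ~2000 GtCO2 emitted to date comes out close to the ~1 degC observed.
print(round(warming(2000), 2))  # prints 0.87
```

Under this linear picture, doubling cumulative emissions simply doubles the expected warming, which is exactly what makes the ‘carbon budget’ framing possible.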
I am not proposing animating this Figure in the way discussed in my previous essay, but I do think its importance warrants additional attention to get it out there to a wider audience (beyond the usual climate geeks!).
So my question is:
“Does it warrant some kind of pedagogic treatment for a general audience (and dare I say, for policy-makers who may themselves struggle with the density of information conveyed)?”
My answer is yes, and I believe that the IPCC, as guardians of the integrity of the report findings, are best placed to lead such an effort, albeit supported by specialist science communication skills.
The IPCC should not leave it to bloggers and other commentators to furnish such content, as key Figures such as this are fundamental to the report’s findings, and need to be as widely understood as possible.
While I am conscious of Tufte’s wariness regarding PowerPoint, I think that the ‘build’ technique – when used well – can be extremely useful in unfolding the information in biteable chunks. This is what I have tried to do with the above Figure in a recent talk. I thought I would share my draft attempt.
It can obviously do with more work, and the annotations represent my emphasis and use of language [4]. Nevertheless, I believe I was able to truthfully convey the key information from the original IPCC Figure more successfully than I have before; taking the audience with me, rather than scaring them off.
So here goes, taken from a segment of my talk … my narrative, to accompany the ‘builds’, is in italics …
Where are we now?
“There is a key question: what is the relationship between the peak atmospheric concentration and the level of warming, compared to a late 19th century baseline, that will result, by the end of the 21st century?”
“Let’s start with seeing where we are now, which is marked by a X in the Figure below.”
“Our cumulative man-made emissions of carbon dioxide (CO2) have to date been nearly 2000 billion tonnes (top scale above)”
“After noting that 50% of this remains in the atmosphere, this has given rise to an increase in the atmospheric concentration from its long-standing pre-industrial value of 280 parts per million to its current value, which is now about 400 parts per million (bottom scale above).”
“This in turn has led to an increase in averaged global surface temperature of 1°C above the baseline of 1861 to 1880 (vertical scale above).”
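The arithmetic in this part of the narration can be checked in a few lines. The round-number conversion factor (about 7.8 GtCO2 of emissions per 1 ppm of atmospheric CO2) and the 50% airborne fraction are the assumptions here:

```python
# Sanity check of the narrated numbers: ~2000 GtCO2 emitted to date,
# about half of it staying airborne, starting from 280 ppm pre-industrial.
# Both constants below are round-number assumptions for illustration.

GTCO2_PER_PPM = 7.8      # emissions needed to raise atmospheric CO2 by 1 ppm
AIRBORNE_FRACTION = 0.5  # share of emitted CO2 remaining in the atmosphere

def concentration_ppm(cumulative_gtco2, preindustrial_ppm=280.0):
    """Approximate atmospheric CO2 concentration after given cumulative emissions."""
    airborne_gtco2 = AIRBORNE_FRACTION * cumulative_gtco2
    return preindustrial_ppm + airborne_gtco2 / GTCO2_PER_PPM

print(round(concentration_ppm(2000)))  # prints 408, close to the ~400 ppm quoted
```

So the three scales of the Figure (cumulative emissions, concentration, temperature) are mutually consistent to within the rounding used in a talk.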
Where might we be in 2100?
“As we add additional carbon dioxide, the temperature will rise broadly in proportion to the increased concentration in the atmosphere. There is some uncertainty between “best case” and “worst case” margins of error (shown by the dashed lines).”
“By the end of the century, depending on how much we emit and allowing for uncertainties, we can end up anywhere within the grey area shown here. The question marks (“?”) illustrate where we might be by 2100.”
Can we stay below 2C?
“The most optimistic scenario included in the IPCC’s Fifth Assessment Report (AR5) was based on the assumption of a rapid reduction in emissions, and a growing role for the artificial capture of carbon dioxide from the atmosphere (using a technology called BECCS).”
“This optimistic scenario would meet the target agreed by the nations in Paris, which is to limit the temperature rise to 2°C.”
“We effectively have a ‘carbon budget’: an amount of fossil fuels that can be burned for us to stay below 2°C”.
“The longer we delay dramatically reducing emissions, the faster the drop would need to be in our emissions later, as we approach the end of the ‘carbon budget’.”
“Some argue that we are already beyond the point where we can realistically move fast enough to make this transition.”
“Generally, experts agree it is extremely challenging, but still not impossible.”
Where will we be in 2100? – Paris Commitments
“The nationally determined contributions (or NDCs) – the amounts by which carbon dioxide emissions will fall – that the parties to the Paris Agreement put forward have been totted up and they would, if implemented fully, bring us to a temperature rise of between 2.5 and 3.5°C (and an atmospheric concentration about twice that of pre-industrial levels).”
“Now, the nations are committed to increase their ‘ambition’, so we expect that NDCs should get better, but it is deeply concerning that at present, the nations’ current targets are (1) not keeping us unambiguously clear of catastrophe, and (2) struggling to be met. More ambition, and crucially more achievement, is urgent.”
“I have indicated the orange scenarios as “globally severe”, but for many regions “catastrophic” (but some, for example, Xu and Ramanathan [5], would use the term “Catastrophic” for any warming over 3°C, and “Unknown” for warming above 5°C). The IPCC are much more conservative in the language they use.”
Where will we be in 2100? – Business As Usual Scenario
“The so-called ‘business as usual’ scenario represents on-going use of fossil fuels, continuing to meet the majority of our energy needs, in a world with an increasing population and increasing GDP per capita, and consequently a continuing growth in CO2 emissions.”
”This takes global warming to an exceptionally bad place, with a (globally averaged) temperature rise of between 4 and 6°C; where atmospheric concentrations will have risen to between 2.5 and 3 times the pre-industrial levels.”
“The red indicates that this is globally catastrophic.”
“If we go above 5°C warming we move, according to Xu and Ramanathan, from a “catastrophic” regime to an “unknown” one. I have not tried to indicate this extended vocabulary on the diagram, but what is clear is that the ‘business as usual’ scenario is really not an option, if we are paying attention to what the science is telling us.”
That’s it. My draft attempt to convey the substance and importance of Figure SPM.10, which I have tried to do faithfully; albeit adding the adjectives “optimistic” etc. to characterise the scenarios.
I am sure the IPCC could do a much better job than me at providing a more accessible presentation of Figure SPM.10 and indeed, a number of high ranking Figures from their reports, that deserve and need a broader audience.
1. The linearity of this relationship was originally discussed in Myles Allen et al (2009), and this and other work has been incorporated in the IPCC reports. Also see Technical Note A below.
2. About half of which remains in the atmosphere, for a very long time.
3. Eventually, after the planet reaches a new equilibrium, a long time in the future. Also see Technical Note B below.
4. There are different opinions on what language to use – ‘dangerous’, ‘catastrophic’, etc. – and at what levels of warming to apply this language. The IPCC is conservative in its use of language, as is customary in the scientific literature. Some would argue that in wanting to avoid the charge of being alarmist, it is in danger of obscuring the seriousness of the risks faced. In my graphics I have tried to remain reasonably conservative in the use of language, because I believe things are serious enough; even when a conservative approach is taken.
5. In a recent paper in the Proceedings of the National Academy of Sciences, two climate scientists – Yangyang Xu, of Texas A&M, and Veerabhadran Ramanathan, of the Scripps Institution of Oceanography – proposed that warming greater than three degrees Celsius be designated as “catastrophic” and warming greater than five degrees as “unknown??”. The “unknown??” designation, they wrote, comes “with the understanding that changes of this magnitude, not experienced in the last 20+ million years, pose existential threats to a majority of the population.”
References
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881pp.
Myles Allen et al (2009), “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163–1166
Kirsten Zickfeld et al (2016), “On the proportionality between global temperature change and cumulative CO2 emissions during periods of net negative CO2 emissions”, Environ. Res. Lett. 11 055006
Technical Notes
A. Logarithmic relationship?
For those who know about the logarithmic relationship between added CO2 concentration and the ‘radiative forcing’ (giving rise to warming) – and many well meaning contrarians seem to take succour from this fact – the linear relationship in this figure may at first sight seem surprising.
The relative warming (between one level of emissions and another) is related to the ratio of this logarithmic function, and that is approximately linear over the concentration range of interest.
In any case, it is worth noting that CO2 concentrations have been increasing exponentially, and a logarithm of an exponential function is a linear function.
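That last point is simple calculus, and it holds numerically too. Here is a minimal sketch using the widely cited Myhre et al. (1998) approximation for CO2 forcing, ΔF ≈ 5.35 ln(C/C₀) W/m²; the exponential growth rate below is purely illustrative, not a fitted value:

```python
import math

def forcing(c, c0=278.0):
    """Approximate CO2 radiative forcing in W/m^2 (Myhre et al. 1998): dF = 5.35 ln(C/C0)."""
    return 5.35 * math.log(c / c0)

# If concentration grows exponentially, C(t) = C0 * exp(k*t),
# then forcing(C(t)) = 5.35 * k * t  -- exactly linear in time.
k = 0.005  # illustrative fractional growth rate per year
years = [0, 50, 100, 150]
f = [forcing(278.0 * math.exp(k * t)) for t in years]
steps = [round(b - a, 4) for a, b in zip(f, f[1:])]
print(steps)  # equal increments: the log of an exponential is linear
```

The equal step sizes show why a logarithmic forcing law offers contrarians no comfort while emissions keep concentrations growing roughly exponentially.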
There is on-going work on wider questions: for example, to what extent can ‘negative emissions technology’ counteract warming that is already in the pipeline?
Kirsten Zickfeld et al (2016) is one such paper, which “…[suggests that] positive CO2 emissions are more effective at warming than negative emissions are at subsequently cooling”. So we need to be very careful in assuming we can reverse warming that is in the pipeline.
B. Transient Climate Response and Additional Warming Commitment
The ‘Transient Climate Response’ (TCR) reflects the warming that results when CO2 is added at 1% per year, which for a doubling of the concentration takes about 70 years. This is illustrated quite well in a figure from a previous report (IPCC, 2001).
The warming that results from this additional concentration of CO2 occurs over the same time frame. However, this does not include all the warming that will eventually result, because the earth system (principally the oceans and atmosphere) will take a long time to reach a new equilibrium in which all the flows of energy are brought back into a (new) balance. This will take at least 200 years (for lower emission scenarios) or much longer for higher emission levels. This additional warming commitment must be added to the TCR. Nevertheless, the TCR represents perhaps 70% of the overall warming, and remains a useful measure when discussing policy options over the 21st Century.
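The 70-year doubling figure is simple compound-growth arithmetic, and is easy to verify:

```python
import math

# Compounding at 1% per year: concentration after n years is C0 * 1.01**n.
# The doubling time n satisfies 1.01**n = 2, i.e. n = ln(2) / ln(1.01).
n = math.log(2) / math.log(1.01)
print(round(n, 1))  # ~69.7 years, usually quoted as "about 70 years"
```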
This discussion excludes more uncertain and much longer term feedbacks involving, for example, changes to the polar ice sheets (and consequentially, the Earth’s albedo), release of methane from northern latitudes or methane clathrates from the oceans. These are not part of the ‘additional warming commitment’, even in the IPCC 2013 report, as they are considered too speculative and uncertain to be quantified.
The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general audience. They are not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes), produced some guidelines a while ago on better use of language and diagrams.
The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.
Much of this guidance concerns the ‘soft’ aspects of communication: how the communicator views themself; understanding the audience; building trust; coping with uncertainty; etc.
Some of this reflects ideas that are useful not just to scientific communication, but almost any technical presentation in any sector, but that does not diminish its importance.
This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.
I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.
In many years of being involved in presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communication potential, but also the risks, of diagrams. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!
The unwritten rules relating to the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:
“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”
He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.
Tufte has been highly critical of the over-reliance on PowerPoint for technical reporting (not just presentations) in NASA, because the form of the content degrades the narrative that should be an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.
There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.
The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.
It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, etc. It might seem difficult to condense this material without loss of important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1500 pages long.
Nevertheless, the IPCC endeavours to help policy-makers by providing them with summaries and also a synthesis report, to provide the essential underlying knowledge that policy-makers need to inform their discussions on actions in response to the science.
However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.
This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.
So yes, it can be problematic to use artistic flair to improve the comprehensibility of the findings, but risk losing the nuance and caution that is a hallmark of science. The countervailing risk is that people do not really ‘get it’; and do not appreciate what they are seeing.
We have seen with the Challenger reports that people did not appreciate the issue with the O-rings, especially when key facts were buried five levels deep in indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings were lost in a fog of complexity.
That is why any attempt to improve the summaries for policy makers and the general public must continue to involve those who are responsible for the overall integrity and consistency of the different products, not simply hived off to a separate group of ‘creatives’ who would lack knowledge and insight of the nuance that needs to be respected. But those complementary skills – data visualizers, graphics artists, and others – need to be included in this effort to improve science communications. There is also a need for those able to critically evaluate the pedagogic value of the output (along the lines of Tufte), to ensure they really inform, and do not confuse.
Some individuals have taken to social media to present their own examples of how to present information, often employing animation (something that is clearly not possible for the printed page, or its digital analogue, a PDF document). Perhaps the most well known example to date was Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:
There are now a number of other great producers of animations; here are a few examples.
Here, Kevin Pluck (@kevpluck) illustrates the link between rising carbon dioxide levels and the rising mean surface temperature since 1958 (the year when direct and continuous measurements of carbon dioxide were pioneered by Keeling).
Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.
Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.
A final example along the same lines is from John Kennedy (@micefearboggis), which is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar the higher the increase in temperature at that location, relative to the temperature baseline at that location (which scientists call the ‘anomaly’). This is why we see the greatest warming in the Arctic, as it is warming proportionally faster than the rest of the planet; this is one of the big takeaways from this animation.
These examples of animation are clearly not dumbing down the data, far from it. They improve the chances of the general public engaging with the data. This kind of animation of the data provides an entry point for those wanting to learn more. They can then move onto a narrative treatment, placing the animation in context, confident that they have grasped the essential information.
If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above that reveal the essential knowledge that needs to be communicated.
used to be the question everyone asked, but of course is an increasingly irrelevant question, in an ageing population.
But a question that should never age, and should stay with us forever, is
“When did you learn about the holocaust?”.
I remember when I first learned about the holocaust, and it remains seared into my consciousness, thanks to a passionate and dedicated teacher, Mr Cromie.
I was a young child at a boarding school Stouts Hill Preparatory School, in the little village of Uley in Gloucestershire. The school no longer exists but that memory never fades. You cannot ‘unlearn’ something like that.
I was no more than 12 at the time, so this would have been 1965 or earlier, and our teacher told us about the mass murder of the Jews in Nazi Germany, but with a sense of anger and resentment at the injustice of this monstrous episode in history. And it has often occurred to me since that the peak of this programme of murder was just 10 years before I was born.
But what did I learn and what did I remember? I learned about the gas chambers, and the burning of bodies, but it was all a kind of vague memory of an atrocity, difficult to properly make sense of at that age.
What we did not really learn was the process by which a civilised country like Germany could turn from being at the centre of European culture to a murderous genocidal regime in just a decade.
For British viewers, this story of inhumanity was often framed through the lens of Bergen-Belsen, because it was the Brits that liberated this Concentration Camp, and the influential Richard Dimbleby was there to deliver his sonorous commentary on the horrors of the skeletal survivors and piles of corpses.
But it is curious how this story is still the reflex image that many Britons have of the holocaust, and I have often wondered why. The Conversation tried to provide an answer:
“But even though many, if not most, of those involved in the rescue and relief effort were aware of the fact that Jews made up the largest number of the victims, the evolving official British narrative sidestepped this issue. The liberation of Bergen-Belsen became separated from what the people held in this camp had had to endure, and why they had been incarcerated in the first place.
Instead, the liberation of Bergen-Belsen was transformed into a British triumph over “evil”. The event was used to confirm to the wider British public that the British Army had fought a morally and ethically justified war, that all the personal and collective sacrifices made to win the war had now been vindicated. Bergen-Belsen gave sense and meaning to the British military campaign against Nazi Germany and the Allied demand for an unconditional surrender. The liberation of the camp became Britain’s finest hour.”
Each country, each culture, and each person, constructs their own narrative to try to make sense of the horror.
But despite the horror of Bergen-Belsen, and the 35,000 who died there, it is barely a footnote in the industrialised murder campaign that the Nazi leadership planned and executed.
While most people are vaguely aware of a figure of several million Jews and others dying, they are rather less aware of the distinction between Concentration Camps and Death Camps (also known as Extermination Camps).
“Many of the prisoners died in the concentration camps due to deliberate maltreatment, disease, starvation, and overwork, or they were executed as unfit for labor. Prisoners were transported in inhumane conditions by rail freight cars, in which many died before reaching their final destination. The prisoners were confined in the boxcars for days or even weeks, with little or no food or water. Many died of dehydration in the intense heat of summer or froze to death in winter. Concentration camps also existed in Germany itself, and while they were not specifically designed for systematic extermination, many of their inmates perished because of harsh conditions or they were executed.”
The death camps at Chełmno, Treblinka, Sobibór and Belzec were designed purely as places of murder. It is not simply about the arithmetic of the holocaust. After all, the death squads and related actions in the east accounted for 2.5 million murders, and the death camps over 3 million. But it is the sheer refinement of the industrialization of murder at the Extermination Camps that is difficult to comprehend:
“Visitors to the sites of Belzec, Sobibor and Treblinka (of whom there are far, far fewer than travel to Auschwitz) are shocked by how tiny these killing camps were. A total of around 1.7 million people were murdered in these three camps – 600,000 more than the murder toll of Auschwitz – and yet all three could fit into the area of Auschwitz-Birkenau with room to spare. In a murder process that is an affront to human dignity at almost every level, one of the greatest affronts – and this may seem illogical unless you have actually been there – is that so many people were killed in such a small area.”
Auschwitz: The Nazis & The ‘Final Solution’ – Laurence Rees, BBC Books, 2005
Majdanek and Auschwitz also became Extermination Camps, but were dual purpose, also being used as Concentration Camps, so they had accommodation, bunks, and so forth that were not needed in the small camps designed purely for murder.
It is helpful to those who deny the holocaust or its full horror that Belzec, Sobibor and Treblinka have not entered into the public imagination in the way that Auschwitz has. Being dual use it is then easier to play on this apparent ambiguity, to construct a denial narrative along the lines of: many died from hard labour, it was not systematic murder.
And of course, not knowing about Belzec, Sobibor, Treblinka and Chełmno is a lot easier than knowing, because they expose the full, unadulterated horror.
Remember that the Final Solution came after a decade of murderous projects – the death squads in the east, the euthanasia programmes, and early experiments with gassing – which led to the final horror of the Extermination Camps.
You can never stop learning, because you will never hear all the details, read all the books, or hear all the testimonies.
But if you ever find yourself not feeling deeply uncomfortable (as well as deeply moved) by the horrors of the Holocaust, then it is time to not turn away. To take another look.
For us today, the most important lesson is that it is possible for even a sophisticated and educated country to succumb to a warped philosophy that blames the ‘other’ for problems in society, and to progressively desensitize the people to greater and greater levels of dehumanisation.
While nothing on the scale of the holocaust has occurred again, can we be confident that it never could? When we see what has happened under Pol Pot, or in Srebrenica, or in Rwanda, we know that the capacity of people to dehumanise ‘others’ for reasons of ethnicity or politics, and to murder them in large numbers, has not gone away.
The price of freedom, and decency in a society, is eternal vigilance.
Calling out hate speech is therefore, in a small way, honouring the 6 million – the great majority of whom were Jews – who died in the holocaust. It is stamping out that first step in that process of dehumanisation that is the common precursor of all genocidal episodes in history. It is always lurking there, waiting to consume a society that is looking for simple answers, and for someone to blame.
Ridley trots out a combination of long-refuted myths that are much loved by contrarians; bad or crank science; or misunderstandings as to the current state of knowledge. In the absence of a Climate Feedback dissection of Ridley’s latest opinion piece, here is my response to some of his nonsense …
Here are five statements he makes that I will refute in turn.
1. He says: Forty-five years ago a run of cold winters caused a “global cooling” scare.
I say:
Stop repeating this myth, Matt! A few articles in popular magazines in the 70s speculated about an impending ice age, and so dissemblers like Ridley state or imply that this was the scientific consensus at the time (snarky message: silly scientists can’t make up their minds). This is nonsense, but it is so popular amongst contrarians that it is repeated frequently to this day.
Warming, not cooling was the greater concern. It is astonishing that Ridley and others continue to repeat this myth. Has he really been unable – in the ten years since it was published – to read this oft cited article and so disabuse himself of the myth? Or does he deliberately repeat it because he thinks his readers are too lazy or too dumb to check the facts? How arrogant would that be?
2. He says: Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true.
I say:
Oh dear, he cites the work of Zharkova, saying he is not persuaded, but then talks of ‘slowly slipping into a proper ice age’. A curious non sequitur. While we are on Zharkova, her work suffered from being poorly communicated.
But, let’s return to the ice age cycle. What Ridley obdurately refuses to acknowledge is that the current warming is occurring due to less than 200 years of man-made changes to the Earth’s atmosphere, which have raised CO2 to levels not seen for nearly 1 million years (spanning 10 ice age cycles) and are raising the global mean surface temperature at an unprecedented rate.
Therefore, talking about the long slow descent over thousands of years into an ice age that ought to be happening (based on the prior cycles) is frankly bizarre, especially given that the man-made warming is now very likely to delay a future ice age. As a paper by Ganopolski et al, Nature (2016) has estimated:
“Additionally, our analysis suggests that even in the absence of human perturbations no substantial build-up of ice sheets would occur within the next several thousand years and that the current interglacial would probably last for another 50,000 years. However, moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years.”
And why stop there, Matt? Our expanding sun will boil away the oceans in a billion years time, so why worry about Brexit; and don’t get me started on the heat death of the universe. It’s hopeless, so we might as well have a great hedonistic time and go to hell in a handcart! Ridiculous, yes, but no less so than Ridley conflating current man-made global warming with a far, far off ice age, that recedes with every year we fail to address man-made emissions of CO2.
3. He says: Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period, over 100,000 years ago, carbon dioxide levels remain high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward.
I say:
The ice ages have indeed been a focus of study since Louis Agassiz coined the term in 1837, and there have been many twists and turns in our understanding of them even up to the present day, but Ridley’s over-simplification shows his ignorance of the evolution of this understanding.
The Milankovitch Cycles are key triggers for entering an ice age (and indeed, leaving it), but the changes in atmospheric concentrations of carbon dioxide drive the cooling (entering) and warming (leaving) of an ice age, something that was finally accepted by the science community following Hays et al’s seminal 1976 paper (“Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”), over 50 years after Milankovitch first did his work.
But the ice core data that Ridley refers to confirms that carbon dioxide is the driver, or ‘control knob’, as Professor Richard Alley explains it; and if you need a very readable and scientifically literate history of our understanding of the ice cores and what they are telling us, his book “The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and Our Future” is a peerless, and unputdownable introduction.
Professor Alley offers an analogy. Suppose you take out a small loan, but then interest is added, and keeps being added, so that after some years you owe a lot of money. Was it the small loan, or the interest rate, that created the large debt? You might say both, but it is certainly ridiculous to say that the interest rate is unimportant because the small loan came first.
But despite its complexity, and despite the fact that the so-called ‘lag’ does not refute the dominant role of CO2, scientists are interested in explaining such details and have indeed studied the ‘lag’. In 2012, Shakun and others published a paper doing just that “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation”(Jeremy D. Shakun et al, Nature 484, 49–54, 5 April 2012). Since you may struggle to see a copy of this paywalled paper, a plain-English summary is available.
Those who read headlines and not contents – like the US Politician Joe Barton – might think this paper is challenging the dominant role of CO2, but the paper does not say that. This paper showed that some warming occurred prior to increased CO2, but this is explained as an interaction between Northern and Southern hemispheres, following the Milankovitch original ‘forcing’.
The role of the oceans is crucial in fully explaining the temperature record, and can add significant delays in reaching a new equilibrium. There are interactions between the oceans in Northern and Southern hemispheres that are implicated in some abrupt climate change events (e.g. “North Atlantic ocean circulation and abrupt climate change during the last glaciation”, L. G. Henry et al, Science, 29 July 2016 • Vol. 353 Issue 6298).
4. He says: Here is an essay by Willis Eschenbach discussing this issue. He comes to five conclusions as to why CO2 cannot be the main driver
I say:
So Ridley quotes someone with little or no scientific credibility who has managed to publish in Energy & Environment. Its editor Dr Sonja Boehmer-Christiansen admitted that she was quite partisan in seeking to publish ‘sceptical’ articles (which actually means, contrarian articles), as discussed here.
Yet, Ridley extensively quotes this low grade material, but could have chosen from hundreds of credible experts in the field of climate science. If he’d prefer ‘the’ textbook that will take him through all the fundamentals that he seems to struggle to understand, he could try Raymond Pierrehumbert’s seminal textbook “Principles of Planetary Climate”. But no. He chooses Eschenbach, with a BA in Psychology.
Ridley used to put up the appearance of interest in a rational discourse, albeit flying in the face of the science. That mask has now fully and finally dropped, as he is now channeling crank science. This is risible.
5. He says: The Antarctic ice cores, going back 800,000 years, then revealed that there were some great summers when the Milankovich wobbles should have produced an interglacial warming, but did not. To explain these “missing interglacials”, a recent paper in Geoscience Frontiers by Ralph Ellis and Michael Palmer argues we need carbon dioxide back on the stage, not as a greenhouse gas but as plant food.
I say:
The paper is 19 pages long, which is unusual in today’s literature. The case made is intriguing but not convincing, though I leave it to the experts to properly critique it. It takes a complex system – in which we know, for example, that large movements of heat in the ocean have played a key role in variability – and tries to infer that dust is the primary driver explaining the interglacials, while discounting the role of CO2 as a greenhouse gas.
The paper curiously does not cite the seminal paper by Hays et al (1976), yet cites a paper by Willis Eschenbach published in Energy & Environment (which I mentioned earlier). All this raised concerns in my mind about this paper.
Extraordinary claims require extraordinary evidence and scientific dialogue, and it is really too early to claim that this paper is something or nothing; even if that doesn’t mean waiting the 50 odd years that Milankovitch’s work had to endure, before it was widely accepted. Good science is slow, conservative, and rigorous, and the emergence of a consilience on the science of our climate has taken a very long time, as I explored in a previous essay.
Ralph Ellis on his website (which shows that his primary interest is the history of the life and times of Jesus) states:
“Ralph has made a detour into palaeoclimatology, resulting in a peer-review science paper on the causes of ice ages”, and after summarising the paper says,
“So the alarmists were right about CO2 being a vital forcing agent in ice age modulation – just not in the way they thought”.
So was this paper an attempt to clarify what was happening during the ice ages, or a contrivance, to take a pot shot at carbon dioxide’s influence on our contemporary climate change?
The co-author, Michael Palmer, is a biochemist, with no obvious background in climate science and provided “a little help” on the paper according to his website.
But on a blog post comment he offers a rather dubious extrapolation from the paper:
“The irony is that, if we should succeed in keeping the CO2 levels high through the next glacial maximum, we would remove the mechanism that would trigger the glacial termination, and we might end up (extreme scenario, of course) another Snowball Earth.”
They both felt unembarrassed participating in comments on the denialist blog site WUWT. Quite the opposite, they gleefully exchanged messages with a growing band of breathless devotees.
But even if my concerns about the apparent bias and amateurism of this paper were allayed, the conclusion (which Ridley and Ellis clearly hold to) that the current increases in carbon dioxide is nothing to be concerned with, does not follow from this paper. It is a non sequitur.
If I discovered a strange behaviour like, say, the Coriolis force way back when, the first conclusion would not be to throw out Newtonian mechanics.
The physics of CO2 is clear. How the greenhouse effect works is clear, including for the conditions that apply on Earth, with all remaining objections resolved since no later than the 1960s.
We have a clear idea of the warming effect of increased CO2 in the atmosphere including short term feedbacks, and we are getting an increasingly clear picture of how the Earth system as a whole will respond, including longer term feedbacks. There is much still to learn of course, but nothing that is likely to require jettisoning fundamental physics.
The recent excellent timeline published by Carbon Brief showing the history of the climate models, illustrates the long slow process of developing these models, based on all the relevant fundamental science.
This history has shown how different elements have been included in the models as the computing power has increased – general circulation, ocean circulation, clouds, aerosols, carbon cycle, black carbon.
I think it is really because Ridley still doesn’t understand how an increase from 0.03% to 0.04% over 150 years or so, in the atmospheric concentration of CO2, is something to be concerned about (or as I state it in talks, a 33% rise in the principal greenhouse gas; which avoids Ridley’s deliberately misleading formulation).
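The two framings describe exactly the same change; a quick sketch of the arithmetic (figures rounded, as in the text):

```python
# The same change expressed two ways (figures are approximate round numbers):
old, new = 0.03, 0.04   # CO2 as a percentage of the atmosphere
abs_change = new - old              # 0.01 percentage points -- sounds negligible
rel_change = (new - old) / old      # a one-third rise in the gas itself
print(f"{abs_change:.2f} points vs {rel_change:.0%} relative rise")
```

Quoting the change as “0.03% to 0.04%” anchors readers to the tiny absolute share of the atmosphere; quoting it as a one-third rise in the principal greenhouse gas anchors them to what actually matters for the forcing.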
He denies that he denies the Greenhouse Effect, but every time he writes, he reveals that really, deep down, he still doesn’t get it. To be as generous as I can to him, he may suffer from a perpetual state of incredulity (a common condition I have written about before).
“So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”
He seems to be saying that the water vapour is in the form of clouds – some high altitude, some low – which have opposite effects (so far, so good), so the warming should be 1°C – just the carbon dioxide component – from a doubling of CO2 concentrations (so far, so bad). The clouds represent a condensed (but not yet precipitated) phase of water in the atmosphere, but he seems to have overlooked that water also comes in a gaseous phase (not clouds). It is that gaseous phase that is providing the additional warming, bringing the overall warming to 3°C.
The increase in water vapour concentrations follows because “a well-established physical law (the Clausius-Clapeyron relation) determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2).
T.C. Chamberlin, writing in 1905 to Charles Abbott, explained the feedback role of water vapour in a way that is very clear:
“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”
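The compounding implied by that 7% figure can be sketched in a few lines (my own illustration; only the 7%-per-°C rate comes from the IPCC FAQ quoted above):

```python
# Water-holding capacity of the atmosphere rises ~7% per 1 degC of warming
# (Clausius-Clapeyron, per IPCC AR4 FAQ 3.2). Compounding over several
# degrees shows how the water vapour feedback grows with the warming.
def capacity_multiplier(delta_t_celsius: float, rate: float = 0.07) -> float:
    """Relative water-holding capacity after delta_t degrees of warming."""
    return (1 + rate) ** delta_t_celsius

for dt in (1, 2, 3):
    print(f"+{dt} degC -> capacity x{capacity_multiplier(dt):.2f}")
```

More warming means more water vapour, which means more warming: exactly the feedback Chamberlin described.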
It is now 113 years since Chamberlin wrote those words, but poor Ridley is still struggling to understand basic physics, so instead regales us with dubious science intended to distract and confuse.
When will Matt Ridley stop feeling the need to share his perpetual incredulity and obdurate ignorance with the world?
If you spend even a little time looking at the internet and social media in search of enlightenment on climate solutions, you will have noted that there are passionate advocates for each and every solution out there, who are also experts in the shortcomings of competing solutions!
This creates a rather unhelpful atmosphere for those of us trying to grapple with the problem of addressing the very real risks of dangerous global warming.
There are four biases – often implied but not always stated – that lie at the heart of these unproductive arguments:
Lack of clear evidence of the feasibility of a solution;
Failure to be clear and realistic about timescales;
Tendency to prioritize solutions in a way that marginalizes others;
Preference for top-down (centralization) or bottom-up (decentralization) solutions.
Let’s explore how these manifest themselves:
Feasibility: Lack of clear evidence of the feasibility of a solution
This does not mean that an idea does not have promise (and isn’t worthy of R&D investment), but refers to the tendency to champion a solution based more on wishful thinking than any proven track record. For example, small modular nuclear has been championed as the path to a new future for nuclear – small, modular, scalable, safe, cheap – and there is an army of people shouting that this is true. We have heard recent news that the economics of small nuclear are looking a bit shaky. This doesn’t mean it’s dead, but it does rather put the onus on the advocates to prove their case, and cut the PR, as Richard Black has put it. Another one that comes to mind is ‘soil carbon’ as the single-handed saviour (as discussed in Incredulity, Credulity and the Carbon Cycle). The need to reform agriculture is clear, but it is also true (according to published science) that a warming earth could make soils a reinforcer of warming, rather than a cooling agent; the wisdom of resting our hopes on regenerative farming as the sole, or even a major, contributor is far from clear. The numbers are important.
Those who do not wish to deal with global warming (either because they deny its seriousness or because they do not like the solutions) quite like futuristic solutions, because while we are debating long-off solutions, we are distracted from focusing on implementing existing solutions.
Timescale: Failure to be clear and realistic about timescales
Often we see solutions that clearly have promise and will be able to make a major contribution in the future. The issue is that even when they have passed the feasibility test, they fail to meet the timescale required. There is not even a single timescale, as discussed in Solving Man-made Global Warming: A Reality Check, as we have an immediate need to reduce carbon emissions (say, 0-10 years), then an intermediate timeframe in which to implement an energy transition (say, 10-40 years). Renewable energy is key to the latter but cannot make a sufficient contribution to the former (that can only be done by individuals and communities reducing their carbon intensity). And whatever role Nuclear Fusion has for the future of humanity, it is totally irrelevant to solving the challenge we have in the next 50 years to decarbonize our economy.
The other aspect of timescale that is crucial is that the eventual warming of the planet is strongly linked to the peak atmospheric concentration, whereas the peak impacts will be delayed for decades or even centuries, before the Earth system finally reaches a new equilibrium. Therefore, while the decarbonization strategy requires solutions over, say, the 2020-2050 timeframe, the implied impacts timeframe could be 2050-2500, and this delay can make it very difficult to appreciate the urgency for action.
Priority: Tendency to prioritize solutions in a way that precludes others
I was commenting on Project Drawdown on twitter the other day and this elicited a strong response because of a dislike of a ‘list’ approach to solutions. I also do not like ‘lists’ when they imply that the top few should be implemented and the bottom ones ignored. We are in an ‘all hands on deck’ situation, so we have to be very careful not to exclude solutions that meet the feasibility and timescale tests. Paul Hawken has been very clear that this is not the intention of Project Drawdown (because the different solutions interact and an apparently small solution can act as a catalyst for other solutions).
Centralization: Preference for top-down (centralization) or bottom-up (decentralization) solutions.
Some people like the idea of big solutions, which are often underwritten, at least in part, by centralised entities like Governments. They argue that big impacts require big solutions, and so they have a bias towards solutions like nuclear and an antipathy to lower-tech and less energy intensive solutions like solar and wind.
Others take quite the opposite perspective. They are suspicious of Governments and big business, and like the idea of community based, less intensive solutions. They are often characterized as being unrealistic, because the unending thirst of humanity for consumption suggests an unending need for highly intensive energy sources.
The antagonism between these world views often obscures the obvious: that we will need both top-down and bottom-up solutions. We cannot all have everything we would like. Some give and take will be essential.
This can make for strange bedfellows. Both environmentalists and Tea Party members in Florida supported renewable energy for complementary reasons, and they became allies in defeating large private utilities who were trying to kill renewables.
To counteract these biases, we need to agree on some terms of reference for solving global warming.
Firstly, we must of course be guided by the science (namely, the IPCC reports and its projections) in order to measure the scale of the response required. We must take a risk management approach to the potential impacts.
Secondly, we need to start with an ‘all hands on deck’ or inclusive philosophy because we have left it so late to tackle decarbonization, we must be very careful before we throw out any ideas.
Thirdly, we must agree on a relevant timeline for those solutions we will invest in and scale immediately. For example, for Project Drawdown, that means solutions that are proven, can be scaled and make an impact over the 2020-2050 timescale. Those that cannot need not be ‘thrown out’ but may need more research & development before they move to being operationally scaled.
Fourthly, we allow both top-down (centralized) and bottom-up (decentralized) solutions, but recognise that while Governments dither, it will be up to individuals and social enterprise to act, and so in the short-medium term, it will be the bottom-up solutions that will have greater impact. Ironically, the ‘World Government’ that right-wing conspiracy theorists most fear is not what we need right now, and on that, the environmentalists mostly agree!
In the following Climate Solutions Taxonomy I have tried to provide a macro-level view of different solution classes. I have included some solutions which I am not sympathetic to, such as nuclear and geo-engineering. But bear in mind that the goal here is to map out all solutions. It is not ‘my’ set of solutions, and is not itself a recommendation or plan.
On one axis we have the top-down versus bottom-up dimension, and on the other axis, broad classes of solution. The taxonomy is therefore not a simple hierarchy, but is multi-dimensional (here I show just two dimensions, but there are more).
While I would need to go to a deeper level to show this more clearly, the arrows are suggestive of the system feedbacks that reflect synergies between solutions. For example, solar PV in villages in East Africa supports education, which in turn supports improvements in family planning.
It is incredible to me that while we have (properly) invested a lot of intellectual and financial resources in scientific programmes to model the Earth’s climate system (and impacts), there has been dramatically less modelling effort on the economic implications that will help support policy-making (based on the damage from climate change, through what are called Integrated Assessment Models).
But what is even worse is that there seems to have been even less effort – or barely any – modelling the full range of solutions and their interactions. Yes, there has been modelling of, for example, renewable energy supply and demand (for example in Germany), and yes, Project Drawdown is a great initiative; but I do not see a substantial programme of work, supported by Governments and Academia, that is grappling with the full range of solutions that I have tried to capture in the figure above, and providing an integrated set of tools to support those engaged in planning and implementing solutions.
This is unfortunate at many levels.
I am not here imagining some grand unified theory of climate solutions, where we end up with a spreadsheet telling us how much solar we should build by when and where.
But I do envisage a heuristic tool-kit that would help a town such as the one in which I was born (Hargesia in Somaliland), or the town in which I now live (Nailsworth in Gloucestershire in the UK), to work through what works for them, and to plan and deliver solutions. Each may arrive at different answers, but all need to be grounded in a common base of data and ‘what works’, and a more qualitative body of knowledge on synergies between solutions.
Ideally, the tool-kit would be usable at various levels of granularity, so it could be applied at different scales, with different solutions emerging at each scale.
A wide range of both quantitative and qualitative methods may be required to grapple with the range of information covered here.
I am looking to explore this further, and am interested in any work or insights people have. Comments welcome.
This essay is based on an extract from a talk I did recently that was well received. This specific part of the talk was described as very helpful in clarifying matters related to our carbon dioxide emissions. I hope others also find it useful.
David Cameron said on 24 January 2013 “We’re paying down Britain’s debts” and got a lot of stick for this misleading statement. Why? Let me try to explain.
The deficit is the annual amount by which we spend more than we get in taxes. Whereas, the debt is the cumulative sum of year on year deficits.
As many politicians do, Cameron was using language designed to be, shall we say, ‘economical with the truth’. He was not the first, and he won’t be the last.
We can picture deficit being added to our debt using the following picture (or for greater dramatic effect, do it live if you are giving a talk):
If the deficit declines this year compared to last year, that may be of great solace to the Chancellor (and that was the situation in 2013), because maybe it’s the start of a trend that will mean that the debt may reach a peak.
Cameron could have said “Our debt keeps rising, but at least the rate at which it is rising is slightly less than last year. We’ll need to borrow some more to cover the additional deficit”, which would have been an honest statement, but he didn’t. It simply wouldn’t have cut it with the spin doctors.
The reality is that the only thing we can conclude from a deficit this year that is smaller than last year’s is that the debt has increased by an amount less than it did last year. That’s it. It doesn’t sound quite so great put that way, does it?
You need year-on-year surpluses to actually bring the debt down.
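The distinction is easy to demonstrate numerically; here is a minimal sketch with invented deficit figures (not the UK’s actual numbers):

```python
# A falling deficit still adds to the debt every year; only a surplus
# (a negative deficit) would bring the debt down.
deficits = [120, 110, 100, 90]  # annual deficits, falling year on year (invented)
debt = 1000                     # starting debt (invented)
for year, deficit in enumerate(deficits, start=2013):
    debt += deficit
    print(year, "deficit:", deficit, "debt:", debt)
# The deficit shrinks every year, yet the debt rises every single year.
```

The Chancellor can truthfully report a falling deficit in every one of those years while the debt climbs throughout.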
Deficit and debt are useful in making an analogy with carbon dioxide in the atmosphere, because the confusion – intended or accidental – over deficit and debt, is very similar to the confusion that occurs in the mind of the public when the media report changes in our carbon emissions.
Let’s explore the analogy by replacing “Deficit” with “Emissions”, and “Debt” with “Atmospheric Concentration” …
The annual emissions add to the cumulative emissions in the atmosphere, i.e. the raised Atmospheric Concentration.
There are two differences with the financial analogy when we think about carbon dioxide in the atmosphere.
Firstly, when we add, say, 40 billion tonnes of carbon dioxide to the atmosphere (the green coloured area represents the added carbon dioxide) …
… then, within a short time (about 5 years) 50% of the added carbon dioxide (that is 20 billion tonnes, in this illustration) is absorbed by the oceans and biosphere, with the remainder staying in the atmosphere, and we can visualize this balance as follows (Credit: Rabett Run, which includes a more technical description and an animation) –
Secondly, unlike with the economy, once the atmospheric concentration of carbon dioxide goes up, it stays up for hundreds of years (and to get back to where it started, thousands of years), because, for one thing, the processes that take carbon from the upper ocean to the deep ocean are very slow.
Unlike with the economy, our added carbon dioxide concentration in the atmosphere always goes in the wrong direction; it increases.
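These two differences can be captured in a toy model. The 50% airborne fraction comes from the illustration above; the conversion of roughly 7.8 billion tonnes of CO2 per 1 ppm of concentration is my own added assumption, for illustration only:

```python
# Toy model: each year's emissions add to the concentration (the "debt");
# about half is absorbed by oceans and biosphere, and nothing short of
# zero emissions makes the concentration stop rising.
AIRBORNE_FRACTION = 0.5   # share of emitted CO2 remaining in the atmosphere
GTCO2_PER_PPM = 7.8       # approx. GtCO2 per 1 ppm of concentration (assumption)

def concentration_after(start_ppm: float, annual_gtco2: float, years: int) -> float:
    ppm = start_ppm
    for _ in range(years):
        ppm += AIRBORNE_FRACTION * annual_gtco2 / GTCO2_PER_PPM
    return ppm

# "Stalled" emissions of ~36 GtCO2/yr still raise the concentration ~2.3 ppm/yr:
print(round(concentration_after(405, 36, 10), 1))
```

Run with constant emissions, the concentration climbs every year; only with emissions at zero does it hold steady.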
So when we see stories that talk about “emissions stalling” or other phrases that seem to offer reassurance, remember, they are talking about emissions (deficit) NOT concentrations (debt).
The story title below is just one example, taken from the Financial Times (and I am not picking on the FT, but it shows that this is not restricted to the tabloids).
Whenever we see a graph of emissions over the years (graph on the left), the Health Warning should always be the Keeling Curve (graph on the right).
So the global carbon dioxide emissions in 2014 and 2015 were 36.08 and 36.02 billion tonnes, respectively. Cause for cautious rejoicing? Well, given the huge number of variables that go into this figure (the GDP of each nation; their carbon intensity; the efficiency level for equipment and transport; and so on), projecting a trend from a few years is a tricky business, and some have devoted their lives to tracking this figure. Important work for sure.
Then 2016 came along and the figure was similar but slightly raised, at 36.18 billion tonnes.
But we were said to have stalled … 36.08, 36.02 and 36.18.
I liken this to heading for the cliff edge at a steady pace, but at least no longer accelerating. Apparently that is meant to be reassuring.
Then comes the projected figure for 2017, which includes a bit of a burp of carbon dioxide from the oceans – courtesy of the strong El Nino – and this was even predicted, and horror of horrors, it makes headline news around the world.
We have jumped by 2% over the previous year (actually 1.7% to 36.79 billion tonnes). Has the ‘stall’ now unstalled? What next?
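It is worth computing those year-on-year changes explicitly, using the figures quoted above:

```python
# Global CO2 emissions (billion tonnes), as quoted in the text.
emissions = {2014: 36.08, 2015: 36.02, 2016: 36.18, 2017: 36.79}
years = sorted(emissions)
for prev, curr in zip(years, years[1:]):
    change = (emissions[curr] - emissions[prev]) / emissions[prev] * 100
    print(f"{prev}->{curr}: {change:+.1f}%")
# The 2016->2017 step works out to about +1.7% (the "2%" of the headlines);
# the earlier "stall" was swings of a fraction of a percent around 36 Gt.
```

Whether the wiggle is -0.2%, +0.4% or +1.7%, every one of those years added roughly 36 billion tonnes to the atmosphere.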
The real headline is that we are continuing to emit over 35 billion tonnes of carbon dioxide, year on year without any sign of stopping.
Only when emissions go down to 0 (zero), will the atmospheric concentration STOP rising.
So, in relation to our emissions, what word do we want to describe them? Not stall, not plateau, not ease back, but instead: stop, finito or end. They’ll do.
I have discovered – from talking to people who do not follow climate change on twitter, or the blogosphere, and are not fans of complex data analysis – that what I explained above was very helpful but also not widely appreciated.
But in a sense, this is probably the most important fact about climate change that everyone needs to understand, that
the carbon dioxide concentration will only stop rising when emissions completely stop.
The second most important fact is this:
whatever value the atmospheric concentration of carbon dioxide gets to – at that point in the future when we stop adding more – that is where it will stay for my grandchild, and her grandchildren, and their grandchildren, and so on … for centuries* to come.
The Keeling Curve – which measures the global atmospheric concentration of carbon dioxide – is the only curve that matters, because until it flattens, we will not know how much warming there will actually be; and that is because the third most important fact people must understand is this:
broadly speaking, the level of warming is proportional to the peak concentration of carbon dioxide.
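To make the proportionality concrete, here is a standard back-of-envelope relation (my own addition, not from the original text): equilibrium warming scales with the logarithm of the concentration ratio, with a sensitivity of about 3°C per doubling (the mainstream central estimate discussed earlier in this piece):

```python
import math

# Equilibrium warming ~ S * log2(peak / pre-industrial), with S ~ 3 degC
# per doubling of CO2 (the central estimate; the full range is wider).
def warming(peak_ppm: float, preindustrial_ppm: float = 280.0,
            sensitivity: float = 3.0) -> float:
    return sensitivity * math.log2(peak_ppm / preindustrial_ppm)

for peak in (450, 560, 700):
    print(f"peak {peak} ppm -> ~{warming(peak):.1f} degC above pre-industrial")
```

The higher the peak on the Keeling Curve, the more warming is locked in, which is why that curve, not the annual emissions figure, is the one to watch.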
So when we see stories that talk about “emissions stalling” or other phrases that seem to offer hope that we’ve turned a corner, remember, they are talking about emissions (deficit) NOT concentrations (debt).
It is amazing how often the deficit/debt confusion is played on by politicians regarding the nation’s finances.
The ‘emissions stalling’ narrative of the last few years has led many to imagine that we are, if not out of the woods, then at least on our way out; but I think the confusion here reflects a failure of the media and other science communicators to always provide a clear health warning.
The truth is that we, as a species, are a long way still from showing a concerted effort to get out of the woods. Worse still, we are arguing amongst ourselves about which path to take.
(c) Richard W. Erskine, 2017
[* Unless and until we find a way to artificially extract and sequester carbon dioxide; this is still only R&D and not proven at scale yet, so does not rescue the situation we face in the period leading to 2050. We need to halt emissions, not just “stall” them.]
People are arguing as to whether the loss of the EMA from the UK will hurt us or not, and I think missing some nuance.
The ICH (International Committee on Harmonization) has helped pharma to harmonize the way drugs are tested, licensed and monitored globally (albeit with variations), enabling drugs to be submitted for licensing in the largest number of countries possible.
For UK’s Big Pharma, the loss of the EMA is a blow but not a fatal one: they have entities everywhere; they’ll find a way.
There are 3 key issues I see, around Network, Innovation and Influence:
Network – New drug development is now more ‘ecosystem’ based, not just big pharma alone, and UK has lots of large, medium and small pharma, in both private and public institutions (Universities, Francis Crick Institute, etc.). And so do other EU countries, which form part of the extended network of collaboration. UK leaving EU will disrupt this network, and loss of EMA subtly changes the centre of power.
Innovation – Further to the damage to networks, and despite ICH’s harmonization, being outside of EU inevitably creates issues for the smaller innovators with less reach, shallower pockets, and a greater challenge in adapting to the new reality.
Influence – not being at the EMA table (wherever its HQ is based) means that we cannot guide the development of regulation, which is on an inexorable path of ever greater harmonization. Despite the UK’s self-loathing re. ‘not being as organized as the Germans’, the Brits have always been better than most at regulation; it’s deep in our culture (indeed much of the EU regulation that neoliberals rail against has been gold-plated by the UK when it reaches our shores). But outside the EU, and outside EMA, we won’t be in a position to apply these skills, and our influence will wane.
Unfortunately, the Brexiters have shown that they misunderstand the complexity not merely of supply chains in the automotive sector, for example, but the more subtle connections that exist in highly sophisticated development lifecycles, and highly regulated sectors, like pharmaceuticals.
A key regulatory body moving from our shores will have long term consequences we cannot yet know.
Can Britain adapt to the new reality?
Of course it can, but do not expect it to be easy, quick or cheap to do so.
Updated 11th November 2017 – Hopeful message following Figure added.
It seems that we are all – or most of us – in denial about the reality of the situation we are in, with relation to the need to address global warming now, rather than sometime in the future.
We display seesaw emotions, optimistic that emissions have been flattening, but aghast that we had a record jump this year (which was predicted, but was news to the news people). It seems that people forget that if we have slowed from 70 to 60 miles per hour, approaching a cliff edge, the result will be the same, albeit deferred a little. We actually need to slam on the brakes and stop! Actually, due to critical erosion of the cliff edge, we will even need to go into reverse.
I was chatting with a scientist at a conference recently:
Me: I think we need to accept that a wide portfolio of solutions will be required to address global warming. Pacala and Socolow’s ‘wedge stabilization’ concept is still pertinent.
Him: People won’t change; we won’t make it. We are at over 400 parts per million and rising, and have to bring this down, so some artificial means of carbon sequestration is the only answer.
This is just an example of many other kinds of conversations of a similar structure that dominate the blogosphere. It’s all about the future. Future impacts, future solutions. In its more extreme manifestations, people engage in displacement behaviour, talking about any and every solution that is unproven in order to avoid focusing on proven solutions we have today.
Yet nature is telling us that the impacts are now, and surely the solutions should be too; at least for implementation plans in the near term.
Professors Kevin Anderson and Alice Larkin of the Tyndall Centre have been trying to shake us out of our denial for a long time now. The essential argument is that some solutions are immediately implementable while others are some way off, and others so far off they are not relevant to the time frame we must consider (I heard a leader in Fusion Energy research on the BBC who sincerely stated his belief that it is the solution to climate change; seriously?).
The immediately implementable solution that no politician dares talk about is degrowth – less buying stuff, less travel, less waste, etc. All doable tomorrow, and since the top 10% of emitters globally are responsible for 50% of emissions (see Extreme Carbon Inequality, Oxfam), the quickest and easiest solution is for that 10% – or let’s say 20% – to halve their emissions, and to do so within a few years. It’s also the most ethical thing to do.
Anderson & Larkin’s credibility is enhanced by the fact that they practice what they advocate, as in this example of an approach to reduce the air miles associated with scientific conferences:
Some people in the high energy consuming “West” have proven it can be done. Peter Kalmus, in his book Being the Change: Live Well and Spark a Climate Revolution, describes how he went from a not untypical US citizen responsible for 19 tonnes of carbon dioxide emissions per year, to now something like 1 tonne; which is one fifth of the global average! It is all about what we do, how we do it, and how often we do it.
This approach – a large reduction in consumption (in all its forms) amongst high emitters in all countries, but principally the ‘west’ – could be implemented in the short term (the shorter the better but let’s say, by 2030). Let’s call these Phase 1 solutions.
The reason we love to debate and argue about renewables and intermittency and so on is that it really helps to distract us from the blinding simplicity of the degrowth solution.
It is not that a zero or low carbon infrastructure is not needed, but that the time to fully implement it is too long – even if we managed to do it in 30 years’ time – to address the issue of rising atmospheric greenhouse gases on its own. This transition has already started, albeit from a low base, and will have a large impact in the medium term (by 2050). Let’s call these Phase 2 solutions.
Project Drawdown provides many solutions relevant to both Phase 1 and 2.
And as for the discussion that started this piece: artificial carbon sequestration methods, such as BECCS and several others (explored in Atmosphere of Hope by Tim Flannery), will be needed, but it is again about timing. These solutions will be national, regional and international initiatives, and are mostly unproven at present; they belong to the longer term, beyond 2050. Let’s call these Phase 3 solutions.
I am not here wanting to get into geo-engineering solutions, a potential Phase 4. A Phase 4 is predicated on Phases 1 to 3 failing or failing to provide sufficient relief. However, I think we would have to accept that if, and I personally believe only if, there was some very rude shock (an unexpected burp of methane from the Arctic, and signs of a catastrophic feedback), leading to an imminent > 3C rise in global average temperature (as a possible red-line), then some form of geo-engineering would be required as a solution of last resort. But for now, we are not in that place. It is a matter for some feasibility studies but not policy and action. We need to implement Phase 1, 2 and 3 – all of which will be required – with the aim of avoiding a Phase 4.
I have illustrated the three phases in the figure which follows (Adapted from Going beyond dangerous climate change: does Paris lock out 2°C? Professors Kevin Anderson & Alice Bows-Larkin, Tyndall Centre – presentation to School of Mechanical Aerospace & Civil Engineering University of Manchester February 2016, Douglas, Isle of Man).
My adapted figure is obviously a simplification, but we need some easily digestible figures to help grapple with this complex subject; and apologies in advance to Anderson & Larkin if I have taken liberties with my colourful additions and annotations to their graphic (while trying to remain true to its intent).
A version of this slide on Twitter (@EssaysConcern) seemed to resonate with some people, as a stark presentation of our situation.
For me, it is actually a rather hopeful image if, like me, you have a belief in the capacity of people to work together to solve problems, which we so often see in times of crisis; and this is a crisis, make no mistake.
While the climate inactivists promote a fear of big Government controlling our lives, the irony here is that Phase 1 is all about individuals and communities, and we can do this with or without Government support. Phase 2 could certainly do with some help in the form of enabling legislation (such as a price on carbon), but it does not have to be top-down solutions, although some are (industrial scale energy storage). Only when we get to Phase 3 are we seeing national solutions dominating, and then only because we have an international consensus to execute these major projects; that won’t be big government, it will be responsible government.
The message of Phases 1 and 2 is … don’t blame the conservatives, don’t blame the loss of feed-in tariffs, or … just do it! They can’t stop you!
They can’t force you to boil a full kettle when you only need one mug of tea. They can’t force you to drive to the smoke when the train will do. They can’t force you to buy new stuff when the old can be repaired at a café.
And if your community wants a renewable energy scheme, then progressives and conservatives can find common cause, despite their other differences. Who doesn’t want greater community control of their energy, to compete with monopolistic utilities?
I think the picture contains a lot of hope, because it puts you, and me, back in charge. And it sends a message to our political leaders, that we want this high on the agenda.
Incredulity, in the face of startling claims, is a natural human reaction and is right and proper.
When I first heard the news about the detection on 14th September 2015 of the gravitational waves from two colliding black holes by the LIGO observatories I was incredulous. Not because I had any reason to disagree with the predictions of Albert Einstein that such waves should exist, rather it was my incredulity that humans had managed to detect such a small change in space-time, much smaller than the size of a proton.
How, I pondered, was the ‘noise’ from random vibrations filtered out? I had to do some studying, and discovered the amazing engineering feats used to isolate this noise.
What is not right and proper is to claim that personal incredulity equates to an error in the claims made. If I perpetuate my incredulity by failing to ask any questions, then I am the one who is culpable.
And if I were to ask questions then simply ignore the answers, and keep repeating my incredulity, who is to blame? If the answers have been sufficient to satisfy everyone skilled in the relevant art, how can a non expert claim to dispute this?
Incredulity is a favoured tactic of many who dispute scientific findings in many areas, and global warming is not immune from the clinically incredulous.
The sadly departed Professor David MacKay gives an example in his book Sustainable Energy Without the Hot Air (available online):
The burning of fossil fuels is the principal reason why CO2 concentrations have gone up. This is a fact, but, hang on: I hear a persistent buzzing noise coming from a bunch of climate-change inactivists. What are they saying? Here’s Dominic Lawson, a columnist from the Independent:
“The burning of fossil fuels sends about seven gigatons of CO2 per year into the atmosphere, which sounds like a lot. Yet the biosphere and the oceans send about 1900 gigatons and 36000 gigatons of CO2 per year into the atmosphere – … one reason why some of us are sceptical about the emphasis put on the role of human fuel-burning in the greenhouse gas effect. Reducing man-made CO2 emissions is megalomania, exaggerating man’s significance. Politicians can’t change the weather.”
Now I have a lot of time for scepticism, and not everything that sceptics say is a crock of manure – but irresponsible journalism like Dominic Lawson’s deserves a good flushing.
MacKay goes on to explain Lawson’s error:
The first problem with Lawson’s offering is that all three numbers that he mentions (seven, 1900, and 36000) are wrong! The correct numbers are 26, 440, and 330. Leaving these errors to one side, let’s address Lawson’s main point, the relative smallness of man-made emissions. Yes, natural flows of CO2 are larger than the additional flow we switched on 200 years ago when we started burning fossil fuels in earnest. But it is terribly misleading to quantify only the large natural flows into the atmosphere, failing to mention the almost exactly equal flows out of the atmosphere back into the biosphere and the oceans. The point is that these natural flows in and out of the atmosphere have been almost exactly in balance for millennia. So it’s not relevant at all that these natural flows are larger than human emissions. The natural flows cancelled themselves out. So the natural flows, large though they were, left the concentration of CO2 in the atmosphere and ocean constant, over the last few thousand years.
Burning fossil fuels, in contrast, creates a new flow of carbon that, though small, is not cancelled.
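MacKay’s point can be checked with back-of-envelope arithmetic. A minimal sketch, using his corrected round numbers (all figures in gigatonnes of CO2 per year):

```python
# MacKay's corrected annual CO2 flows INTO the atmosphere (GtCO2/yr)
fossil_in = 26       # human fossil-fuel burning
biosphere_in = 440   # respiration and decay
ocean_in = 330       # out-gassing from the oceans

# The natural flows are matched by almost exactly equal flows OUT of
# the atmosphere (photosynthesis, ocean uptake) ...
biosphere_out = 440
ocean_out = 330

natural_net = (biosphere_in - biosphere_out) + (ocean_in - ocean_out)
human_net = fossil_in  # ... but nothing cancels the fossil flow

print(natural_net)  # 0  -> the large natural flows cancel out
print(human_net)    # 26 -> the 'small' human flow accumulates, year on year
```

The gross natural flows really do dwarf the human one, just as Lawson says; but their net contribution is zero, and the net flow is the only quantity that matters for the accumulating concentration.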
I offer this example in some detail as an exemplar of the problem often faced in confronting incredulity.
It is natural that people will often struggle with numbers, especially large abstract sounding numbers. It is easy to get confused when trying to interpret numbers. It does not help that in Dominic Lawson’s case he is ideologically primed to see a ‘gotcha’, where none exists.
Incredulity, such as Lawson’s, is perfectly OK when initially confronting a claim that one is sceptical of; we cannot all be informed on every topic. But why then not pick up the phone, or email a professor skilled in the particular art, to sort out your confusion? Or even read a book, or browse the internet? But of course Dominic Lawson, like so many others, suffers from a syndrome that many have identified. As Charles Darwin noted in The Descent of Man:
“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”
It is this failure to display any intellectual curiosity which is unforgivable in those in positions of influence, such as journalists or politicians.
However, incredulity has a twin brother, its mirror image: credulity. And I want to take an example that also involves the carbon cycle.
In a politically charged subject, or one where there is a topic close to one’s heart, it is very easy to uncritically accept a piece of evidence or argument. To be, in the technical sense, a victim of confirmation bias.
I have been a vegetarian since 1977, and I like the idea of organic farming, preferably local and fresh. So I have been reading Graham Harvey’s book Grass Fed Nation. I have had the pleasure of meeting Graham, as he was presenting a play he had written which was performed in Stroud. He is a passionate and sincere advocate for his ideas on regenerative farming, and I am sure that much of what he says makes sense to farmers.
The recently reported research from Germany of a 75% decline in insect numbers is deeply worrying, and many are pointing the finger at modern farming and land-use methods.
However, I found something in amongst Harvey’s interesting book that made me incredulous, on the question of carbon.
Harvey presents the argument that, firstly, we can’t do anything to reduce carbon emissions from industry and the like; but that, secondly, there is no need to worry, because soils can take up all the annual emissions with ease; and further, that all of the extra carbon from the industrial era could be absorbed into soils over the coming years.
He relies a lot on the work of Allan Savory, famed for his visionary but contentious TED talk. But he also references other work that makes similar claims.
I would be lying if I said there was not a part of me that wanted this to be true. I was willing it on. But I couldn’t stop myself … I just had to track down the evidence. Being an ex-scientist, I always like to go back to the source, and find a paper, or failing that (because of paywalls), a trusted source that summarises the literature.
Talk about party pooper, but I cannot find any such credible evidence for Harvey’s claim.
I think the error in Harvey’s thinking is to confuse the equilibrium capacity of the soils with their ability to take up more, every year, for decades.
I think it is also an inability to deal with numbers. If you multiply A, B and C together, but take the highest possible range for each of A, B and C, you can easily reach a result that is hugely in error. Overestimate the realistic land area that can be addressed; and the carbon dioxide sequestration rate; and the time until saturation/equilibrium is reached … and it is quite easy to overestimate the product of these by a factor of 100 or more.
Savory is suggesting that over a period of 3 or 4 decades you can draw down the whole of the anthropogenic amount that has accumulated (nearly 2,000 gigatonnes of carbon dioxide), whereas a realistic assessment (e.g. www.drawdown.org) suggests that a figure of 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020-2050 timeframe.
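The scale of the gap is simple to check with the figures above, and the compounding effect of a few optimistic factors is equally easy to illustrate (the 5x per-factor figure below is purely illustrative):

```python
# The gap between Savory's implied claim and a realistic estimate
savory_claim_GtCO2 = 2000   # accumulated anthropogenic CO2 claimed drawable in 3-4 decades
realistic_GtCO2 = 14        # e.g. the drawdown.org estimate for 2020-2050

ratio = savory_claim_GtCO2 / realistic_GtCO2
print(round(ratio))  # 143 -> an overestimate of more than 100x

# How such a gap arises: overestimate each of three multiplied factors
# (land area, sequestration rate, years to saturation) by a modest ~5x ...
per_factor = 5
print(per_factor ** 3)  # 125 -> ... and the product is off by more than 100x
```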
There are many complex processes at work in the whole carbon cycle – the biological, chemical and geological processes covering every kind of cycle, with flows of carbon into and out of the carbon sinks. Despite this complexity, and despite the large flows of carbon (as we saw in the Lawson case), atmospheric levels had remained stable for a long time in the pre-industrial era (at 280 parts per million). The Earth system as a whole was in equilibrium.
The deep oceans hold by far the greatest carbon reservoir, so a ‘plausibility argument’ could go along these lines: the upper ocean will absorb extra CO2 and then pass it to the deep ocean. Problem solved! But this hope was dashed by Revelle and others in the 1950s, when it was shown that the upper-to-lower ocean exchange is really quite slow.
I always come back to the Keeling Curve, which reveals an inexorable rise in CO2 concentrations in the atmosphere since 1958 (and we can extend the curve further back using ice core data). And the additional CO2 humans started to put into the atmosphere since the start of the industrial revolution (mid-19th century, let us say) was not, as far as I can see, magically soaked up by soils in the pre-industrial-farming days of the mid-20th century, when presumably traditional farming methods pertained.
In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn’t hold up either. Track down Gattinger et al.’s work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4 GtC per year – nowhere near annual anthropogenic emissions of 10 GtC. And if soils cannot even keep pace with annual emissions, they can hardly soak up the many decades of historical emissions (the roughly 50% of these that persists in the atmosphere for a very long time).
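Gattinger et al.’s own central figure makes the point with one line of arithmetic:

```python
soil_uptake_GtC_per_yr = 0.4   # Gattinger et al.: average soil organic carbon uptake
emissions_GtC_per_yr = 10.0    # annual anthropogenic emissions

fraction = soil_uptake_GtC_per_yr / emissions_GtC_per_yr
print(f"{fraction:.0%}")  # 4% -> soils cannot even keep pace with one year's emissions
```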
It is interesting what we see here.
An example of ‘incredulity’ from Lawson, who mixes up gross carbon flows with net carbon flow; and an example of ‘credulity’ from Harvey, who puts too much stock in the equilibrium capacity of carbon in the soil and assumes this means soils can keep soaking up carbon almost without limit. Both seem to struggle with basic arithmetic.
Incredulity in the face of startling claims is a good initial response, but it should be the starting point for engaging one’s intellectual curiosity, not a perpetual excuse for confirming one’s bias; a kind of obdurate ignorance.
And neither should hopes invested in the future be a reason for credulous acceptance of claims, however plausible at face value.
It’s boring I know – not letting either one’s hopes or prejudices hold sway – but maths, logic and scientific evidence are the true friends here.
It seems only yesterday that the BBC was having to apologise for not challenging the scientifically illiterate rants of Lord Lawson … oh, but it was yesterday!
So how delightful to see another example of BBC journalism that demonstrates the woeful inability of journalists to report science accurately, or at least, to use well informed counter arguments when confronted with bullshit.
A story by Owen Amos on the BBC website (US & Canada section), with the clickbait title “JFK assassination: Questions that won’t go away”, is a grossly ill-informed piece, repeating ignorant conspiracy theories from Jefferson Morley (amongst others) without any challenge (BBC’s emphasis):
“Look at the Zapruder film,” says Morley. “Kennedy’s head goes flying backwards.”
“I know there’s a theory that if you get hit by a bullet from behind, the head goes towards the source of the bullet.
But as a common sense explanation, it seems very unlikely. That sure looks like a shot from the front.”
That’s it then, common sense.
Case settled.
If it’s good enough for Oliver Stone and Jefferson Morley, who are we to argue?
But wait a minute!
The theory in question, if Morley is really interested, is the three-centuries-old theory called Newtonian mechanics (reference: “Philosophiæ Naturalis Principia Mathematica”, Isaac Newton, 1687).
Are we to cast that aside and instead listen to a career conspiracy theorist?
You can if you must, but the BBC shouldn’t be peddling such tripe.
As Luis Alvarez, the Nobel laureate, pointed out long ago, the head MUST kick back in order to conserve both momentum and energy. You need a picture?
[I have not included the maths, but it is high school maths, trust me, you don’t need a Nobel Prize to do the calculation]
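For anyone who does want the maths, here is a rough sketch of Alvarez’s argument, in which a forward ‘jet’ of material ejected from the exit wound carries away more momentum than the bullet brings in; every number below is an illustrative assumption, not forensic data:

```python
from math import sqrt

# Illustrative inputs (assumptions, not forensic data)
m_bullet = 0.010   # kg
v_bullet = 600.0   # m/s
m_jet = 0.10       # kg of material ejected forward from the exit wound
m_head = 5.0       # kg

p_in = m_bullet * v_bullet            # forward momentum delivered: 6 kg*m/s
E_in = 0.5 * m_bullet * v_bullet**2   # kinetic energy delivered: 1800 J

# Suppose half the bullet's energy goes into the forward jet;
# the momentum of a mass m carrying kinetic energy E is sqrt(2*m*E)
p_jet = sqrt(2 * m_jet * 0.5 * E_in)  # ~13.4 kg*m/s, forward

# Conservation of momentum: the forward momentum leaving in the jet,
# beyond what the bullet brought in, must be balanced by head recoil
v_head = (p_in - p_jet) / m_head
print(round(v_head, 2))  # negative -> the head kicks BACK, towards the shooter
```

Because the jet is much heavier than the bullet but carries comparable energy, it carries more momentum; the head must recoil backwards to compensate, with no second gunman required.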
Morley would get a Nobel Prize if he disproved it. He hasn’t and won’t.
It seems that Morley has been doing the rounds in the media, and there is no problem finding gullible victims.
You might like to look at the Penn & Teller video of 2006 which demonstrates the physics in practice (with a melon), for the Newtonian sceptics like Morley.
Amos/BBC is gullible in uncritically replaying this nonsense without mentioning Alvarez. Amos could have said something like:
“this rationale (the head kick back) for a second gunman is completely unfounded as it flies in the face of basic Newtonian mechanics .. see this video“
Unfortunately this fails the clickbait test for irresponsible journalism, which requires ‘debate’ by idiots in response to experts. It’s balanced reporting after all.
Why are journalists so incapable of understanding 300-year-old basic physics, or so careless as to cast it aside? The same physics, by the way, that helps us design airplanes that fly, and that is a major pillar of climate science too (the science that so persistently eludes Lord Lawson).
I am waiting patiently for another BBC apology for crimes against scientific literacy and an inability to ask searching, informed questions of peddlers of bullshit, be they Lawson or Morley.
How far do we go back to find examples of investigations of injustice or the abuse of power?
Maybe Roger Casement’s revelations of the horrors of King Leopold’s Congo, and of the abuses of Peruvian Indians, were heroic examples, for which he received a knighthood – even if, later, his support for Irish independence earned him the noose.
Watergate was clearly not the first time that investigative journalism fired the public imagination, but it must be a high point, at least in the US, for the power of the principled and relentless pursuit of the truth by Bob Woodward and Carl Bernstein.
And then I call to mind the great days of the Sunday Times’ ‘Insight’ team that conducted many investigations. I recall the brilliant Brian Deer, who wrote for The Times and Sunday Times, and revealed the story behind Wakefield’s fake science on MMR, even while other journalists were shamelessly helping to propagate the discredited non-science.
But those days seem long ago now.
Today, you are just as likely to find The Times, The Daily Telegraph, Daily Mail and Spectator – desperate to satisfy their ageing and conservative readerships, or in need of clickbait advertising revenue – regurgitating bullshit, including the anti-expert nonsense that fills the blogosphere. This nonsense has been called out many times, such as in Climate Feedback.
Despite Michael Gove’s assertion that “Britain has had enough of experts”, the Ipsos MORI Veracity Index of 2016 suggests otherwise: it appears that nurses, doctors, lawyers and scientists are in the upper quartile of trust, whereas journalists, estate agents and politicians lurk in the lower quartile.
No wonder the right-wingers who own or write for the organs of conservatism are so keen to attack those in the upper quartile, and claim there is a crisis of trust. This is displacement activity by politicians and journalists: claiming that there is a crisis of trust with others to deflect it from themselves. The public are not fooled.
It is deeply cynical and pernicious to play the game of undermining evidence and institutions.
As Hannah Arendt said in The Origins of Totalitarianism:
“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.”
But investigative journalism is not dead.
In Russia there are many brave journalists who expose corruption and the abuse of power, and they have paid with their lives: 165 murdered since 1993, with about 50% of these since Putin came to power. He didn’t start the killing, but then, he didn’t stop it either.
The nexus of political, business and mafia-style corruption makes it easy for the leadership to shrug off responsibility.
And so we come to Malta, where the same nexus exists. Daphne Caruana Galizia had been exposing corruption for so long that there was no shortage of enemies, including the politicians and police who failed to protect her. Her assassination is a scar on Malta that will take a long time to heal.
The EU has produced anodyne reports on partnership with Malta, and programmes continue despite a breakdown in the rule of law and governance that has provided a haven for nepotism and racketeering. Is Malta really so different from Russia in this regard?
Is the EU able to defend the principles it espouses, and sanction those who fail to live up to them?
The purveyors of false news detest brave investigative journalists as much as they love to attack those like scientists who present evidence that challenges their interests. Strong institutions are needed to defend society against these attacks.
Remainers like myself defend the EU on many counts, but we also expect leadership when that is needed, not merely the wringing of hands.
This was originally written on 2nd October 2017 following the Las Vegas shooting, in which Stephen Paddock murdered 58 people and injured 851 more. The latest mass shooting (a phrase that will become out of date almost before the ink is dry) was at Florida’s Marjory Stoneman Douglas High School. This was also the 17th school shooting in the USA in the first 45 days of 2018. I have not made any changes to the essay below (because this is tragically the same psychosis), but have added Venn diagrams to visualize the issue of mental health and guns. Mental health is not the issue here. The issue is people with homicidal tendencies (many of whom will indeed have mental problems) having easy access to guns. We should not stigmatise a growing number of people with mental health problems. We should reduce access to guns.
If ever one needed proof of the broken state of US politics, the failure to deal with this perpetual gun crisis is it.
After 16 children and 1 teacher were killed in the Dunblane massacre on 13th March 1996, the UK acted.
After 35 people were killed in the Port Arthur massacre on 28th April 1996, Australia acted.
It’s what any responsible legislature would do.
So far in 2017, US deaths from shootings total a staggering 11,652 (not, I think, including the latest mass shooting in Las Vegas, and with 3 months still to run in 2017 – see gunviolencearchive.org – and note this excludes suicides).
The totals for the previous 3 years 2014, 2015 and 2016 are 12,571; 13,500; and 15,079.
The number of those injured comes in at about two times those killed (but note that the ratio for the latest Las Vegas shooting is closer to 10, with the latest Associated Press report at the time of writing, giving 58 people dead and 515 injured).
One cannot imagine the huge number of those scarred by these deaths and injuries – survivors, close families, friends, colleagues, classmates, first-responders, relatives at home and abroad. Who indeed has not been impacted by these shootings, in the US and even abroad?
I write as someone with many relatives and friends in America, and having owed my living to great American companies for much of my career. But I am also someone whose family has been touched by this never-ending obsession that America has with guns.
And still Congress and Presidents seem incapable of standing up to the gun lobby and acting.
The US, far from acting, loosens further the access to guns or controls on them.
This is a national psychosis, and an AWOL legislature.
In both the UK and Australian examples, it was actually conservative administrations that brought in the necessary legislation, so the idea that only ‘liberals’ are interested in reducing the number and severity of shootings, by introducing gun control, is simply wrong. This should not be a party political issue.
In the US some will argue against gun control, saying that a determined criminal or madman can always get hold of a gun. This is a logical fallacy, making the perfect the enemy of the good. Just because an action is not guaranteed to be 100% effective is no reason for not taking an action that could be effective – and, in the case of the UK and Australia, was very effective. Do we fail to deliver chemotherapy to treat cancer patients because it is not guaranteed to prevent every patient from dying; to be 100% perfect? Of course not. But this is just one of the many specious arguments used by the gun lobby in the USA to defend the indefensible.
But at its root there is, of course, a deeply polarised political system in the USA. The inability to confront the gun crisis is the same grid-locked polarisation that is preventing the US from dealing with healthcare, or the justice system, or endemic racism, or indeed climate change.
How will America – a country that has given so much to the world – overcome this debilitating polarization in the body politic?
America needs a Mandela – a visionary leader able to bring people together to have a rational, evidence-based conversation – but none is in sight.
It’s enough to make one weep.
The 3 branches of the US Government ought to be ashamed, but expect more platitudinous ‘thoughts and prayers’ … the alternative to them doing their job.
Trump is now praying for the day when evil is banished, for god’s sake! An easy but totally ineffective substitute for actually doing anything practical to stem the carnage, and protect US citizens.
Some pictures added 16th February 2018 to illustrate the problem facing the USA …
“My PhD supervisor, Sir Peter Lachmann, has framed the distinction between the subjective and the objective in a different way, by considering whether questions are ‘pollable’ or ‘non- pollable’; that is, whether a question can be answered in principle by a vote (a pollable question), or whether the question has a right answer that is independent of individual preferences and opinions (a non-pollable question). This distinction can be easily illustrated by a couple of examples. It is a non-pollable question as to whether there is an anthropogenic contribution to climate change. There is a correct answer to this question and your opinion or mine is ultimately irrelevant. The fact that there may be uncertainties about the scale and the nature of the contribution does not change the basic nature of the question. In contrast, it is a pollable question as to whether nuclear energy is an acceptable solution to providing low-carbon power, and I will return to this later.”
The question presents itself: does the BBC understand the distinction between pollable and non-pollable questions related to science?
The first was with Steve Jones, Emeritus Professor of Human Genetics at University College London, who led a review of the way the BBC itself reports science; it covered the changing nature of science reporting. The second was with Richard Dawkins, Professor of evolutionary biology, and David Willetts, a former science minister, considering the “public’s evolving relationship with science, evidence and truth”.
Subsequent to this I wrote a letter to the Today team at the BBC, reproduced below, which I am now sharing on my blog:
Dear Sir/ Madam
I wanted to thank the BBC Today team for two excellent discussions that John Humphreys had, first with Prof. Steve Jones, and then subsequently with David Willetts and Richard Dawkins.
John Humphreys posed the challenge to Prof. Jones, as to why we should ‘believe’ climate change; and I am paraphrasing his words:
A. The world is warming
B. This warming is man made, and
C. There is only one way of stopping it.
This was an alarming way to approach the topic, for two reasons.
Firstly, the science – and, by virtue of that statement, scientists – unequivocally answers A and B with a resounding ‘Yes’. There is an aggregation of scientific evidence and analysis going back at least to John Tyndall in the mid-19th century that brought us – no later than the 1980s, in fact – to a consilience of science on these questions. I discuss this history and the nature of ‘consilience’ in an essay, here: https://essaysconcerning.com/2017/05/02/a-climate-of-consilience-or-the-science-of-certitude/
To question this is at the same level as questioning whether cigarettes cause lung cancer. There is no debate to be had here. Yes, debate on how to get teenagers to stop taking up smoking, but that’s a different question. To say that everyone can have an opinion, and to set up a controversial ‘debate’ on these questions is the “false balance” Professor Jones identified in the report he did for the BBC. Representing opinions is not a license to misrepresent the evidence, by using ‘false balance’ in this way.
Secondly, however, scientists do NOT speak with one voice on how to stop it, as John Humphreys phrased his C question. That is why the UNFCCC takes up such questions, which require policy input and, yes, the input of ‘values’. Whilst the A and B questions are not ones where it is appropriate to bring values to bear on the answers, solutions are full of value-based inputs. So the C question on which John Humphreys should be opening a dialogue is this:
C(amended): There are many solutions that can contribute to addressing the given man-made global warming – either by mitigation or adaptation – which ones do you advocate and why?
And of course many subsidiary questions arise when debating these solutions:
Are we too late to prevent dangerous climate change, therefore need a massive reduction in consumption – a degrowth strategy?
Can we solve this with a kind of Marshall Plan to decarbonise our energy supply, but also heat buildings and transport, through electrification?
What role does nuclear energy play?
Given the long time that excess carbon dioxide levels remain in the atmosphere, and the legacy of the developed worlds emissions, how can the developing world receive carbon justice?
Even if we decarbonised everything tomorrow, what solutions are feasible for reducing the raised levels of carbon dioxide in the atmosphere; what degree of sea-level rise are we prepared to tolerate, ‘baked in’ already to the Earth system?
Is a carbon tax ultimately the only way forward, and what price do we put on carbon?
… and so on.
Yes, science can help answer these kinds of questions, but the values play a large part too.
The fact that the BBC still gets stuck in the groove of ‘debating’ A and B is, I think, woeful. As woeful as ‘debating’ whether smoking causes cancer.
I think David Willetts acknowledged the difference in these classes of question, whereas Richard Dawkins was disappointingly black and white; not recognising the role of values in the C(amended) class of questions.
David Willetts made the interesting point that in social science, there is often greater difficulty in getting to the truth, and this is highly problematic for politicians, but that for the physical sciences, if we’ve discovered the Higgs Boson, it is much clearer. He made a lot of the need to bring values to bear on decisions and ‘not being able to wait for yet another report’. However, there is a qualitative difference with climate change: it requires long term strategic thinking and it is a challenge to the normal, national political cycles.
On the question of Lord Lawson: by all means invite him to discuss the economics of decarbonising the economy. But the last time he was asked on – more or less to do this – in a discussion with Justin Webb, he was asked by Justin to comment on Al Gore’s statement that we needed to push ahead with the solutions already available to us. Move on, in other words.
Instead of answering this question, Lord Lawson tried to poke holes in the unequivocal science (A and B) instead of addressing C; he has no intention of moving on. He lost, and seems quite bitter about it, going on to make personal attacks on Al Gore. While the interviewer cannot stop Lord Lawson saying these things, he should be called out on them.
“I am not a scientist” is a statement that US Republican Congressmen use to avoid confronting the fact that A and B are true, and not up for debate. John Humphreys should not be using the same statement (but he did on this episode).
If climate change is “the big one”, as he himself noted, surely it is time he made the effort to educate himself to the point where he understands why A and B are unequivocally “Yes”, in the same way that “Does smoking cause lung cancer?” has an unequivocally “Yes” answer. There is no shortage of scientists at the Met Office, Cambridge, Oxford, UCL and elsewhere who I am sure would be happy to help him out here.
Today was a good discussion – even a great step forward – but the BBC is still failing in its public service duty, on this topic of global warming.
Kind regards,
Richard Erskine
What seems to be clear to me is that John Humphreys is not alone amongst journalists in failing to distinguish between non-pollable (where evidence accumulated over many years holds sway, and values have no place) and pollable questions (where values can have as big a part to play as the science).
When I go to see a film with my wife, we always stick around for the credits, and the list has got longer and longer over the years … Director, Producer, Cinematographer, Stuntman, Grips, Special Effects … and we’ve only just started. Five minutes later and we are still watching the credits! There is something admirable about this respect for the different contributions made to the end product. The degree of differentiation of competence in a film’s credits is something that few other projects can match.
Now imagine the film reel for a typical IT project … Project Manager, Business Analyst, Systems Architect, Coder, Tester and we’re almost done, get your coat. Here, there is the opposite extreme; a complete failure to identify, recognise and document the different competencies that surely must exist in something as complex as a software project. Why is this?
For many, the key role on this very short credits list is the ‘coder’. There is this zeitgeist of the coders – a modern day priesthood – that conflates their role with every other conceivable role that could or should exist on the roll of honour.
A good analogy for this would be the small-scale general builder. They imagine they can perform any skill: they can fit a waterproof membrane on a flat roof; they can repair the leadwork around the chimney; they can mend the lime mortar on that Cotswold stone property. Of course, each of these requires deep knowledge and experience of the materials, tools and methods needed to plan and execute the job right. A generalist will overestimate their abilities and underestimate the difficulties, and so they will always make mistakes.
The all-purpose ‘coder’ is no different, but has become the touchstone for our digital renaissance. ‘Coding’ is the skill that trumps all others in the minds of the commentariat.
Politicians, always keen to jump on the next bandwagon, have for some years now been falling over themselves to extol the virtues of coding as a skill that should be promoted in schools, in order to advance the economy. Everyone talks about it, imagining it offers a kind of holy grail for growing the digital economy. But can it be true? Is coding really the path to wealth and glory, for our children and our economy?
Forgetting for a moment that coding is just one of the skills required on a longer list of credits, why do we all need to become one?
Not everyone is an automotive engineer, even though cars are ubiquitous, so why would driving a car mean we all have to be able to design and build one? Surely only a few of us need that skill. In fact, whilst cars – in the days when we called them old bangers – did require a lot of roadside fixing, they are now so good we are discouraged from tinkering with them at all. We the consumers have become de-skilled, while the cars have become super-skilled.
But apparently, every kid now needs to be able to code, because we all use Apps. Of course, it’s nonsense, for much the same reasons it is nonsense that all car drivers need to be automotive engineers. And as we decarbonise our economy Electric Vehicles will take over, placing many of the automotive skills in the dustbin. Battery engineers anyone?
So why is this even worth discussing in the context of the knowledge economy? We do need to understand if coding has any role in the management of our information and knowledge, and if not, what are the skills we require. We need to know how many engineers are required, and crucially, what type of engineers.
But let’s stick with ‘coding’ for a little while longer. I would like to take you back to the very birth of computing, to deconstruct the word ‘coding’ and place it in context. The word originates from the time when programming a computer meant knowing the very basic operations expressed as ‘machine code’ – move a byte to this memory location, add these two bytes, shift everything left by 2 bits – which was completely indecipherable to the uninitiated. It also had a serious drawback, in that a program would have to be re-written to run on another machine, with its own particular machine code. Since computers were evolving fast, and software needed to be migrated from old to new machines, this was clearly problematic.
Grace Hopper came up with the idea of a compiler in 1952, quite early in the development of computers. Programs would then be written in a machine-agnostic ‘high-level language’ (designed to be readable, almost like a natural language, but with a simple syntax to allow logic to be expressed … If (A = B) Then [do-this] Else [do-that]). A compiler on a machine would take a program written in a high-level language and ‘compile’ it into the machine code that could run on that machine. The same program could thereby run on all machines.
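To give a feel for what a high-level language buys you, here is that conditional written out in a modern language (Python, purely as an illustration; it stands in for the FORTRAN or COBOL of the day):

```python
# The conditional from the text -- If (A = B) Then [do-this] Else [do-that] --
# expressed in a high-level language. The same source text runs unchanged on
# any machine with a Python implementation: exactly the machine-independence
# that Hopper's compiler idea bought us.
def choose(a, b):
    if a == b:
        return "do-this"
    return "do-that"

print(choose(1, 1))  # -> do-this
print(choose(1, 2))  # -> do-that
```

The programmer never needs to know which machine code those lines are translated into on any given machine.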
In place of ‘coders’ writing programs in machine code, there were now ‘programmers’ doing this in high-level languages such as COBOL or FORTRAN (both of which were invented in the 1950s), and later ones as they evolved.
So why people still talk about ‘coders’ rather than ‘programmers’ is a mystery to me. Were it just an annoying misnomer, one could perhaps ignore it as an irritant, but it reveals a deeper and more serious misunderstanding.
Coding … I mean Programming … is not enough, in so many ways. When the politician pictures a youthful ‘coder’ in their bedroom, they imagine the next billionaire creating an App that will revolutionize another area of our lives, like Amazon and Uber have done.
But it is by no means clear that programming, as currently understood, is the right skill for the knowledge economy. As Gottfried Sehringer wrote in an article “Should we really try to teach everyone to code?” in WIRED, even within the narrow context of building Apps:
“In order to empower everyone to build apps, we need to focus on bringing greater abstraction and automation to the app development process. We need to remove code — and all its complexity — from the equation.”
In other words, just as Grace Hopper saw the need to move from Coding to Programming, we need to move from Programming to something else. Let’s call it Composing: a visually interactive way to construct Apps with minimal need to write lines of text to express logical operations. Of course, just as Hopper faced resistance from the Coders, who poured scorn on the dumbing down of their art, the same will happen with the Programmers, who will claim it cannot be done.
But the world of digital is much greater than the creation of ‘Apps’. The vast majority of the time spent doing IT in this world is in implementing pre-built commercial packages. If one is implementing them as intended, then they are configured using quite simple configuration tools that aim to eliminate the need to do any programming at all. Ok, so someone in SAP or Oracle or elsewhere had to program the applications in the software package, but they are a relatively small population of technical staff when compared to the numbers who go out to implement these solutions in the field.
Of course it can all go wrong, and often does. I am thinking of a bank that was in trouble because their creaking old core banking system – written in COBOL decades ago by programmers in the bank – was no longer fit for purpose. Every time changes were made to financial legislation, such as tax, the system needed tweaking. But it was now a mess, and when one bug was fixed, another took its place.
So the company decided to implement an off-the-shelf package, which would do everything they needed, and more. The promise was the ability to become a really ‘agile’ bank. They would be able to introduce new products rapidly in response to market needs or to new legislation. It would take just a few weeks, rather than the 6 months it was currently taking. All they needed to do was some configuration of the package so that it would work just as they needed it to.
The big bosses approved the big budget then left everyone to it. They kept on being told everything was going well, and so much wanted to believe it that they failed to ask the right questions of the team. Well, guess what: it was a complete disaster. After 18 months, with everything running over time and over budget, what emerged? The departmental managers had insisted on keeping all the functionality from their beloved but creaking old system; the big consultancy was being paid for man-hours of programming, so did not seem to mind that the off-shored programmers were having to stretch and bend the new package out of shape to make it look like the old system; and the internal project management was so weak they were unable to call out the issues, even if they had fully understood them.
Instead of mere configuration, the implementation had large chunks of custom programming bolted onto the package, making it just as unstable and difficult to maintain as the old system. Worse still, the way it had been implemented made it very difficult to upgrade the package to the latest version (and so benefit from new features). There was now a large support bill just to keep the new behemoth alive.
In a sense, nothing had changed.
Far from ‘coding’ being the great advance for our economy, it is often, as in this sorry tale, a great drag on it, because this is how many large system implementations fail.
Schools, colleges and universities train everyone to ‘code’, so what will they do in the field? To a man with a hammer, every problem looks like a nail, even when a precision milling machine is the right tool for the job.
Shouldn’t the student be taught how to reframe their thinking, to use different skills appropriate to the task in hand? Today we have too many Coders and not enough Composers, and it seems everyone is to blame, because we are all seduced by this zeitgeist of the ‘coder’.
When we consider the actual skills needed to implement, say, a large, data-oriented software package – like that banking package – one finds that the activities needed include, for example: Requirements Analysis, Data Modelling, Project Management, Testing, Training, and yes, of course, Composing. Programming should be restricted to areas such as data interfaces to other systems, where it must be quarantined so as not to undermine the upgradeability of the software package that has been deployed.
So what are the skills required to define and deploy information management solutions, which are document-oriented, aimed at capturing, preserving and reusing the knowledge within an organization?
Let the credits roll: Project Manager; Information Strategist; Business Analyst; Process Architect; Information Architect; Taxonomist; Meta-Data Manager; Records Manager; Archivist; Document Management Expert; Document Designer; Data Visualizer; Package Configurer; Website Composer; … and not a Coder, or even a Programmer, in sight.
The vision of everyone becoming coders is not only the wrong answer to the question; it’s also the wrong question. The diversity of backgrounds needed to build a knowledge economy is very great. It is a world beyond ‘coding’, richer and more interesting, open to those with backgrounds in software of course, but also in science and the humanities. We need linguists as much as we need engineers; philosophers as much as we need data analysts; lawyers as much as we need graphic artists.
To build a true ‘knowledge economy’ worthy of the name, we need to differentiate and explore a much richer range of competencies – able to address all the issues we will face – than the narrow set by which information professionals are defined today.
(C) Richard W. Erskine, 2017
——
Note:
In this essay I am referring to business and institutional applications of information management. Of course there will be areas such as scientific research or military systems which will always require heavy-duty, specialist software engineering; but this is another world when compared to the vast need in institutions for repeatable solutions to common problems, where other skills are, I would argue, much more relevant and important to success.
The tragic fire at Grenfell Tower breaks one’s heart.
There was a question asked tonight on BBC’s Newsnight which amounted to:
How is it, in 21st Century UK, a rich and prosperous country despite everything, that a fire can engulf a tower block in the way it did last night?
This got me thinking.
People from the council, politicians and others talk of the need to ‘learn lessons’ in a way that makes one wonder if they really believe it.
Apparently, in the British Army they ban the use of such language, because we all know what it means. Another report. Another expert ignored. Another tragedy, another lesson unheard and ignored. A lesson demonstrated through a change in behaviour is great, but an aspirational statement that one will change at some indeterminate time in the future? No thanks.
We know that tragedies like this are multi-causal, so no single cause can explain it. But that doesn’t mean it was unforeseen. In this case there are factors that have been raised:
cladding that is not fire-retardant, but rather designed to make a building more aesthetically pleasing, with scant regard for how it undermines the underlying fire-safety of the original building;
a lack of any alarm to warn the residents of fire;
a lack of sprinklers in rooms or hallways (whereas in hotels this is standard practice; why the difference?);
a failure to implement a report by a Select Committee of Parliament published following a previous tower-block fire;
a building with only one staircase for escape;
building standards that are evidently not fit for purpose and widely criticised (for some time) as providing a very low bar for compliance;
an arm’s-length management organisation that refused to listen to the concerns of residents.
These and no doubt other factors compounded to either make the fire worse than it should have been, or the response to the fire by residents and rescue workers less effective than it could have been.
No doubt there will be questions about how it is that experts have known about the risks of the kind of cladding used, and have published papers on this, yet their knowledge has fallen on deaf ears. No one in authority has had a smidgen of intellectual curiosity or moral impulse to track it down using Google. We apparently need another report to rediscover what we already knew, which, who knows, maybe they will read this time.
No doubt there are questions to be asked of organisations like the British Standards Institution (BSI), which produces standards that, in this case, seem not to challenge the industry to reach the highest common factor for health and safety, but instead arrive at a lowest common denominator. They specify tests that are clearly not real-world tests. One is bound to ask if the BSI is fit for purpose, and whether its processes lead to an excessive chumminess with the industries it works with. It has a business model where it generates and sells standards and associated consultancy. Better not rock too many boats? No doubt the standards are “pragmatic”, in the business-speak synonym for barely adequate.
“Can anything be done about the worldwide legacy of buildings with combustible cored composite panels? Unless something radical is done, such as national retro-fitting subsidy schemes, it seems inevitable that there will be further fires involving aluminium-faced polyethylene core panels. Nightmare scenarios include multiple-fatality building-engulfing fires as in China, or given the proximity of towers in some districts, the ignition of neighbouring buildings’ cladding from an external cladding fire, or disintegrated burning panels igniting the roofs of lower buildings adjacent.
It is difficult to envisage owners voluntarily stripping off entire existing aluminium composite panel facades and replacing them with Fire Code-compliant cladding panels, as the cost would be prohibitive. Partial replacement with barrier bands of fire resistant panels has been suggested to stop fires spreading, [48] but given the flame heights at the Tamweel, Torch and The Address, such barrier bands would have to be substantially large. The works necessary to provide these barriers would involve much of the scaffolding and associated costs of full replacement.
It seems inevitable that insurers will differentiate between buildings with and without combustible aluminium composite panels and will charge higher premiums for higher risks. One or two more fires, or a fatal fire, could lead to insurance cover being refused if the risk is considered excessive. Insurance issues, bad publicity and loss of property value might then make retro-fitting external cladding a viable option in commercial, as well as fire safety terms.”
But despite all these unlearned lessons, there is something far more insidious at work here.
The sneering right-wing commentators like Richard Littlejohn of the Daily Fail have waged a campaign for many years against what they claim is an overweening attempt by the liberal elite to protect us from ourselves, which goes under the catchy title of “elf ’n safety” (snigger, snigger, sneer). Imagine …
Poor Johnny can’t even go diving off some rocks without someone doing a bloody risk assessment, then someone else has to hold a flag.
Stuff and nonsense – in my day we used to ski down black runs blindfolded. Never did us any harm.
You get the picture.
I remember once doing a study for the HSE (Health & Safety Executive) back in the 90s, and some of the horror stories of what used to happen in industries like farming and chemicals would make your hair stand on end.
And of course deaths and injury in these and other industries have fallen dramatically in the last few decades, thanks to organisations like the HSE. Far from hurting productivity, it has helped it, by enhancing efficiency and professionalism. In some industries it even drives innovation, as with the noise regulations for aircraft.
And even in the more parochial area of school trips, there was plenty of evidence that just a little bit of prior planning might well prevent poor performance (and injury).
But no, to Richard Littlejohn and his ilk, the “world has gone mad”.
Too often the bureaucrats seem to have bought into – maybe unconsciously – this background noise of derision towards health and safety. They feel inclined to dismiss the concerns raised by experts, or to ride roughshod over citizens’ concerns.
What do they know? Business must go on.
And once again we have the chumminess effect: councillors too close to developers, and lacking the critical faculties to ask searching questions, or even obvious ones.
For example, one might have imagined a councillor asking the questions …
“This cladding we plan to use … is it anything like that used on the tower block that went up in flames in Dubai? Have we assessed the risks? Can we assure the tenants we have investigated this, and it’s OK?”
There is good box-ticking (in the cockpit of an aeroplane) and the bad kind. The good kind is used by engineers, pilots, surgeons, school-teachers and others who are skilled in their respective arts.
The bad kind is used by bureaucrats wanting to cover their arses. We heard some of this last night on Newsnight “we got the design signed off”, “we followed the standards”, etc.
Where is the imagination, the critical thinking, the challenging of lazy assumptions?
And most importantly, where is the answering of tenants’ questions and concerns, and taking health and safety seriously as the number one requirement, not as an afterthought?
But risk assessment planning and execution is incessantly mocked by the sneering, curled lip brigade who inhabit the Daily Mail, Daily Telegraph and other right wing denigrators of “elf ’n safety”.
This has created a culture of jocular disregard for safety.
Try this. Go to a cafe with a few friends and ask “shall we have a chat about health and safety?”. I bet you that they will – whatever their political views – either laugh or roll their eyes.
Well, maybe not any more. Maybe they will feel suitably chastised, for a while at least, and stop their lazy sneering.
The champion sneerers have been successful through their drip, drip of cherry-picked stories or outright myths; their project has had an insidious effect, and has done its worst in undermining respect for health and safety.
But you see, it is not really health and safety that they have in their sights. It’s just the easy to mock first hurdle in a wider programme.
David Davis, the Secretary of State for Exiting the European Union, claims not to know the difference between a ‘soft’ Brexit and a hard one.
Well, here’s a guide, David.
A hard Brexit is one where we have a bonfire of regulations; where we have no truck with experts who advise us on the risks of polyethylene-cored cladding or excess carbon dioxide in our atmosphere; where ‘risk assessment’ is a joke we have down the club; where the little people enjoy the fruits of ‘trickle-down’ economics in a thriving Britain, free of (allegedly) overweening regulation.
But the British have made it clear they do not want a hard Brexit.
I hope and trust that the time is over for the sneering, arrogant advocates of de-regulation, and their puerile and dangerous disregard for people’s health and safety.
Whether in bringing forward and implementing effective measures to prevent another terrible fire like that at Grenfell Tower, or in all the other areas of life and work in the UK that are important for a safe and secure future, the time to take experts and regulations seriously is now, more than ever.
There seems to be a lot of discussion about an apparently simple question:
Can science be ‘certain’ about, well, anything?
The answer, of course, is no: 100% certainty is never on offer. But if that meant not doing anything – not building a bridge; not releasing a new drug; not taking off for the flight to New York; not flying a spacecraft to Saturn; not vaccinating the whole world against polio; not taking action to decarbonise our energy supply; and so on – then this lack of 100% certainty might totally debilitate a modern society, frozen with doubt and so unable to act.
But of course, we do not stop implementing solutions based on our current best knowledge of nature and the world, however limited it might be. We make judgments. We assess risks. We weigh the evidence. We act.
I think scientists often fall into the trap of answering a quite different question:
Do we have a complete and all encompassing theory of the world (or at least, ‘this’ bit of the world, say how black holes work or how evolution played out)?
And everyone will rush defensively to the obvious answer, “no”. Why? Because we can always learn more, we can always improve, and indeed sometimes – although surprisingly rarely – we can make radical departures from received bodies of knowledge.
We are almost 100% certain of the ‘Second Law of Thermodynamics’ and Darwin’s ‘Evolution by Natural Selection’, but almost everything else is of a lower order.
But even when we do make radical departures, it doesn’t always mean a complete eradication of prior knowledge. It does when we move from superstition, religious dogma, witch-doctoring and magical theories of illness to the germ theory of disease and a modern understanding of biology, because people get cured, and ignorance is vanquished.
But take Newtonian mechanics. This remains valid for the not too small (quantum mechanical) and not too massive or fast (relativistic) domains of nature, and so remains a perfectly good approximation for understanding snooker balls, the motion of the solar system, and even the motion of fluids.
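That claim can even be quantified. A minimal sketch (illustrative numbers only): the relativistic correction factor γ = 1/√(1 − v²/c²), which Newtonian mechanics treats as exactly 1, really is indistinguishable from 1 at everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition)

def lorentz_gamma(v_mps: float) -> float:
    """Relativistic correction factor; Newtonian mechanics assumes this is 1."""
    return 1.0 / math.sqrt(1.0 - (v_mps / C) ** 2)

# A car at motorway speed (30 m/s): the correction is of order 10^-15,
# far below anything measurable with everyday apparatus -- which is why
# Newton remains a perfectly good approximation for snooker balls and planets.
print(lorentz_gamma(30.0))
```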
As Helen Czerski describes in her book Storm In A Teacup, the physics of the everyday covers many interesting and complex phenomena.
In the following Figure, from her entertaining TEDxManchester talk The fascinating physics of everyday life, she shows how the physics of the everyday applies over a huge range of scales (in time and space), bracketed between the exotic worlds of the extremely small (quantum mechanics) and extremely large (general relativity) which tend to dominate our cultural perceptions of physics today.
Want to build a bridge, or build a solar system, or understand Saturn’s rings? Move over Schrodinger and Einstein, come on board Newton!
And yes, if you want to understand the interaction of molecules? Thank you Schrodinger.
Want to predict gravitational waves from a distant galaxy where two neutron stars are colliding? Thank you Einstein.
That is why the oft promulgated narrative of science – the periodic obliteration of old theories to be replaced by new ones – is often not quite how things work in practice. Instead of a vision of a singular pyramid of knowledge that is torn down when someone of Einstein’s genius comes along and rips away its foundations, one instead sees new independent pyramids popping up in the desert of ignorance.
The old pyramids often remain, useful in their own limited ways. And when confronting a complex problem, such as climate change, we see a small army of pyramids working together to make sense of the world.
As one such ‘pyramid’, we have the long and tangled story of the ‘atom’ concept, a story that began with the ancient Greeks and has taken centuries to untangle. Building this pyramid – the one that represents our understanding of the atom – we follow many false trails as well as brilliant revelations. Dalton’s understanding of the uniqueness and differentiation of atoms was one such hard-fought revelation. There was the kinetic theory of gases that cemented the atomic/molecular role in the physical properties of matter: the microscopic behaviour giving rise to macroscopic properties such as temperature and pressure. Then there was the appreciation of the nuclear character and the electronic properties of atoms, leading ultimately to an appreciation of the fundamental reason for the structure of the periodic table, with a large dose of quantum theory thrown in. And then, with Chadwick’s discovery of the neutron, a resolution of the reason for isotopes’ very existence. Isotopes that, with the help of Urey’s brilliant insight, enabled diverse paleoclimatological applications that have brought glaciologists, chemists and atmospheric physicists together to track the progress of our climate and its forcing agents.
We can trace a similar story of how we came to be able to model the dynamical nature of our weather and climate. The bringing together of the dynamics of fluids, their thermodynamics, and much more.
Each brick in these pyramids starting as a question or conundrum and then leading to decades of research, publications, debate and resolutions, and yes, often many new questions.
Science never was and never will be a narrative of ignorance overcome overnight by the heroic brilliance of some hard-pressed crank-cum-genius. Galileo was no crank; neither was Newton, nor Einstein.
Even if our televisual thirst for instant gratification demands a science with instant answers, the reality is that the great majority of science is a long process of unfolding and developing the consequences of the fundamental principles, to see how these play out. Now, with the help of the computational facilities that are part of an extended laboratory (to add to the test tube, the spectrometer, x-ray diffraction, and so much more) we can see further and test ideas that were previously inaccessible to experimentation alone (this is true in all fields). Computers are the microscope of the 21st Century, as one molecular biologist has observed.
When we look at climate change we have a subject of undoubted complexity, a combination of many disciplines. Maybe for this reason, it was only in the late 1950s that these disparate disciplines recognised the need to come together: meteorology, glaciology, atmospheric chemistry, paleoclimatology, and much more. This convergence of disciplines ultimately led, 30 years later, to the formation of the IPCC in 1988.
At its most basic, global warming is trivial, and beyond any doubt: add more energy to a system (by adding more infra-red absorbing carbon dioxide to the atmosphere), and the system gets hotter (because, being knocked out of equilibrium, it will gain heat faster than it loses heat to space, until it reaches a new equilibrium). Anyone who has spent an evening getting a frying pan to the point where it is hot enough to fry a pancake (and many to follow) will appreciate the principle.
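The ‘add more energy’ step can even be put into rough numbers, using the widely cited simplified expression for carbon dioxide’s radiative forcing, ΔF ≈ 5.35 ln(C/C₀) W/m² (Myhre et al., 1998). This is a back-of-envelope sketch, not a climate model:

```python
import math

def co2_forcing_wm2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Approximate extra radiative forcing (W/m^2) from raising atmospheric
    CO2 from a pre-industrial baseline c0 to c, using the simplified
    logarithmic expression dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling pre-industrial CO2 (280 -> 560 ppm) adds roughly 3.7 W/m^2 of
# forcing -- the extra energy input that pushes the system towards a new,
# warmer equilibrium, just as with the frying pan.
print(round(co2_forcing_wm2(560.0), 2))  # -> 3.71
```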
Today, we have moved out of a pre-existing equilibrium and are warming fast, and have not yet reached a new equilibrium. That new equilibrium depends on how much more fossil fuels we burn. The choice now is between very serious and catastrophic.
The different threads of science that come together to create the ‘climate of consilience’ are diverse. They involve everything from the theory of isotopes; the understanding of Earth’s meteorological system; the nature of radiation and how different gases react with different types of radiation; the carbonate chemistry of the oceans; the dynamics of heat and moisture in the atmosphere based on Newtonian mechanics applied to fluids; and so much more.
Each of these threads has a well established body of knowledge in its own right, confirmed through published evidence and through their multiple successful applications.
In climate science these threads converge, and hence the term consilience.
So when did we know ‘for certain’ that global warming was real and is now happening?
Was it when Tyndall discovered in 1859 that carbon dioxide strongly absorbed infra-red radiation, whereas oxygen and nitrogen molecules did not? Did that prove that the world would warm dangerously in the future? No, but it did provide a key building block in our knowledge.
As did the findings of those that followed.
At each turn, there was always some doubt – something that suggested a ‘get out clause’, and scientists are by nature sceptical …
Surely the extra carbon dioxide added to the atmosphere by human activities would be absorbed by the vast oceans?
No, this was shown from the chemistry of the oceans to be wrong by the late 1950s, and thoroughly put to bed when sufficient time passed after 1958, when Charles Keeling started to accurately measure the concentration of carbon dioxide in the atmosphere. The ‘Keeling Curve’ rises inexorably.
Surely the carbon dioxide absorption of heat would become ‘saturated’ (unable to absorb any more heat) above a certain concentration.
No, this was raised in the early 20th Century but thoroughly refuted in the 1960s. Manabe & Wetherald’s paper in 1967 was the final nail in the coffin for those who pushed against the ‘carbon dioxide’ theory. To anyone literate in science, that argument was over in 1967.
But will the Earth system not respond in the way feared … won’t the extra heat be absorbed by the oceans?
Good news, bad news. Yes, 93% of the extra heat is indeed being absorbed by the oceans, but the remainder is more than enough to ensure that the glaciers are melting; the great ice sheets are losing ice mass (the losses winning out over any gains); seasons are being affected; sea levels are rising inexorably; and overall the surface temperature is rising. No need for computer models to tell us what is happening; it is there in front of us, for anyone who cares to look.
Many pour scorn on consensus in science.
They say that one right genius is better than 100 fools, which is a fine argument, except when uttered by a fool.
Even the genius has to publish, and fools never will or can, but shout from the sidelines and claim genius. All cranks think they are geniuses, whereas the converse is not true.
Einstein published, and had to undergo scrutiny. When the science community finally decided that Einstein was right, they did so because the integrity of the theory and the weight of evidence were sufficient. It was not a show of hands immediately after he published but, in a sense, it was a show of hands after years of work to interrogate and test his assertions.
It was consilience followed by consensus (that’s science), not consensus followed by consilience (that’s political dogma).
We are as certain that the Earth has warmed due to increases in greenhouse gases – principally carbon dioxide, arising from human activities – as we are of the effects of smoking on human health, or the benefits of vaccination, and much more. And we are reinforced in this view in part because of the impact that is already occurring (observations, not only theory).
The areas of doubt are there – how fast will the West Antarctic Ice Sheet melt? – but these are doubts in the particulars, not in the general proposition. Over 150 years of accumulated knowledge have led to this consilience, and this was, until recently, received wisdom amongst leaders of all political persuasions, as important and actionable knowledge.
The same is true of the multiple lines of enquiry that constitute the umbrella of disciplines we call ‘climate science’. Not a showing of hands, but a showing of published papers that have helped create this consilience of knowledge, and yes, a consensus of those skilled in their various arts.
It would be quicker to list the various areas of science that have not impacted on climate science than those that have.
In the two tables appended to the end of this essay, I have included:
Firstly, a timeline of selected discoveries and events over a long period – from 1600 to nearly the present – over which time either climate has been the topic or the underlying threads of science have been the topic. I have also included parallel events related to institutions such as the formation of meteorological organisations, to show both scientific and social developments on the same timeline.
Secondly, I have listed seminal papers in the recent history of the science (from 1800 onwards), with no doubt omissions that I apologise for in advance (comments welcome).
When running workshops on climate fluency I used a 5-metre-long roll – a handwritten version of the timeline – walking along it to refer to dates, personalities, stories and, of course, key publications. It went down very well (beats PowerPoint, for sure) …
All this has led to our current, robust, climate of consilience.
There was no rush to judgment, and no ideological bias.
It is time for the commentariat – those who are paid well to exercise their free speech in the comment sections of the media, at the New York Times, BBC, Daily Mail, or wherever – to study this history of the science, and basically, to understand why scientists are now as sure as they can be. And why they get frustrated with the spurious narrative of ‘the science is not yet in’.
If they attempted such arguments in relation to smoking, vaccination, germ theory or Newtonian mechanics, they would be laughed out of court.
The science of global warming is at least as robust as any of these, but the science community is not laughing … it’s deeply concerned at the woeful blindness of much of the media.
The science is well beyond being ‘in’; it is now part of a textbook body of knowledge. The consilience is robust and hence the consequent 97% consensus.
It’s time to act.
And if you, dear commentator, feel unable to act, at least write what is accurate, and avoid high school logical fallacies, or bullshit arguments about the nature of science.
Richard Erskine, 2nd May 2017
Amended on 17th July 2017 to include Tables as streamed Cloudup content (PDFs), due to inability of some readers to view the tables. Click on the arrow on bottom right of ‘frame’ to stream each document in turn, and there will then be an option to download the PDF file itself.
Amended 31st October 2017 to include a Figure I came across from Helen Czerski TED Talk, which helps illustrate a key point of the essay.
TABLE 1 – Timeline of Selected Discoveries and Events (since 1600)
TABLE 2 – Key Papers Related to Climate Science (since 1800)
Cherish not only those who you love, but that which you love. Yesterday I went with my wife on the March for Science in Bristol, the city where we fell in love many years ago. We were on one of over 600 marches globally, to express a love for the science that has brought us so much, and promises so much more.
We do not want in the future to find ourselves mournfully recalling the words of some great poet, words of regret at our careless disregard, our taking for granted –
“When to the session of sweet silent thought, I summon up remembrance of things past, I sigh the lack of many a thing I sought, And with old woes new wail my dear time’s waste….”
(Shakespeare, Sonnet 30)
Humanity needs more experts now than ever before, but it also needs poets and novelists too to find that voice, that will reach the hearts of those who will be hurt by the cynical disregard for truth, for evidence.
This is no longer the preserve of cranks; it now influences men (and it is mostly men) in power who attack the science of evolution, vaccination and climate change – science that has saved the lives of billions and promises to save the lives of billions more in the future. That is to say nothing of the more prosaic inability to live without the fruits of science (try having a no-science Friday).
That is why the over 600 cities that Marched for Science yesterday spoke with a true voice. Science is for everyone and we all benefit from its fruits, but just as few really know where their food comes from, we have become blind to the processes and creativity of the scientists who will bring us the next wonders, and the next solutions to the challenges we face. We the people, and scientists, must both now pledge to remedy our careless assumption that, without strong and systemic defences, the Enlightenment will prevail against the tide of ignorance that has reached the pinnacle of power.
We ignore these threats at our peril.
Let’s not regret being so careless that we allowed an opinionated, ideologically motivated few to use their positions of power to drown out the voices of reason.
Let us, most of all, not waste our dear, precious time.
. . .. o o O o o .. . .
Richard W. Erskine, essaysconcerning.com, 23rd April 2017
The speakers at the Bristol event were Professor Bruce Hood from the Bristol University’s School of Experimental Psychology; TV naturalist Chris Packham; science writer and scientist Dr Simon Singh; At-Bristol’s creative director Anna Starkey; and, scientist and writer Dr Suzi Gage.
When I go to the Netherlands I feel small next to men from that country; but then I am 3 inches shorter than the average Brit, and the average Dutchman is 2 inches taller than the average Brit. So I am seeing 5 inches of height difference in the crowd around me when surrounded by Dutch men. No wonder the effect I feel is much greater than what the average difference in height seems to be telling me on paper.
Averages are important. They help us determine if there is a real effect overall. Yes, men from the Netherlands are taller than men from Britain, and so my impressions are not merely anecdotal. They are real, and backed up by data.
If we want to know whether changes are occurring, averages help too, as they ensure we are not focusing on outliers, but on a statistically significant trend. That’s not to say that it is always easy to handle the data correctly or to separate different factors, but once this hard work is done, the science and statistics together can lead us to knowing important things, with confidence.
For example, we know that smoking causes lung cancer and that adding carbon dioxide into the atmosphere leads to increased global warming.
But, you might say, correlation doesn’t prove causation! Stated boldly like that, no it doesn’t. Work is required to establish the link.
Interestingly, we have known the fundamental physics of why carbon dioxide (CO2) is a causative agent for warming our atmosphere – not merely correlated – since as early as Tyndall’s experiments, which he started in 1859, and certainly no later than 1967, when Manabe & Wetherald’s seminal paper resolved some residual physics questions related to possible saturation of the infra-red absorption in the atmosphere and the co-related effect of water vapour. That’s almost 110 years of probing, questioning and checking. Not exactly a tendency on the part of scientists to rush to judgment! And in terms of the correlation being actually observed in our atmosphere, it was Guy Callendar in 1938 who first published a paper showing rising surface temperature linked to rising levels of CO2.
Whereas, in the case of lung cancer and cigarettes, correlation came first, not fundamental science. It required innovations in statistical methods to prove that it was not merely correlation but was indeed causation, even while the fundamental biological mechanisms were barely understood.
In any case, the science and statistics are always mutually supportive.
Average Global Warming
In the discussions on global warming, I have been struck, over the few years that I have been engaging with the subject, by how much air time is given to the rise in atmospheric temperature, averaged for the whole of the Earth’s surface – GMST (Global Mean Surface Temperature), as the experts call it. While it is a crucial measure, this can seem a very arcane discussion to the person in the street.
So far, it has risen by about 1 degree Centigrade (1°C) compared to the middle of the 19th Century.
There are regular twitter storms and blogs ‘debating’ a specific year, and last year’s El Niño caused a huge debate as to what it meant. As it turns out, the majority of the recent record warmth is due to man-made global warming, which turbo-charged an already strong El Niño event.
Anyone daring to take a look at the blogosphere or twitter will find climate scientists arguing with opinion formers ill equipped to ‘debate’ the science of climate change, or indeed, the science of anything.
What is the person in the street supposed to make of it? They probably think “this is not helping me – it is not answering the questions puzzling me – I can do without the aggro, thanks very much”.
To be fair, many scientists do spend a lot of time on outreach and in other kinds of science communications, and that is to be applauded. A personal favourite of mine is Katharine Hayhoe, who always brings an openness and sense of humility to her frequent science communications and discussions, but you sense also, a determined and focused strategy to back it up.
However, I often feel that the science ‘debate’ generally gets sucked into overly technical details, while basic, or one might say, simple questions remain unexplored, or perhaps assumed to be so obvious they don’t warrant discussion.
The poor person in the street might like to ask (but dare not for fear of being mocked or being overwhelmed with data), simply:
“Why should we worry about an average rise of 1°C temperature, it doesn’t seem that much, and with all the ups and downs in the temperature curve; the El Niño; the alleged pause; the 93% of extra heat going into the ocean I heard about … well, how can I really be sure that the surface of the Earth is getting warmer?”
There is a lot to unpick here and I think the whole question of ‘averages’ is part of the key to approaching why we should worry.
Unequivocally Warming World
Climate Scientists will often show graphs which include the observed and predicted annual temperature (GMST) over a period of 100 years or more.
Now, I ask, why do they do that?
Surely we have been told that in order to discern a climate change trend, it is crucial to look at the temperature averaged over a period of at least 10 years, and better still over a 30-year average?
In this way we smooth out all the ups and downs that result from the energy exchanges between the moving parts of the earth system, and from events such as volcanic eruptions or humans pumping less sulphur into the atmosphere from industry. We are interested in the overall trend, so we can see the climate change signal amongst the ‘noise’.
We also emphasise to people – witness “the Senator with a snowball” – that climate change is about averages and trends, as distinct from weather (which is about the here and now).
So this is why the curve I use – when asked “What is the evidence that the world is warming?” – is a 30-year smoothed curve (red line) such as the one shown below (which used the GISS tool):
The red line shows inexorable warming from early in the 20th Century, no ifs, no buts.
End of argument.
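For the technically curious, the smoothing behind such a curve is simple. Here is a minimal sketch using made-up synthetic anomalies (not the GISS data – the trend and noise levels are invented purely to illustrate the idea of a centred 30-year rolling mean):

```python
import random

random.seed(1)

years = list(range(1880, 2017))
# Fake anomalies: a slow warming trend plus year-to-year 'weather' noise.
annual = [0.008 * (y - 1880) - 0.4 + random.gauss(0, 0.1) for y in years]

def rolling_mean(values, window=30):
    """Centred rolling mean; returns None where the window is incomplete."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = i - half, i + half
        if lo < 0 or hi > len(values):
            out.append(None)
        else:
            out.append(sum(values[lo:hi]) / window)
    return out

smoothed = rolling_mean(annual)
# The smoothed series rises steadily even though individual years jump around.
```

The individual years wobble; the 30-year mean does not – which is exactly why the smoothed curve makes the trend unmissable.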
When I challenged a climate scientist on Twitter – why don’t we just show this graph, and not get pulled into silly arguments with a Daily Mail journalist or whoever? – I was told that annual changes are interesting and need to be understood.
Well sure, for climate scientists everything is interesting! They should absolutely try to answer the detailed questions, such as the contribution global warming made to the 2016 GMST. But to conflate that with the simpler and broader question does rather obscure the fundamental message for the curious but confused public who have not even reached base camp.
They may well conclude there is a ‘debate’ about global warming when there is none to be had.
There is debate amongst scientists about many things: regional impact and attribution; different feedback mechanisms and when they might kick in; models of the Antarctic ice sheet; etc. But not about rising GMST, because that is settled science, and given Tyndall et al, it would be incredible if it were not so; Nobel Prize winning incredible!
If one needs a double knock-out, then how about a triple or quadruple knock-out?
When we add the graphs showing sea level rise, loss of glaciers, mass loss from Greenland and Antarctica, and upper ocean temperature, we have multiple trend lines all pointing in one direction: A warming world. It ain’t rocket science.
We know the world has warmed – it is unequivocal.
Now if the proverbial drunk, duly floored, still decides to get up and wants to rerun the fight, maybe we should choose not to play his games!
So why do arguments about annual variability get so frequently aired on the blogosphere and twitter?
I don’t know, but I feel it is a massive own goal for science communication.
Surely the choice of audience needs to be the poor dazed and confused ‘person in the street’, not the obdurately ignorant opinion columnists (opinion being the operative word).
Why worry about a 1°C rise?
I want to address the question “Why worry about a 1°C rise (in global mean surface temperature)?”, and do so with the help of a dialogue. It is not a transcript, but along the lines of conversations I have had in the last year. In this dialogue, I am the ClimateCoach and I am in conversation with a Neighbour who is curious about climate change, but admits to being rather overwhelmed by it; they have got as far as reading the material above and accept that the world is warming.
Neighbour: Ok, so the world is warming, but I still don’t get why we should worry about a measly 1°C warming?
ClimateCoach: That’s an average, over the whole world, and there are big variations hidden in there. Firstly, two thirds of the surface of the planet is ocean, and so over land we are already talking about a mean surface temperature rise in excess of 1°C – about 1.5°C. That’s the first unwelcome news, the first kicker.
Neighbour: So, even if it is 5°C somewhere, I still don’t get it. Living in England I’d quite like a few more Mediterranean summers!
ClimateCoach: Ok, so let’s break this down (and I may just need to use some pictures). Firstly we have an increase in the mean, globally. But due to meteorological patterns there will be variations in temperature and also changes in precipitation patterns around the world, such as droughts in California and increased Monsoon rain in India. This regionality of the warming is the second kicker.
Here is an illustration of how the temperature increase looks regionally across the world.
Neighbour: Isn’t more rain good for Indian farmers?
ClimateCoach: Well, that depends on timing. The monsoon has started arriving late, and if it doesn’t arrive in time for certain crops, that has serious impacts. So the date or timing of impacts is the third kicker.
Here is an illustration.
Neighbour: I noticed earlier that the Arctic is warming the most. Is that a threat to us?
ClimateCoach: Depends what you mean by ‘us’. There is proportionally much greater warming in the Arctic, due to a long-predicted effect called ‘polar amplification’ – in places as much as 10°C of warming, as shown in this map of the Arctic. But what happens in the Arctic doesn’t stay in the Arctic.
Neighbour: I appreciate that a warming Arctic is bad for ecosystems in the Arctic – Polar Bears and so on – but why will that affect us?
Neighbour: But we’ve had very hot summers before, why would this be different?
ClimateCoach: It’s not about something qualitatively different (yet), but it is quantitatively. Very hot summers in Europe are now much more likely due to global warming, and that has real impacts. 70,000 people died in Europe during the 2003 heatwave. Let me show you an illustrative graph. Here is a simple distribution curve and it indicates a temperature at and above which (blue arrow) high impacts are expected, but have a low chance. Suppose this represents the situation in 1850.
Neighbour: Ok, so I understand the illustration … and?
ClimateCoach: So, look at what happens when we increase the average by just a little bit to a higher temperature, say, by 1°C to represent where we are today. The whole curve shifts right. The ‘onset of high impact’ temperature is fixed, but the area under the curve to the right of this has increased (the red area has increased), meaning a greater chance than before. This is the fourth kicker.
ClimateCoach: Exactly. We (humans) are loading the dice. As we add more CO2 to the atmosphere, we load the dice even more.
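The shifted-curve picture can be put in toy numbers. This sketch assumes an illustrative normal distribution of summer temperatures; the means, spread and threshold are invented purely for illustration, not real data – only the qualitative effect matters:

```python
import math

def tail_prob(mean, sd, threshold):
    """P(T > threshold) for a normal distribution."""
    z = (threshold - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

threshold = 35.0  # 'onset of high impact' temperature (fixed)
before = tail_prob(mean=30.0, sd=2.0, threshold=threshold)  # roughly '1850'
after = tail_prob(mean=31.0, sd=2.0, threshold=threshold)   # today, +1°C

# A small shift in the mean multiplies the chance of crossing the threshold.
print(f"before: {before:.4f}  after: {after:.4f}")
```

With these numbers a 1°C shift more than triples the chance of crossing the fixed threshold – the dice really do get loaded faster than the shift in the average suggests.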
Neighbour: Even so, we have learned to cope with very hot summers, haven’t we? If not, we can adapt, surely?
ClimateCoach: To an extent yes, and we’ll have to get better at it in the future. But consider plants and animals, or people who are vulnerable or have to work outside, like the millions of those from the Indian sub-continent who work in construction in the Middle East. It doesn’t take much (average) warming to make it impossible (for increasingly long periods) to work outside without heat exhaustion. And take plants. A recent paper in Nature Communications showed that crop yields in the USA would be very vulnerable to excessive heat.
Neighbour: Can’t the farmers adapt by having advanced irrigation systems? And didn’t I read somewhere that extra CO2 acts like a fertiliser for plants?
ClimateCoach: To a point, but what that research paper showed was that the warming effect wins out, especially as the period of excessive heat increases, and by the way the fertilisation effect has been overstated. The extended duration of the warming will overwhelm these and other ameliorating factors. This is the fifth kicker.
This can mean crop failures and hence impacts on prices of basic food commodities, even shortages as impacts increase over time.
Neighbour: And what if we get to 2°C? (meaning a 2°C GMST rise above pre-industrial)
ClimateCoach: Changes are not linear. Take the analogy of car speed and pedestrian fatalities: above 20 miles per hour the curve rises sharply, because the car’s energy grows with the square of its speed, and because of vulnerability thresholds in the human frame. Global warming will likewise cross thresholds for both natural and human systems, which have been in balance for a long time, so extremes get increasingly disruptive. Take an impact on a natural species or habitat: after one very bad year, there may be recovery in the following 5-10 years, which is ok if the frequency of very bad years is 1 in 25-50 years. But suppose very bad years come 1 in every 5 years? That would mean no time to recover. Nature is awash with non-linearities and thresholds like this.
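The car-speed analogy is easy to check with simple arithmetic. This sketch just illustrates the square law; the mass and speeds are arbitrary illustrative values:

```python
def kinetic_energy(mass_kg, speed_mph):
    """Kinetic energy in joules for a mass moving at a speed given in mph."""
    mps = speed_mph * 0.44704  # miles per hour -> metres per second
    return 0.5 * mass_kg * mps ** 2

car = 1500  # kg, a typical family car
e20 = kinetic_energy(car, 20)
e40 = kinetic_energy(car, 40)

# Doubling the speed quadruples the energy that must be absorbed in a
# collision - a linear change in the input, a non-linear change in the harm.
print(e40 / e20)  # -> 4.0
```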
Neighbour: Is that what is happening with the Great Barrier Reef – I heard something fleetingly on BBC Newsnight the other night?
ClimateCoach: I think that could be a very good example of what I mean. We should talk again soon. Bring friends. If they want some background, you might ask them to have a read of my piece Demystifying Global Warming & Its Implications, which is along the lines of a talk I give.
Putting it together for the person in the street.
I have explored one of many possible conversations I could have had. I am sure it could be improved upon, but I hope it illustrates the approach. We should be engaging those people (the majority of the population) who are curious about climate change but have not involved themselves so far, perhaps because they feel a little intimidated by the subject.
When they do ask for help, the first thing they need to understand is that indeed global warming is real, and is demonstrated by those average measures like GMST, and the other ones mentioned such as sea-level rise, ice sheet mass loss, and ocean temperature; not to mention the literally thousands of indicators from the natural world (as documented in the IPCC 5th Assessment Report).
There are also other long-term, unusual sources of evidence to add to this list, as Dr Ed Hawkins has discussed, such as the date at which the cherry blossom flowers in Kyoto, which is trending earlier and earlier. Examples such as these are in many ways easier for people to relate to.
Gardeners the world over can relate to evidence of cherry blossom, wine growers to impacts on wine growing regions in France, etc. These diverse and rich examples are in many ways the most powerful for a lay audience.
The numerous lines of evidence are overwhelming.
So averages are crucial, because they demonstrate a long-term trend.
When we do show GMST, let’s make sure we show the right curve. If the aim is to show unequivocal global warming at the surface, then why not show one that reflects the average over a rolling 30-year period – the ‘smoothed’ curve? This avoids getting into debates with ‘contrarians’ on the minutiae of annual variations, which can come across as both abstract and arcane, and puts people off.
This answers the first question people will be asking, simply: “Is the world warming?”. The short answer is “Unequivocally, yes it is”. And that is what the IPCC 5th Assessment Report concluded.
But averages are not the whole story.
There is the second but equally important question: “Why worry about a 1°C rise (in global mean surface temperature)?”
I suspect many people are too coy to ask such a simple question. I think it deserves an answer and the dialogue above tried to provide one.
Here and now, people and ecosystems experience weather, not climate change, and when it is an extreme event, the impacts are viscerally real in time and place, and are far from being apparently arcane debating points.
So while a GMST rise of 1°C sounds like nothing to the untutored reader, when translated into extreme weather events, it can be highly significant. The average has been magnified to yield a significant effect, as evidenced by the increasing chance of extreme events of different kinds, in different localities, which can increasingly be attributed to man-made global warming.
The kickers highlighted in the dialogue were:
Firstly, people live on land so experience a higher ‘GMST’ rise (this is not to discount the impacts on oceans);
Secondly, geographical and meteorological patterns mean that there are a wide range of regional variations;
Thirdly, the timing (or date) at which an impact is felt is critical for ecosystems and agriculture, and bad timing will magnify the effect greatly;
Fourthly, as the average increases, so does the chance of extremes. The dice are getting loaded, and as we increase CO2, we load the dice more.
Fifthly, the duration of an extreme event will overwhelm defences, and an extended duration can cross dangerous thresholds, moving from increasing harm into fatal impacts, such as crop failure.
I have put together a graphic to try to illustrate this sequence of kickers:
As noted on this graphic (which I used in some climate literacy workshops I ran recently), the same logic used for GMST can be applied to other seemingly ‘small’ changes in global averages such as rainfall, sea-level rise, ocean temperature and ocean acidification. To highlight just two of these other examples:
an average global sea-level rise translates into impacts such as extreme storm surges, damaging low-lying cities such as New York and Miami (as recently reported and discussed).
an average ocean temperature rise, translates into damage to coral reefs (two successive years of extreme events have caused serious damage to two thirds of the Great Barrier Reef, as a recent study has confirmed).
Even in the relatively benign context of the UK’s temperate climate, the Royal Horticultural Society (RHS), in a report just released, is advising gardeners on climate change impacts and adaptation. The instinctively conservative ‘middle England’ may yet wake up to the realities of climate change when it comes home to roost, and bodies such as the RHS remind them of the reasons why.
The impacts of man-made global warming are already with us, and it will only get worse.
How much worse depends on all of us.
Not such a stupid question
There was a very interesting event hosted by CSaP (Centre for Science and Policy) in Cambridge recently. It introduced some new work being done to bring together climate science and ‘big data analytics’. Dr Emily Schuckburgh’s talk looked precisely at the challenge of understanding local risks; the report of the talk included the following observation:
“Climate models can predict the impacts of climate change on global systems but they are not suitable for local systems. The data may have systematic biases and different models produce slightly different projections which sometimes differ from observed data. A significant element of uncertainty with these predictions is that they are based on our future reduction of emissions; the extent to which is yet unknown.
To better understand present and future climate risks we need to account for high impact but low probability events. Using more risk-based approaches which look at extremes and changes in certain climate thresholds may tell us how climate change will affect whole systems rather than individual climate variables and therefore, aid in decision making. Example studies using these methods have looked at the need for air conditioning in Cairo to cope with summer heatwaves and the subsequent impact on the Egyptian power network.”
This seems to be breaking new ground.
So maybe the proverbial ‘person in the street’ is right to ask stupid questions, because they turn out not to be so stupid after all.
Changing the Conversation
I assume that the person in the street is curious and has lots of questions; and I certainly don’t judge them based on what newspaper they read. That is my experience. We must try to anticipate and answer those questions, and as far as possible, face to face. We must expect simple questions, which aren’t so stupid after all.
We need to change the focus from the so-called ‘deniers’ or ‘contrarians’ – who soak up so much effort and time from hard pressed scientists – and devote more effort to informing the general public, by going back to the basics. By which I mean, not explaining ‘radiative transfer’ and using technical terms like ‘forcing’, ‘anomaly’, or ‘error’, but using plain English to answer those simple questions.
Those embarrassingly stupid questions that will occur to anyone who first encounters the subject of man-made global warming; the ones that don’t seem to get asked and so never get answered.
Maybe let’s start by going beyond averages.
No one will think you small for doing so, not even a Dutchman.
According to Megan McArdle in a Bloomberg View opinion piece we cannot trust computer models of the climate because economists have failed when they tried to model complex economic systems.
Leaving aside the fundamental fact that the ‘atoms’ of physics (molecules, humidity, etc.) are consistent in their behaviour, whereas the ‘atoms’ of economics (humans) are fickle and prone to ‘sentiment’, this is a failed form of denialism.
You do not have to be Champagne maker Taittinger investing in sparkling wine production in Kent (England), for example, to know that global warming is real, because there are thousands of scientifically observed and published indicators of a warming world. Most of these receive little attention in the media compared to the global average surface temperature (important though it is).
She repeats what I believe is a key confusion in her piece:
“This lesson from economics is essentially what the “lukewarmists” bring to discussions about climate change. They concede that all else equal, more carbon dioxide will cause the climate to warm. But, they say that warming is likely to be mild unless you use a model which assumes large positive feedback effects.”
Matt Ridley also often rails against the fact that the feedback from increased humidity turns a warming of 1°C (from doubling CO2 from pre-industrial levels) into closer to 3°C (the mean predicted level of warming).
This has nothing to do with the inherent complexity in the climate models as it is derived from basic physics (the Infra-Red spectra of CO2 and H2O; the Clausius–Clapeyron relation that determines the level of humidity when the atmosphere warms; some basics of radiative transfer; etc.). Indeed, it is possible to get to an answer on the basic physics with pencil and paper, and the advanced computer models come to broadly the same conclusion (what the models are increasingly attempting to do is to resolve more details on geographic scales, time scales and within different parts of the Earth system, such as that big block of ice called Antarctica).
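That pencil-and-paper estimate can indeed be reproduced in a few lines, using the standard simplified expression for CO2 radiative forcing, F ≈ 5.35 ln(C/C0) W/m², and a no-feedback climate sensitivity parameter of roughly 0.3°C per W/m² – both textbook approximations, used here only as a back-of-envelope sketch:

```python
import math

# Simplified forcing for a doubling of CO2: F = 5.35 * ln(2) W/m^2
forcing = 5.35 * math.log(2.0)       # ~3.7 W/m^2

# No-feedback (Planck response only) warming, at ~0.3 C per W/m^2
no_feedback_warming = 0.3 * forcing  # ~1.1 C before feedbacks

# Water-vapour and other feedbacks take this toward ~3 C, broadly in line
# with what the full models find.
print(round(forcing, 1), round(no_feedback_warming, 1))
```

The point is that the roughly 1°C no-feedback figure falls out of basic physics, with no complex model in sight; the feedbacks that amplify it rest on equally basic physics.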
But even in the unlikely event that Megan McArdle were to accept these two incontrovertible points (that the world is warming, and that the central feedback, from H2O, is not in any way compromised by some hinted-at issue of ‘complexity’), she might still respond with something like:
“oh, but we do rely on complex models to make predictions of the future and things are too chaotic for this to be reliable.”
Well, we have learned from many great minds like Ilya Prigogine that there is complex behaviour in simple systems (e.g. the orbit of Pluto appears on one level to follow simple Newtonian mechanics, but in addition has apparently random wobbles). One therefore needs to be careful in specifying at what level of order ‘chaotic behaviour’ exists. Pluto is both ordered and chaotic.
Whereas other systems that are complex (e.g. the swirling atmosphere of Jupiter) can display ‘emergent’ ordered behaviour (e.g. the great red spot). We see this all around us in the world, and ‘complexity theory’ is now a new branch of science addressing many phenomena that were otherwise inaccessible to pencil and paper: the computer is an essential tool in exploring these phenomena.
Complexity is therefore not in itself a reason for casting out a lazy slur against models, that predictability is impossible. There is often an ability to find order, at some level, in a system, however complex it is.
Yet, it can also be very simple.
At its most basic, adding energy to the climate system as we are doing by adding heat-trapping gases to the atmosphere, tends to warm things up, because of well established basic physics.
In a similar way, printing too much money in an economy tends to lead to inflation, despite the irreducible random factors in human nature.
It ain’t rocket science and you don’t need to be an expert in complexity theory to understand why we are a warming world.
Clive James is known as a man of letters and, in the UK at least, as an erudite and witty commentator on culture, for which he is widely respected. He has also been extremely courageous in sharing his thoughts on his terminal cancer, with his customary wit and flair.
For all these reasons it is sad that he has decided to become embroiled in climate change in the way he has. For sure he has the right to an opinion, but he seems to have muddied the art he loves, with the science that he clearly does not, and the result will satisfy neither discipline.
For those in broadcasting and the media, paid to express a view on anything and everything, it must be easy to develop a self-assurance that belies any lack of knowledge. We are now resigned to the almost daily stream of nonsense that those such as Melanie Philips and others produce, given free rein to fulminate in the press.
The poem reveals more about Clive James’ self-declared ignorance on climate change than it does about the scientists, and if there is a metre absent then it is surely in his poetry, not the predicted sea level rise.
Let’s unpick the poem.
“imminent catastrophe”
No self-respecting climate scientist has ever talked about “imminent” catastrophe. The timescales vary greatly depending on the impacts in question. Yes, there is a strong argument about how fast we need to stop emitting carbon dioxide, in order to avoid the medium to long term consequences. But that is a distinction lost on CJ.
“Not showing any signs of happening”
There are many signs and CJ must either be too lazy or too blinkered to find out about them. The receding mountain glaciers are not imminent, they are already well on their way, and there are many other signs, as illustrated in NASA’s ‘Vital Signs’.
“The ice at the North Pole should have gone”
A typical exaggerated straw-man statement, rather than an accurate reflection of the scientific position. The clear evidence is that the minimum in sea ice is on a downward trend. “The Arctic Ocean is expected to become essentially ice free in summer before mid-century”, says NASA (see Vital Signs above).
“Awkwardly lingering”
Yes it is … rather like those discredited contrarian memes, that CJ slavishly trots out. Not much creativity at work here I am afraid on his part.
“It seems no more than when we were young”
CJ’s anecdotal personal experience is worthless, like the claims of those who say smoking is safe because granny smoked 20 a day and lived to 90, so it must be ok. The disrupted weather systems are already bringing extremes – wetter winters and hotter summers, depending on the region. While ‘attribution’ can take us into the difficult area of probabilities, the dice are already slightly loaded towards more extreme weather, and the loading will increase as the world warms. The National Academy of Sciences has just reported on this (but once again, I am sure that CJ will not want his opinion to be confused by facts).
“Continuing to not go up by much”
Well, CJ might not be impressed by the sea level rise so far, but the projected rise is up to 1 metre by the end of the century, which would have a devastating impact on many countries and many cities situated near sea level. The long-term picture, over millennia, offers little solace, because elevated concentrations of carbon dioxide remain in the atmosphere for a very long time.
“sure collapse of the alarmist view”
A word of caution here from CJ regarding the sceptics who “lapse into oratory”, but he clearly shares the belief that those who warn of serious impacts of global warming should be labelled alarmist, while being affronted at the label denialist. Sauce for the goose is apparently not sauce for the gander.
He lazily conflates the science with those who at first sight may easily be cast in the mould of alarmist: those dreaded environmentalists. Let us assume for argument’s sake that some of those he objects to are shrill alarmists. Does that have any bearing on the veracity of the science? Of course not, yet he applies his broad brush to tar anyone who dares raise a concern.
Scientists for their part are often a rather quiet and thoughtful bunch. They often take years before publishing results, so they can check and re-check. But what are they to do about global warming? Keep quiet and they could be criticised for not raising the alarm; yet if they tell us about the worst prognostications in the calmest of voices, they will surely be accused of alarmism. A no-win situation.
It is rather easy for those like CJ, whose opinions are unencumbered by knowledge, to discount thousands of diligent scientists with an insulting and pejorative label.
“His death … motivates the doomsday fantasist”
Scientists such as Sagan have demonstrated a far less parochial view of the future than CJ. Boltzmann foresaw the heat death of the universe, and scientists routinely remind us of what tiny specks we humans are in the universe. It is CJ, not they, who needs reminding of how insignificant we all are.
Scientists show an amazing ability to combine deep knowledge that challenges our most basic assumptions about the world with a positive attitude to humanity. A combination of realism and optimism that is often inspiring.
The real fantasists here are those like CJ who imagine that they can stand in judgment over 200 years of cumulative scientific knowledge, by rubbishing all those men and women who have established the understanding we now have, including the now incontrovertible scientific evidence for global warming resulting from human activities.
It is sad that someone who knows and loves poetry should decide to adulterate his art with this hatchet job on another discipline, science, for which he has little empathy and even less knowledge, but feels qualified to insult with the poetic equivalent of a latter-day Margarita Pracatan.
Entertaining for some no doubt, but a rather sad reflection on CJ. He could have used a poem to provide a truly reflective and transcendent piece on the subject of climate change, but instead merely offered an opinion piece masquerading as art, clouded by contrarian myths.
We still love you Clive, but I really hope this poem is not your last.
(c) Richard Erskine, 2016
Note: If readers would like a presentation of a golden thread through the science, in plain English, then my essay Demystifying Global Warming & Its Implications aims to provide just that.
At this time of year, cynics and sceptics pour scorn on Santa and his faithful reindeer, the prancers and dancers of this festive time. The gauntlet is often laid down as follows. Santa will visit all those children who want presents from him – in about one billion homes – which he has to visit on Christmas Eve.
Thankfully, Fermilab published the calculations some years ago and proved that Santa, travelling at close to the speed of light, would have no problem covering the ground in 500 seconds, leaving a generous but fleeting 0.15 milliseconds per dwelling to wolf down some sherry and mince pies. We are of course assuming there is just one Santa, but please note that in Iceland they have 13 Santa Clauses, sons of a horrible mountain hag called Grýla (we leave the re-calculation as an exercise for the reader!).
So what about data? Let’s think not about boring networks and bandwidth, but something more fantastic: the whole of our digital universe.
The Guardian reported back in 2009 that “At 487bn gigabytes (GB), if the world’s rapidly expanding digital content were printed and bound into books it would form a stack that would stretch from Earth to Pluto 10 times.”
Assuming 500bn GB was being added every 18 months, the speed of the 2009 virtual stack of books was about 1000 kilometres per second. This is fast but well short of the speed of light, which is some 300 times this value.
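The arithmetic behind that speed is easy to check. A minimal sketch, using the Guardian figures above and an assumed mean Earth–Pluto distance of about 5.9 billion km (Pluto’s distance varies considerably, so depending on the value chosen the answer lands anywhere in the 1,000–1,300 km/s region — the same order of magnitude either way):

```python
# Back-of-envelope check of the 2009 "virtual stack" speed.
# Figures from the Guardian (2009); the Earth-Pluto distance is an
# assumption, since Pluto's orbital distance varies a great deal.

EARTH_PLUTO_KM = 5.9e9                  # assumed mean Earth-Pluto distance
STACK_2009_GB = 487e9                   # 487bn GB of digital content...
STACK_2009_KM = 10 * EARTH_PLUTO_KM     # ...printed, stretches 10x to Pluto

KM_PER_GB = STACK_2009_KM / STACK_2009_GB   # stack length per gigabyte

ADDED_GB = 500e9                        # ~500bn GB added...
PERIOD_S = 18 * 30 * 24 * 3600          # ...every 18 months, in seconds

speed_km_s = ADDED_GB * KM_PER_GB / PERIOD_S
print(f"Stack speed in 2009: ~{speed_km_s:,.0f} km/s")

SPEED_OF_LIGHT_KM_S = 299_792
print(f"Speed of light is ~{SPEED_OF_LIGHT_KM_S / speed_km_s:.0f}x faster")
```

So the stack was already moving at planetary-escape speeds in 2009, yet still a couple of hundred times slower than light.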
The rate of growth is not constant. It too is doubling every 18 months. It is no wonder this was characterised as the “expanding digital universe”. IDC’s fifth annual study on the digital universe published in June 2011 estimated that we had reached 1.8 trillion gigabytes. We are exploding according to plan!
Translated into a velocity, I have calculated that the exponentially accelerating virtual stack of books, reaching well beyond our solar system, will be travelling at more than the speed of light by 2018. Unlike Santa and crew, our ‘virtual stack’ does not have to comply with the special theory of relativity (Einstein, 1905).
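The crossing year can be sketched in a few lines. The key assumption is the doubling period: with the stack’s speed doubling every 18 months from its 2009 value, light speed arrives in the early 2020s, while the faster growth suggested by the later IDC figures pulls the crossing into the 2017–2018 region. The starting speed and both doubling periods below are assumptions drawn from the figures quoted above:

```python
# Rough sketch of when the accelerating virtual stack passes light
# speed. Assumes the stack's speed doubles along with the data growth
# rate; the result is very sensitive to the doubling period chosen.
import math

SPEED_2009_KM_S = 1300          # stack speed implied by the 2009 figures
SPEED_OF_LIGHT_KM_S = 299_792

def crossing_year(doubling_years: float, start_year: float = 2009.5) -> float:
    """Year in which the stack speed first exceeds the speed of light."""
    doublings = math.log2(SPEED_OF_LIGHT_KM_S / SPEED_2009_KM_S)
    return start_year + doublings * doubling_years

for T in (1.0, 1.5):  # annual doubling vs 18-month doubling
    print(f"doubling every {T:.1f} yr -> crosses c around {crossing_year(T):.0f}")
```

Either way, no physics is violated: the stack is virtual, so nothing material actually outruns a photon.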
So data will not only catch Santa, but accelerate well beyond him, if we carry on at this rate.
With some thought and some digital out-sourcing, maybe Santa can use this virtual stack as a delivery mechanism, and so create a little space in his busy schedule at this time of year to enjoy the mince pies and sherry at more leisure, and avoid indigestion.