The Curious Case of Heat Pumps in the UK

Heat pumps, whether air-source or ground-source, can and should be making a major contribution to decarbonising heating in the UK. Heating (both space heating and water heating) is a major contributor to our carbon footprint.

Heat pumps are now incredibly efficient – for every unit of electrical energy you put in, you get at least 3 units back in the form of heat energy (the pump compresses a refrigerant, which causes it to rise in temperature; two-century-old physics at work here). The process works sufficiently well even in UK winters.
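To make the efficiency claim concrete, here is a back-of-envelope sketch. The heat demand and electricity price below are illustrative assumptions, not figures from any particular home or tariff; the COP of 3 is simply the ‘at least 3 units back’ ratio mentioned above.

```python
# Back-of-envelope: heat pump vs direct electric heating.
# A COP (Coefficient of Performance) of 3 means 1 kWh of electricity
# delivers 3 kWh of heat. Real COPs vary with outdoor temperature.

def annual_heating_cost(heat_demand_kwh, cop, price_per_kwh):
    """Electricity cost of meeting a given annual heat demand."""
    electricity_needed = heat_demand_kwh / cop
    return electricity_needed * price_per_kwh

DEMAND = 12_000   # kWh of heat per year (assumed, illustrative)
PRICE = 0.15      # £ per kWh of electricity (assumed, illustrative)

direct_electric = annual_heating_cost(DEMAND, cop=1.0, price_per_kwh=PRICE)
heat_pump = annual_heating_cost(DEMAND, cop=3.0, price_per_kwh=PRICE)

print(f"Direct electric: £{direct_electric:.0f}/year")  # £1800/year
print(f"Heat pump (COP 3): £{heat_pump:.0f}/year")      # £600/year
```

The same ratio is why a cost on carbon would improve the payback further: the heat pump’s running cost is a third of the direct-electric equivalent for the same delivered heat.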

The pumps are now relatively quiet (think microwave level of noise). They can deliver good payback (even more so if there was a cost on carbon). They even work with older properties (countering another one of the many myths surrounding heat pumps).

I even heard Paul Lewis on BBC’s ‘Money Box’ (Radio 4) – clearly getting confused between heat pumps and geothermal energy – saying ‘oh, but you need to be in a certain part of the country to use them’ (or words to that effect).

We clearly need much more education out there to raise awareness of the potential of heat pumps.

When combined with solar (to provide some of the electricity), they are even better.

So why is the take-up of heat pumps still too slow? Why is the Government not pushing them like crazy (it is an emergency, right?)? Why are households, when replacing old boilers, still opting for gas?

When we had the AIDS crisis in the 1980s, the UK Government undertook a major health awareness campaign, as did other countries, and it largely succeeded. In an emergency, Governments tend to act in a way that ‘signals’ it is an emergency.

The UK Government is sending no such signals. Bland assurances about the commitment to reach net zero by 2050 are no substitute for action. In the arena of heat, where is the massive programme to up-skill plumbers and others? Where is the education programme to demystify heat pumps and promote their adoption?

And where is the joined up thinking?

The article below from Yorkshire Energy Systems, based on their extensive research and practical experience, suggests one reason: that EPCs (Energy Performance Certificates) issued for homes – which include recommended solutions – are biased against heat pumps.

The mismatch between what the Government is saying (that heat pumps are part of the decarbonisation solution) and what EPCs are advising suggests a clear lack of joined up thinking.

… and no sign that the Government really believes that urgent action is required.


Filed under Uncategorized

Increasing Engineering Complexity and the Role of Software

Two recent stories from the world of ‘big’ engineering got me thinking: the massive delays in the Crossrail Project and the fatal errors in the Boeing 737 Max, both of which seem to have been blighted by issues related to software.

Crossrail, prior to the announcement of delays and overspend, was being lauded as an exemplar of an on-time, on-budget complex project; a real feather in the cap for British engineering. There were documentaries celebrating the amazing care with which the tunnelling was done to avoid damage at the surface, using precise monitoring and accurately positioned webs of hydraulic grouting to stabilise the ground beneath buildings. Even big data was used to help interpret signals received from a 3D array of monitoring stations, to help to actively manage operations during tunnelling and construction. A truly awesome example of advanced engineering, on an epic scale.

The post-mortem has not yet been done on why the delays came so suddenly upon the project, although the finger is being pointed not at the physical construction, but the digital one. To operate the rail service there must be advanced control systems in place, and to ensure these operate safely, a huge number of tests need to be carried out ‘virtually’ in the first instance, to ensure safety is not compromised.

Software is something that the senior management of traditional engineering companies are uncomfortable with; in the old days you could hit a machine with a hammer, but not a virtual machine. They knew intuitively if someone told them nonsense within their chosen engineering discipline; for example, if a junior engineer planned to pour 1000 cubic metres of cement into a hole and believed it would be set in the morning. But if told that testing of a software sub-system will take 15 days, they wouldn’t have a clue as to whether this was realistic or not; they might even ask “can we push to get this done in 10 days?”.

In the world of software, when budgets and timelines press, the most dangerous word used in projects is ‘hope’. “We hope to be finished by the end of the month”; “we hope to have that bug fixed soon”; and so on. Testing is often the first victim of pressurised plans. Junior staff say “we hope to finish”, but by the time the message rises up through the management hierarchy to Board level, there is a confident “we will be finished” inserted into the PowerPoint. Anyone asking tough questions might be seen as slowing the project down when progress needs to be demonstrated.

You can blame the poor (software) engineer, but the real fault lies with the incurious senior management who seem to request an answer they want, rather than try to understand the reality on the ground.

The investigations of the Boeing 737 Max tragedy are also unresolved, but of course, everyone is focusing on the narrow question of the technical design issue related to a critical new feature. There is a much bigger issue at work here.

Arguably, Airbus pursued the ‘fly-by-wire’ approach much earlier than Boeing, whose culture has tended to resist over-automation of piloting. Active controls to overcome adverse events have now become part of the design of many modern aircraft, but the issue with the Boeing 737 Max seems to have been that this came along without much in the way of training; and the interaction between the automated controls and the human controls is at the heart of the problem. Was there also a lack of realistic human-centric testing to assess the safety of the combined automated/human control systems? We will no doubt learn this in due course.

Electronics is of course not new to the aerospace industry, but programmable software has grown in importance, and it increasingly seems that growing complexity – and the consequent growth in testing complexity – has perhaps overtaken the abilities of traditional engineering management systems. This is extending to almost every product or project – small and large – as the internet of everything emerges.

This takes me to a scribbled diagram I found in an old notebook – made on a train back in 2014, travelling to London, while I debated the issue of product complexity with a project director for a major engineering project. I have turned this into the Figure below.

[Figure: products plotted against two axes of complexity, ‘design complexity’ and ‘production automation complexity’]

There are two aspects of complexity identified for products: 

  • Firstly, the ‘design complexity’, which can be thought of as the number of components making up the product, but also the configurability and connectivity of those components. If printed on paper, you can think of how high the pile of paper would be that described every component, with its configuration and connections. This would apply to the physical aspects but also to the software; and all the implied test cases. There is a rapid escalation in complexity as we move from car to airliner to military platform.
  • Secondly, the ‘production automation complexity’, which represents the level of automation involved in delivering the required products. Cars, as they are now made, are seen as having the highest level of production automation complexity.

You can order a specific build of car, with desired ‘extras’, and colour, and then later see it travelling down the assembly line with over 50% of the tasks completely automated; the resulting product with potentially a nearly unique selection of options chosen by you. It is at the pinnacle of production automation complexity but it also has a significant level of design complexity, albeit well short of others shown in the figure. 

An aircraft carrier, by contrast, will in each case be significantly different from any other in existence (even when originally conceived as a copy of an existing model) – with changes being made even during its construction – so it does not score so high on ‘production automation complexity’. But in terms of ‘design complexity’ it is extremely high (there are only about 20 aircraft carriers in operation globally, and half of these are in the US Navy, which perhaps underlines the point).

As we add more software and greater automation, the complexity grows, and arguably, the physical frame of the product is the least complex part of the design or production process. 

I wonder: is there a gap between the actual complexity of the final products and an engineering culture that is still heavily weighted towards the physical elements – the bonnet of a car, the hull of a ship, the turbine of a jet engine – and is this gap widening as the software elements grow in scope and ambition?

Government Ministers, like senior managers, will be happy being photographed next to the wing of a new model of airliner – and talk earnestly about workers riveting steel – but what may be more pivotal to success is some software sub-system buried deep in millions of lines of ‘code’; no photo opportunities here.


As we move from traditional linear ‘deterministic’ programming to non-deterministic algorithms – other questions arise about the increasing role of software. 

Given incomplete, ambiguous or contradictory inputs the software must make a choice about how to act in real time. It may have to take a virtual vote between independently written algorithms. It cannot necessarily rely on supplementary data from external sources (“no, you are definitely nose diving not stalling!”), for system security reasons if not external data bandwidth reasons.

And so we continue to add further responsibility, onto the shoulders of the non-physical elements of the system.

Are Crossrail and the 737 Max representative of a widening gap, reflected in an inability of existing management structures to manage the complexity and associated risks of the software embedded in complex engineering products and projects? 

© Richard W. Erskine, 2019


Filed under Engineering Complexity, Essay, Uncategorized

Boris loves Corbyn

No not Jeremy; his brother.

For some years now Boris Johnson has channelled the crank theories of Piers Corbyn, who appeared in the 2007 film The Great Global Warming Swindle, which was shown to be ill-founded.

Rather like the myth that carrots helped RAF pilots see at night during WWII – such a great story that even today it is repeated and believed – the idea that changes in the Sun’s output are responsible for recent climate change is a similarly attractive myth, which keeps on being repeated.

The BBC had to apologise for Quentin Letts’ execrable hatchet job on the Met Office in 2015, which also featured Piers Corbyn.

The truth is that we know with a confidence unsurpassed in many fields of science what is causing global warming; it’s not the Sun; it’s not volcanoes; it’s not contrails. The IPCC’s 5th Assessment Report (2013) was clear that greenhouse gases (principally carbon dioxide) resulting from human activities are the overwhelming driver of global warming (see Figure 8.15).

So you might expect Boris Johnson, as a leading politician, to reference the IPCC (Intergovernmental Panel on Climate Change), which gathers, analyses and synthesises the published work of thousands of scientists with relevant expertise on behalf of the nations of the world.

Instead, he has referred to the “great physicist and meteorologist Piers Corbyn” (It’s snowing, and it really feels like the start of a mini ice age, Boris Johnson, Daily Telegraph, 20th January 2013). Piers Corbyn has no expertise in climate science and theories like his have been completely debunked in a paper published in the Proceedings of The Royal Society:

… the long-term changes in solar outputs, which have been postulated as drivers of climate change, have been in the direction opposite to that required to explain, or even contribute to, the observed rise in Earth’s global mean air surface temperature (GMAST) …

What is alarming is that in the face of this strong scientific evidence, some Internet sources with otherwise good reputations for accurate reporting can still give credence to ideas that are of no scientific merit. These are then readily relayed by other irresponsible parts of the media, and the public gain a fully incorrect impression of the status of the scientific debate.

“Solar change and climate: an update in the light of the current exceptional solar minimum”, Proceedings of The Royal Society A, Mike Lockwood, 2nd December 2009

So, for Boris Johnson to call himself an “empiricist” is, frankly, laughable.

He has also cozied up to neoliberal ‘think tanks’ implacably opposed to action on global warming. 

I think we can safely say that hitherto he has firmly placed himself in the DENIAL bucket (in the illustration below).

[Illustration: spectrum of positions on climate action – DENIAL, COMPLACENCY, URGENCY, GREEN RADICALISM]

He shares this perspective with other hard Brexiteers in the new Cabinet, who are itching to deregulate the UK economy, such as Jacob Rees-Mogg, and see action on global warming as a constraint on unregulated markets.

In his acceptance speech on becoming Prime Minister, Boris Johnson never mentioned climate change. But since then he has reiterated Theresa May’s Government’s commitment to net zero by 2050, and

Responding to concerns expressed by Shadow Treasury Minister Anneliese Dodds that he had not focused sufficiently on climate change in the initial statements outlining his priorities as Prime Minister, Johnson replied: “The House will know that we place the climate change agenda at the absolute core of what we are doing.”

(edie, 29th July 2019)

He went on to say

He said: “This party believes in the private sector-generated technology which will make that target attainable and deliver hundreds of thousands of jobs. That is the approach we should follow.” …

Predicting that the UK will “no longer” be contributing to climate change by 2050, Johnson said: “We will have led the world in delivering that net-zero target. We will be the home of electric vehicles—cars and even planes—powered by British-made battery technology, which is being developed right here, right now.”

(edie, 29th July 2019)

By imagining that industry alone (without any stated plans for an escalating tax on carbon) can somehow address the huge transformation required, on the timescale required, without concerted effort at every level of Government (top down and bottom up) and civil society, he remains disconnected from reality, let alone science.

Moving from DENIAL to COMPLACENCY is an advance for Boris – assuming for the moment this is not another flip-flopping of positions that he is famed for – but it is hardly the sign of the climate leadership required. We need a leadership that respects the science, and understands the policy implications and prescriptions required.

Did anyone in the house ask the Prime Minister if he accepts and will fully support the recommendation of the Climate Change Committee’s report Net Zero – The UK’s contribution to stopping global warming? 

They need to, because great words need to be turned into a plan of action, and every year we delay will make the transition more painful (it is already going to be painful enough, but they are not telling you that, are they?).

That will not be enough to meet the public’s concerns over the climate emergency, and increasingly, the public will be expecting leadership that has moved from COMPLACENCY to the URGENCY position.

Many see GREEN RADICALISM as now an unavoidable response to the COMPLACENCY in Whitehall.

If Boris Johnson fails to jettison his neoliberal friends and the crank science that is part of their tool-kit – friends who are trying (and have so far succeeded) to put the brakes on meaningful and urgent action – the longer-term political fall-out will make Brexit look like a tea party.

(c) Richard W. Erskine, July 2019


Filed under Uncategorized

The Climate Change Committee just failed to invent a time machine

These past two weeks have been such a momentous time for climate change in the UK it is hard to take in. My takes:

On 21st April, Polly Higgins, the lawyer who has spent a decade working towards establishing ecocide as a crime under international law, sadly died. At a meeting at Hawkwood Centre, Stroud, I heard the inspiring Gail Bradbrook speak of how Polly had given her strength in the formation of Extinction Rebellion. 

On 23rd April, Greta Thunberg spoke to British Parliamentarians with a clear message that ‘you did not act in time’, but with imagination and some ‘Cathedral thinking’ it is not too late to act (full text of speech here).

On 30th April, Extinction Rebellion met with the Environment Secretary Michael Gove, a small step but one that reflects the pressure that their actions (widely supported in the country) are having. Clare Farrell said the meeting “.. was less shit than I thought it would be, but only mildly”, but it’s a start.

On 1st May, the UK Parliament declared a climate emergency.

On 2nd May, the Committee on Climate Change (CCC), set up under the 2008 Climate Change Act, published its report “Net Zero – The UK’s contribution to stopping global warming”, advising the Government on how to reach net zero by 2050.

These are turbulent times. Emotions are stirring. Expectations are high. There is hope, but also fear.

The debate is now raging amongst advocates for climate action about whether the CCC’s report is adequate.

Let’s step back a moment.

The IPCC introduced the idea of a ‘carbon budget’, which is typically expressed in a form such as (see Note):

“we have an X% chance of avoiding a global mean surface temperature rise of Y degrees centigrade if our emissions pathway keeps carbon emissions below Z billion tonnes”
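To see how a budget of that form translates into a timescale, here is a back-of-envelope sketch. The figures used – a remaining budget of roughly 420 GtCO2 for a ~66% chance of staying below 1.5C, against annual emissions of roughly 42 GtCO2 – are approximate values of the sort the IPCC has published, used purely for illustration and not taken from this article.

```python
# Back-of-envelope: how many years of current emissions a carbon budget buys.
# Figures are approximate and assumed for illustration only.

def years_remaining(budget_gt, annual_emissions_gt):
    """Years until the budget is exhausted at constant emissions."""
    return budget_gt / annual_emissions_gt

BUDGET = 420.0     # GtCO2 remaining for ~66% chance of < 1.5C (assumed)
EMISSIONS = 42.0   # GtCO2 emitted per year (assumed)

print(f"{years_remaining(BUDGET, EMISSIONS):.0f} years at current rates")  # 10 years
```

A result of around a decade, under these assumptions, is exactly why the “12 years” framing discussed below took hold.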

The IPCC Special Report on 1.5C looked at how soon we might get to 1.5C, and at its impacts compared to 2C. As Carbon Brief summarised it:

At current rates, human-caused warming is adding around 0.2C to global average temperatures every decade. This is the result of both “past and ongoing emissions”, the report notes.

If this rate continues, the report projects that global average warming “is likely to reach 1.5C between 2030 and 2052”

Perhaps the most shocking and surprising aspect of this report was the difference in impacts between 1.5C and the hitherto international goal of 2C. The New York Times provided the most compelling, graphic summary of the change in impacts. Here are a few examples:

The percentage of the world’s population exposed to extreme heat jumps from 14% to 37%

Loss of insect species jumps from 6% to 18%

Coral reefs suffer “very frequent mass mortalities” in a 1.5C world, but “mostly disappear” in a 2C world.

So, in short, 1.5C is definitely worth fighting for.

In view of the potential to avoid losses, it is not unreasonable for Extinction Rebellion and others to frame this as “we’ve got 12 years”. The IPCC says it could be as early as 12 years, but it might be as late as 34 years. What would the Precautionary Principle say?

Well, 12 years of course.

But the time needed to move from our current worldwide emissions to net zero is a steep cliff. You’ve all seen the graph.


It seems impossibly steep. It would have been a difficult but relatively gentle incline if we’d started 30 years ago. Even starting in 2000 would not have been so bad. Every year since, the descent has become steeper. It is now a precipice.

It is not unreasonable to suggest it is impossibly steep.

It is not unreasonable to suggest we blew it; we messed up.

We have a near impossible task to prevent 1.5C.

I’m angry about this. You should be too.

I am not angry with some scientists or some committee for telling me so. That’s like being angry with a doctor who says you need to lose weight. Who is to blame: the messenger? Maybe I should have listened when they told me 10 years back.

So if the CCC has come to the view that the UK at least can get to net zero by 2050 that is an advance – the original goal in the Act was an 80% reduction by 2050 and they are saying we can do better, we can make it a 100% reduction.

Is it adequate?

Well, how can it ever be adequate in the fundamental sense of preventing human-induced impacts from our carbon emissions? They are already with us. Some thresholds are already crossed. Some locked-in additional warming is unavoidable.

Odds on, we will lose the Great Barrier Reef. Let’s not put that burden on a committee to do the impossible. We are all to blame for creating the precipice.

That makes me sad, furious, mournful, terrified, angry.

There is a saying that the best time to have started serious efforts to decarbonise the economy was 30 years ago, but the next best time is today.

Unfortunately, the CCC does not have access to a time machine.

Everyone is angry.

Some are angry at the CCC for not guaranteeing we stay below 1.5C, or even making it the central goal. 

Extinction Rebellion tweeted:

The advice of @theCCCuk to the UK government is a betrayal of current & future generations made all the more shocking coming just hours after UK MPs passed a motion to declare an environment & climate emergency. 

It is I think the target of 2050 that has angered activists. It should be remembered that 2050 was baked into the Climate Change Act (2008). It should be no surprise it features in the CCC’s latest report. The CCC is a statutory body. If we don’t like their terms of reference then it’s easy: we vote in a Government that will revise the 2008 Act. We haven’t yet achieved that.

Professor Julia Steinberger is no delayist (quite the opposite, she’s as radical as they come), and she has tweeted back as follows:

Ok, everyone, enough. I do need to get some work done around here.

(1) stop pretending you’ve read & digested the whole CCC net-zero report. It’s 277 pretty dense pages long. 

(2) there is a lot of good stuff & hard work making the numbers work there.

(3) Figuring out what it means for various sectors, work, finance, education, training, our daily lives & cities & local authorities and so on is going to take some thinking through.

(4) If you want a faster target, fine! I do too! Can you do it without being horrid to the authors and researchers who’ve worked like maniacs to try to get this much figured out? THEY WANT TO BE ON YOUR SIDE! 

(5) So read it, share it, reflect on it, and try to figure out what & how we can do a lot faster, and what & how we can accelerate the slower stuff.

Treat the CCC report as in reality an ambitious plan – it really is – in the face of the precipice, but also believe we can do better.

These two ideas are not mutually exclusive.

Maybe we do not believe that people can make the consumption changes that will make it possible to be more ambitious; goals that politicians might struggle to deliver.

Yet communities might decide – to hell with it – we can do this. Yes, we can do better.

Some are scornful of Extinction Rebellion for asking the impossible, but they are right to press for better. However, can we stop the in-fighting, which has undermined many important fights against dark forces in the past? Let’s not make that mistake again.

Can we all be a little more forgiving of each other, faced with our terrible situation?

We are between a rock and a hard place.

We should study the CCC report. Take it to our climate meetings in our towns, and halls, and discuss it. 

How can we help deliver this?

How can we do even better?

I for one will be taking the CCC report to the next meeting of the climate action group I help run.

I’m still mournful.

I’m still angry.

But I am also a problem solver who wants to make a difference.

Good work CCC.

Good work XR.

We are all in this together.

… and we don’t have a time machine, so we look forward.

Let not the best be the enemy of the good.

Let not the good be a reason for not striving for better, even while the best is a ship that has long sailed.

© Richard W. Erskine, 2019



Note: You pick an X and Y, and the IPCC will tell you how much we can emit (Z). The ‘X%’ is translated into precisely defined usages of terms such as ‘unlikely’, ‘likely’, ‘very likely’, etc. To say something is ‘likely’, the IPCC means it has a greater than 66% chance of happening.


Filed under Global Warming Solutions, Science in Society, Transition to Low Carbon, Uncategorized

No Magic Bullet for Climate Change

Matt McGrath, Environment Correspondent for BBC News, posted a short piece entitled A ‘magic bullet’ to capture carbon dioxide?

It was introduced as follows:

“CO2 is a powerful warming gas but there’s not a lot of it in the atmosphere – for every million particles of air, there are 410 of CO2.

The gas is helping to drive temperatures up around the world, but the comparatively low concentration means it is difficult to design efficient machines to remove it.

But a Canadian company, Carbon Engineering, believes it has found a solution.

Air is exposed to a chemical solution that concentrates the CO2. Further refinements mean the gas can be purified into a form that can be stored or utilised as a liquid fuel.”

The ‘magic bullet’ in the title is of course clickbait, because anyone who has spent any time looking at all the ways we need to reduce emissions or to draw down CO2 from the atmosphere will know that we need a wide range of solutions. There is no single ‘magic bullet’.

Not specifically commenting on this story, but in a related piece about so-called ‘Negative Emissions Technologies’ (NETs), Glen Peters highlights the scale of the challenge facing any type of NET, which aims to remove CO₂ from the atmosphere. 

To remove the excess CO₂, sufficient at least to keep below 2°C …

“essentially we need to build an industry that’s 3 to 4 times the size of the current oil & gas industry just to clean up our waste” (2nd April 2019)

The issue is one of both scale and timing. We need big interventions and we need them fast (or fast enough).

It would take time and considerable resources to scale up NETs, which are currently mostly still in their development phase, and so the immediate focus needs to be on other strategies including energy in the home, reduced consumption, rolling out renewables, changing diets, etc., for which the solutions are ready and waiting and just need a massive push from Governments, industry and civil society.

Glen Peters stresses that the first priority is emissions reductions, rather than capture, although capture will be needed in due course either using natural methods, or technological ones, or some combination. 

There are big questions hanging over NETs such as BECCS (Bio-Energy with Carbon Capture and Storage), which would require between 1 and 5 ‘Indias’ of land area to make the contribution needed. The continuing fertility of soils to grow plants for BECCS and competition for land-use for agriculture, are just two of the concerns raised.

The technology highlighted in the BBC piece is DAC (Direct Air Capture), which could – powered by renewables – have great potential and avoids land-use competition, but is energy intensive. As with BECCS, DAC used in sequestration mode would still need to overcome hurdles, such as the geological ones related to safely burying CO₂ in perpetuity (my emphasis).

My concerns with Carbon Engineering’s proposed application of DAC – for fuel to be used in transport – are as follows.

Firstly, road, rail and even shipping are being electrified, making fuel redundant. There is the competing hydrogen economy, which would use a fuel, but a non-carbon one. Either way, this will rapidly decarbonise these parts of transport. Since transport currently accounts for 25% of global emissions overall, this is a highly significant ‘quick win’ for the planet (within two or at most three decades).

Commercial aviation is 13% of transport’s carbon emissions, but is less easy to electrify – at the scale of airliners travelling long distance – because of the current energy density and weight of batteries (this could change in the future, as Professor Clare Grey explained during an episode of The Life Scientific).

Aviation is therefore just above 3% of global emissions (13% of 25%) from all sectors (albeit probably an increasing percentage). A development-stage technology focused on just 3% of global emissions can hardly be framed as a ‘magic bullet’ for the climate crisis.
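The arithmetic behind that ‘just above 3%’ figure, using the two shares quoted above:

```python
# Aviation's share of global emissions, from the shares quoted in the text.
transport_share = 0.25        # transport as a fraction of global emissions
aviation_of_transport = 0.13  # aviation as a fraction of transport emissions

aviation_global = transport_share * aviation_of_transport
print(aviation_global)  # 0.0325, i.e. 3.25% of global emissions
```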

Secondly, in terms of Government financing, would we focus it on decarbonising road, or decarbonising aviation? I suggest the former not the latter if it came down to a choice.

DAC may be great to invest some money in, as development phase technology, but the big bucks needed immediately, to make a huge dent in emissions, are in areas such as road sector. 

It is not a binary choice of course, but the issue with financing is, again, timing and scale. The many solutions we forge ahead with now must meet the test that they are proven (not futurism/delayism solutions like nuclear fusion), can be scaled fast, and will contribute significantly to carbon reductions while also helping to transition society in positive ways (as, for example, the solutions in Project Drawdown offer, with numerous ‘co-benefits’).

Finally, it is worth stressing that the focus for Carbon Engineering (and hence the BBC report) is on the capture of carbon dioxide, to be converted into hydrocarbons as fuel, for burning. This effectively recycles atmospheric carbon. It neither adds to, nor takes away, carbon dioxide through this cycle.

This therefore makes zero change to CO₂ in the atmosphere. It might be whimsically called Carbon Capture and re-Emission technology (CCE)! 

So I think it was wrong of the BBC piece to give the impression that the goal was ‘Carbon Capture and Storage’ (CCS), whose aim is to draw down CO₂.

It is confusing to conflate CCE and CCS!

Especially when neither are magic bullets.

(c) Richard W. Erskine, 2019


Filed under Uncategorized

‘Possibilities Everywhere’ for more BP Greenwash

If you say “I am cutting down on smoking” and it turns out that from 7,300 cigarettes per year over the last 10 years you have managed to reduce your consumption by 25 cigarettes per year over the last 4 years and now are at 7,200 per year, then yes, it is true, you are cutting down.
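Spelling out the arithmetic of that analogy:

```python
# The smoker's "cutting down", per the figures in the analogy above.
start = 7_300       # cigarettes per year, ten years ago
cut_per_year = 25   # annual reduction over the last four years
years_cutting = 4

now = start - cut_per_year * years_cutting
print(now)                                             # 7200
print(f"{(start - now) / start:.2%} total reduction")  # 1.37% total reduction
```

A reduction of under one and a half percent, after a decade: technically ‘cutting down’, but hardly honest.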

But are you being honest?

In fact, it is fair to say that, far from telling the truth, you are in a sense lying, or at least ‘dissembling’.


That is what BP is doing with its latest massive ‘Possibilities Everywhere’ public relations and media advertising campaign, which was “jointly created by Ogilvy New York and Purple Strategies, with the films directed by Diego Contreras through Reset (US) and Academy (UK). The global media agency is Mindshare.”, as Campaign reports.

In a YouTube video on the initiative, Lightsource BP craftily suggests it is seriously investing in solar energy – but don’t worry folks, if the sun goes in, there is plenty of gas as backup.

They want it both ways: claiming to be supporting renewables while continuing to push ahead with investments in fossil fuel discovery and production.

So let’s look at the BP Annual Report and Form 20-F 2017, and what do we find? Let’s follow the money.

The ongoing investment in upstream oil & gas development runs into many billions of dollars annually, which dwarfs the measly £300 million that Lightsource will be getting over three years by a factor of over 250.
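As a sanity check, the upstream spend implied by the factor-of-250 claim can be computed from the £300 million Lightsource figure; the result is an inference from those two numbers in the text, not a figure taken from BP’s accounts.

```python
# What the "factor of over 250" implies, given Lightsource's £300m over three years.
lightsource_total = 300e6   # £, over three years (quoted above)
factor = 250                # the multiple claimed above

implied_upstream = lightsource_total * factor
print(f"implied upstream spend: £{implied_upstream / 1e9:.0f}bn over the same period")  # £75bn
```

That is roughly £25bn a year, consistent with the ‘many billions of dollars annually’ characterisation.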

This is not a serious push for renewables. 

If they were serious they would have actual renewable energy generation (arising from their ‘investments’) as one of their Key Performance Indicators (KPIs) in their Annual Report. They don’t, because they don’t actually care, and they don’t expect their investors to care.

No, this is what BP cares about (from the same BP Annual Report) …

[Image: extract from the BP Annual Report, showing the value of fossil fuel reserves]

…. the value of their fossil fuel reserves. The more the better, because that has a huge influence on the share price.

In the Annual Report referenced above, BP states:

“Today, oil and gas account for almost 60% of all energy used. Even in a scenario that is consistent with the Paris goals of limiting warming to less than 2°C, oil and gas could provide around 40% of all energy used by 2040. So it’s essential that action is taken to reduce emissions from their production and use.

In a low carbon world, gas offers a much cleaner alternative to coal for power generation and a valuable back-up for renewables, for example when the sun and wind aren’t available. Gas also provides heat for industry and homes and fuel for trucks and ships.”

How do we decode this?

Well, what BP sees in a collapse of coal is a massive opportunity to grow oil & gas, but especially gas; they are not the only oil & gas company spotting the opportunity.

So they are not pushing energy storage for renewables; no, they are using intermittency as a messaging ploy to position gas as “a backup”. And while a fall from 60% to 40% of all energy might look like shrinking profits, for BP’s gas investments it is a growth business – and less renewables means more growth in that gas business. So don’t get too big for your boots, renewables – if we own you, we can keep you in your place. Maybe you can rule when we have dug the last hole, but don’t expect that any time soon.

No amount of tinkering with emissions from production facilities or more efficient end-use consumption will avoid the conclusion that the “transition” they talk of must be a whole lot more urgent than the – dare I use the metaphor – glacial pace which BP are demonstrating.

Maybe BP should take seriously 3 key learning points:

  • Firstly, we have run out of time to keep playing these games. Your fossil fuel industry has to be placed on an aggressive de-growth plan, not the growth one you envisage, if you take seriously the implications of the IPCC’s 1.5C Special Report.
  • Secondly, far from your not-so-subtle digs at renewables, it is possible to construct an energy regime based on renewables (that does address intermittency issues); try reading reports like Zero Carbon Britain: Rethinking the Future from the Centre for Alternative Technology.
  • Thirdly, your investors will not thank you if you continue to ignore the serious risks from a ‘carbon bubble’. Claiming a value for BP assets based on unburnable fossil fuels will catch you out sooner or later, and your shareholders, pensioners and many others won’t thank you for your complacency.

Dissembling in respect of your commitment to the transition – which you intend to drag out for as long as possible it seems – will fool no one, and certainly not a public increasingly concerned about the impacts of global warming (and, by the way, also the impacts of plastics – another of your gifts to Mother Earth).

We are out of time.

By investing seriously and urgently in solutions that demonstrate a real commitment to the transition, and in planning to leave a whole lotta reserves in the ground, you can earn the trust of the public.

Change your KPIs to show you have read and understood the science on global warming.

Then you can build a PR campaign that demonstrates honesty and earns trust.

Until then, please, no more #BPGreenwash.

(c) Richard W. Erskine, 2019


Filed under Transition to Low Carbon

Veganism is an answer to the climate crisis, despite what the critics say

How the world feeds itself while becoming carbon neutral within a few decades (see Note 1), and while also protecting biodiversity and respecting other planetary boundaries, is a hugely complex issue.

It is not helped by simplistic arguments on any side of the debate.

Food is much more complex than say, electricity generation or transport, because it brings together so many different interlocking threads, not least our different cultures and trading practices around the world; it cannot be glibly addressed through some technical silver bullet or indeed any single prescription.

Knepp Castle Estate have done some wonderful work in their experiment to rewild the Estate’s farm, and this has yielded some great results in promoting biodiversity there.

Although it seems perfectly possible to have rewilding without conflating it with meat production for human consumption, Knepp Castle Estate clearly see the two as twinned in their overall vision for the Estate.

It is therefore disappointing that Isabella Tree – who runs the Estate with her husband Sir Charles Burrell – decided that the way to counter what she believes are simplistic “exhortations” in favour of veganism is to use strawman arguments, which I will come to in a moment.

In her article “If you want to save the world, veganism isn’t the answer: Intensively farmed meat and dairy are a blight, but so are fields of soya and maize. There is another way” (Guardian, 25th August 2018), she offers a vision of meat produced on a rewilded farm as an alternative.

The article ends with a statement I think can be defended (even if I disagree with it):

“There’s no question we should all be eating far less meat, and calls for an end to high-carbon, polluting, unethical, intensive forms of grain-fed meat production are commendable. But if your concerns as a vegan are the environment, animal welfare and your own health, then it’s no longer possible to pretend that these are all met simply by giving up meat and dairy.”

The key words here are “simply by” because, of course, any diet raises a lot of questions about how food is produced, processed and transported. We all agree it is complicated. We can all agree that a goat farmer in the Himalayas cannot simply adopt the practices of a farmer in England’s green pastures. We need to respect cultural and geographic diversity.

Except her last sentence does not address crop production methods, but simply asserts:

“Counterintuitive as it may seem, adding the occasional organic, pasture-fed steak to your diet could be the right way to square the circle.”

The problem is that to feed the UK or feed the world, we need to know what this means in quantitative terms, and there is really no indication of what a balanced omnivorous diet would look like or how to scale up the Knepp Castle Estate experiment, even for the UK.

Today, the reality of the impact of the meat industry – both in ecological and climate terms – is pretty terrifying, as Friends of the Earth laid bare in their 2008 report “What’s feeding our food? – The environmental and social impacts of the livestock sector”.

We need alternatives, for sure, but any changes will take a long time to make a dent on a global scale. The world could simply follow the example of India, with its relatively low level of meat consumption, but any proposed system must be able to scale effectively.

Protein from livestock requires much greater land use, and also puts huge pressure on water resources, and as noted in the study ‘Redefining agricultural yields: from tonnes to people nourished per hectare’:

“… shifting the crop calories used for feed and other uses to direct human consumption could potentially feed an additional ∼4 billion people.”

Emily Cassidy et al, Environ. Res. Lett. 8 (2013) 034015

And if our goal is to address climate disruption as well as sustainable agriculture, the land will be in demand for other purposes: crops for human consumption; re-forestation; bio-energy crops; renewable energy assets; etc. 

Meat production, whether intensive or in a rewilded context, cannot wish away the basic fact that it is a relatively inefficient way of using land to produce calories.

The UK currently imports over 40% of its food, and on top of that imports soy and other crops for feed for livestock. Of the land we have in the UK, about 50% is given over to grassland for livestock, as illustrated in this Figure from the Zero Carbon Britain Report: Rethinking The Future:

[Figure: UK land use today, from Zero Carbon Britain: Rethinking The Future]

The Centre for Alternative Technology’s alternative, set out in the same report, is aimed at getting the UK to zero carbon: balancing all the sectors involved, including food production, while recognising that we need to fit everything required into the available land. They arrive at a radically different distribution of land-use:

[Figure: proposed UK land use in the Zero Carbon Britain scenario]

In their scenario, livestock are not eliminated but are radically reduced.

What is most disappointing about Isabella Tree’s piece in the Guardian is that she feels the need to use Strawman Arguments to support her case (which immediately suggests it has some holes in it):

Strawman argument #1

“Rather than being seduced by exhortations to eat more products made from industrially grown soya, maize and grains, we should be encouraging sustainable forms of meat and dairy production based on traditional rotational systems, permanent pasture and conservation grazing.”  

My Response: Well, since most of those crops are grown for animal consumption, that is another reason to release that land to grow sustainable crops (in soil-carbon-caring ways); for forests; for bio-energy crops; for human habitation; etc.  The net result of low-intensity meat production is that we would need to massively reduce meat production.

Strawman argument #2

“In the vegan equation, by contrast, the carbon cost of ploughing is rarely considered … up to 70% of the carbon in our cultivated soils has been lost to the atmosphere”

My Response: Untrue. Why else do we have the permaculture movement, low-till systems, and so on? And to stress again, the majority of cropland in the UK and US today is used to feed livestock. If we want to improve soil carbon there are many ways of doing it.

I could go on.

She implies that the proposed method of farming will make a big impact on soil carbon sequestration, and there is no doubt that soil plays a hugely important role in carbon sequestration, but this is an area which is very complex. It is reassuring that the article does not make outlandish claims (such as those made by Savory, see Note 2), but again, there is a lack of any estimates as to the extent to which the proposed farming practices would mitigate increases in greenhouse gas emissions. Plausibility arguments won’t cut it I’m afraid. 

For those interested in exploring all the questions touched on so far, and more besides – with the benefit of some science to back up claims – you could not do better than to look at a few of the excellent food research organisations in Oxford.

Isabella Tree acknowledges that we need to reduce meat consumption. No doubt she would agree that the sky-rocketing consumption of meat in China and globally is unsustainable. Here is the current picture:

[Figure: trends in global meat consumption, from Godfray et al. (2018)]

 And as Godfray et al. state in the paper from which I took this Figure:

“It is difficult to envisage how the world could supply a population of 10 billion or more people with the quantity of meat currently consumed in most high-income countries without substantial negative effects on environmental sustainability. “

Godfray et al., Science 361, 243 (2018), 20th July 2018

Yes, it is much more complicated than simply choosing one’s diet, and we must all take care to consider the processes and pathways by which we get our food and how land is used – whether we eat meat or not. 

But for many, veganism remains an increasingly obvious option to make an immediate dent in one’s carbon footprint, and it remains a perfectly justifiable choice, whether from an environmental, ethical or scientific standpoint. 

It is by no means clear that even as a portion of our weekly diet, rewilded meat will be the solution to the world’s environmental and sustainability challenges, or at least on the timescales required. Veganism can make an immediate impact.

In fact, without a whole lot more vegans on this planet, it is difficult to see how those who want to remain meat eaters can carry on doing so with a clear conscience, given current (as opposed to wished-for) farming practices.

In the future, meat eaters may have to pay a lot more to eat meat and even then give a big nod of thanks to vegans for making a space for them to do so.

If Isabella Tree’s article had been entitled “If you want to save the world, veganism isn’t the whole answer: Intensively farmed meat and dairy are a blight along with the fields of soya and maize they depend on. But there is a case for low levels of meat consumption.” … it would have been less catchy, but at least defensible.

Knepp Castle Estate are doing great work showing how to promote biodiversity on their farm, but as a model for feeding the world and preventing dangerous climate disruption by 2050 or earlier, they have failed to make a convincing case that they have a credible plan.

(c) Richard W. Erskine, 2019


Note 1

On our current emissions trajectory, the world “is likely to reach 1.5C between 2030 and 2052”. If we are to avoid a global mean surface temperature rise of 1.5C, net global CO2 emissions need to fall by about 45% from 2010 levels by 2030 and reach “net zero” by around 2050.  See Carbon Brief’s ‘In-depth Q&A: The IPCC’s special report on climate change at 1.5C’ for more details. The IPCC’s 1.5C report made it clear that the difference between a 1.5C world and a 2C world was very significant, and so every year counts. The sooner we can peak the atmospheric concentration of greenhouse gases (especially CO2, being long-lived) in the atmosphere, the better.

Note 2

Savory suggested that over a period of three or four decades you can draw down the whole of the anthropogenic amount that has accumulated (nearly 2000 gigatonnes of carbon dioxide), whereas a realistic assessment (e.g. ) suggests a figure of 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020–2050 timeframe.

FCRN explored Savory’s methods and claims, and found that despite decades of trying, he has not demonstrated that his methods work. Savory’s case is very weak, and he ends up (in his exchanges with FCRN) almost discounting science, saying his methods are not susceptible to scientific investigation.

In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn’t hold up either. Track down Gattinger et al.’s work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4 GtC/year – nowhere near annual anthropogenic emissions of 10 GtC/year – and if it cannot keep up with annual emissions, forget soaking up the many decades of historical emissions (the 50% of these that persists for a very long time in the atmosphere), which some are claiming is possible.
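The comparisons in this note can be sanity-checked with the figures quoted in the text:

```python
# Savory's implied drawdown versus a realistic assessment (figures as quoted
# in Note 2 above).
savory_claim = 2000     # GtCO2: accumulated anthropogenic CO2 Savory implies can be drawn down
realistic = 14          # GtCO2: realistic 2020-2050 soil drawdown estimate
print(round(savory_claim / realistic))   # ~143, i.e. "more than 100 times less"

# Gattinger et al.: average soil organic carbon uptake versus annual emissions.
soil_uptake = 0.4       # GtC per year
emissions = 10          # GtC per year
print(emissions / soil_uptake)           # 25.0: soil uptake is 1/25th of annual emissions
```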

I recommend Dr Tara Garnett‘s Blog-post: ‘Why eating grass-fed beef isn’t going to help fight climate change’, 3rd October 2017 – and if you need more, read the full paper referenced in the blog.



On the Nth Day of Christmas …

We had a seasonal pub lunch with neighbours, and my Christmas cracker included the question:

“How many gifts would you have if you received all the gifts in the song ‘The Twelve Days of Christmas’?”

The song indicates the following gifts I will receive on each day:

on the 1st day I’ll receive a partridge in a pear tree;

on the 2nd day, 2 turtle doves and a partridge in a pear tree;

on the 3rd day, 3 French hens, 2 turtle doves and a partridge in a pear tree;


So, by the twelfth day I will have received a total of:

12 x 1 partridges (each in a pear tree);

11 x 2 turtle doves;

10 x 3 French hens;

… etc; (until we get to)

1 x 12 drummers drumming.

So, the total number of gifts is:

(12×1) + (11×2) + (10×3) + … + (1×12)

which my abstemious wife very rapidly computed to be 364 gifts.

By which time and after a few glasses of wine, I was of course wanting a more general result, so I declared:

“What about the number of gifts on the Nth day of Christmas?”

My wife mumbled “here we go!”, and by then my pen and paper napkin were at the ready …

Assuming the general gifts were denoted g1, g2, g3, …, gN, then we’d end up with…

N x 1 of gift g1

(N-1) x 2 of gift g2

(N-2) x 3 of gift g3

… etc. until

1 x N of gift gN

Let’s call the total number of gifts arrived at as G(N). So as an example, we already know that G(12) = 364

In mathematical notation I can write this in a different way (see Note 1), and solve the equation to show that …

G(N) = (1/6) * N * (N+1) * (N+2)

Testing this equation for case of N=12 I get

G(12) = (1/6) * 12 * 13 * 14

           = 2 * 13 * 14

           = 364

Job done!
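The closed form can also be checked against a brute-force count; a short sketch:

```python
# Brute-force gift count: gift g_i is received (N - i + 1) times, in lots of i.
def gifts_brute(n):
    return sum((n - i + 1) * i for i in range(1, n + 1))

# The closed form derived above: G(N) = (1/6) * N * (N+1) * (N+2).
def gifts_formula(n):
    return n * (n + 1) * (n + 2) // 6

print(gifts_brute(12), gifts_formula(12))   # 364 364
assert all(gifts_brute(n) == gifts_formula(n) for n in range(1, 100))
```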

When I got home I wondered if there was a geometrical way of deriving this result, rather like the trick that Gauss used as a young boy when the teacher asked the class to add the whole numbers from 1 to 100 (see Note 2).

I rather like the visual proof which I can show for N=4 as:


which generally (for N rather than 4), and expressed algebraically, can be written as

N∑[i] = (N² – N)/2  + N

= (N²/2) – N/2 + N

= (N²/2) + N/2

= (1/2) * N * (N+1)

My question to myself was can we do a similar visual trick with the Nth Days of Christmas sum? (I say we, but without the genius Gauss to assist me!).

We have to go three dimensional now to build a picture of the number. The child’s blocks I could find were too few in number and we don’t have sugar cubes, but we do have veggie stock cubes! So, I created the following …

[Photo: stock cubes arranged as two mirror-image copies of G(3), with the column 1+2+3+4 between them]

The left hand portion represents 3×1 + 2×2 + 1×3 which is G(3)

The same number of blocks is in the right portion (in mirror image).

In the middle I have added 1+2+3+4 which is the familiar 4∑[i]

Put these all together and the picture is as follows:

[Photo: the combined stock-cube arrangement]

which is clearly 1² + 2² + 3² + 4², which is the familiar 4∑[i²]

That’s a nice pictorial solution of a kind.

So in algebraic terms that gives

2 G(3) + 4∑[i]  =  4∑[i²]

This gives me an algebraic solution that is not any simpler than the original solution I made on the napkin. The stock cubes give me:

G(N-1)  =  (1/2) * ( N∑[i²] – N∑[i] )

which can be solved (Note 3) to give

G(N) = (1/6) * N * (N+1) * (N+2)

as before.
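The stock-cube identity and the resulting formula can be verified numerically:

```python
# G(n) is the total number of gifts on the nth day (brute force).
def G(n):
    return sum((n - i + 1) * i for i in range(1, n + 1))

# Check the picture's identity 2*G(N-1) + sum(i) = sum(i^2), for i from 1 to N.
for n in range(2, 30):
    sum_i = sum(range(1, n + 1))
    sum_i2 = sum(i * i for i in range(1, n + 1))
    assert 2 * G(n - 1) + sum_i == sum_i2

print(G(3))   # 10, matching the stock-cube photo
```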

However, I felt I had failed in my quest to avoid algebra or at least a much simpler algebraic resolution. Ultimately I couldn’t find one, but the visualization is at least a great way to play with the number relationships.

At least I will be very quick with the answer if ever I am asked

“How many gifts would you have if you received all the gifts in the general song ‘The N Days of Christmas’?”

“Oh, that’s easy, it’s one sixth of N, times N plus one, times N plus two.”


Richard W. Erskine, 30th December 2018


Note 1

G(N) can be written as the following sum:

G(N) = N∑[(N – i + 1) * i]

where N∑[…] is shorthand for “the sum of the expression […] for i ranging from 1 to N”

Expanding the expression, I get

G(N) = ( (N+1) * N∑[i] )  –  N∑[i²]

Now, there are well known results that give us, firstly

N∑[i] = (1/2) * N * (N+1)

and secondly,

N∑[i²] = (1/6) * N * (N+1) * (2N + 1)

So, combining these I get,

G(N) = ((N+1) * (1/2) * N * (N+1) )   –  ((1/6) * N * (N+1) * (2N + 1))

Taking out a common factor (1/6) * N * (N+1), this becomes

G(N) = (1/6) * N * (N+1) * { 3*(N+1) – (2N + 1) }

Simplifying { 3*(N+1) – (2N + 1) } I get {N+2}, so

G(N) = (1/6) * N * (N+1) * (N+2)

Note 2

Gauss as a boy spotted a short-cut, which can be seen in the following picture:


The sum 1+2+3+4 is represented by the shaded blocks, and the unshaded blocks are also the same sum in reverse order. So, we can see

1+2+3+4 = (1/2) * 4 * (4+1) = 2 * 5 = 10

and in general

N∑[i] = (1/2) * N * (N+1)
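Gauss’s formula from the picture can be checked directly:

```python
# Two interlocking staircases of 1+2+...+N tile an N x (N+1) rectangle,
# so the sum is half that rectangle's area.
def gauss_sum(n):
    return n * (n + 1) // 2

assert gauss_sum(4) == 1 + 2 + 3 + 4          # the picture's case: 10
assert all(gauss_sum(n) == sum(range(1, n + 1)) for n in range(1, 200))
print(gauss_sum(100))   # 5050, the sum young Gauss was asked for
```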


Note 3

G(N-1) = (1/2) * ( N∑[i²] – N∑[i] )

= (1/2) * { { (1/6) * N * (N+1) * (2N + 1) } – {(1/2) * N * (N+1) } }

= (1/2) * (1/6) * N * (N+1) * { (2N + 1)  –  3 }

= (1/2) * (1/6) * N * (N+1) * { 2N  –  2 }

= (1/6) * N * (N+1) * { N – 1 }


Substituting N+1 for N then gives

G(N) = (1/6) * (N+1) * (N+2) * N

or, rearranging

G(N) = (1/6) * N * (N+1) * (N+2)

as before.



Is Shell serious about carbon emissions reductions?

Royal Dutch Shell plc, or Shell for short, have issued a statement, under pressure from institutional investors, on how they will contribute to achieving the Paris climate change commitments. They state:

“Shell fully supports the Paris Agreement and believes that society has the scientific and technical knowledge to achieve a world where global warming is limited to well below 2°C.” (Ref. 1)

That sounds pretty unequivocal, and Shell are spending a lot of money aiming to persuade us that they are indeed serious. Let’s take a look at their claims.

Reuters reported in March 2018 that:

“Shell, the world’s top trader of liquefied natural gas, currently produces around 3.7 million barrels of oil equivalent per day, of which roughly half is natural gas.” (Ref. 2)

That’s 1.35 billion barrels per year, of which 50% is natural gas. This equates to about 0.5 billion tonnes of CO2-equivalent per year (0.5 GtCO2e/yr for short) of end-use emissions from their products (Note 1).

Shell is claiming to take a lead on emissions reductions, but take a look at Shell’s own statement of ‘direct emissions’ (those resulting from the operation of their facilities):

“The direct greenhouse gas (GHG) emissions from facilities we operate were 73 million tonnes on a CO2-equivalent basis in 2017, … The indirect GHG emissions from the energy we purchased (electricity, heat and steam) were 12 million tonnes on a CO2-equivalent basis in 2017” (Ref. 7)

So Shell are focusing on these production-based emissions, totalling 85 million tonnes of CO2e in 2017, or 0.085 GtCO2e.

From the above figures, we see that their production-related emissions of CO2e are nearly 15% of the net CO2e resulting from production and end-use (see Note 2). This is by no means a trivial part of their net emissions, but it is no surprise that their marketing focuses on their production methods, not on the 85% coming from the end-use of their products – emissions they clearly cannot mitigate, except by not producing the products in the first place.
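These percentages follow directly from the figures above; a short sketch using the conversion factors from Note 1 (0.43 tonnes of CO2 per barrel of oil, and 75% of that per barrel-of-oil-equivalent of gas):

```python
# End-use emissions from Shell's production, using the article's figures.
barrels_per_day = 3.7e6                       # barrels of oil equivalent per day
barrels_per_year = barrels_per_day * 365      # ~1.35 billion boe per year
oil_tco2_per_bbl = 0.43                       # tonnes CO2 per barrel of oil
gas_tco2_per_boe = 0.75 * oil_tco2_per_bbl    # gas, per barrel of oil equivalent

end_use_gt = barrels_per_year * (0.5 * oil_tco2_per_bbl +
                                 0.5 * gas_tco2_per_boe) / 1e9
print(round(end_use_gt, 2))                   # ~0.51 GtCO2e/yr

# Production emissions (Shell's own 85 MtCO2e for 2017) as a share of the total.
production_gt = 0.085
share = production_gt / (production_gt + 0.5)
print(round(100 * share, 1))                  # ~14.5%, i.e. "nearly 15%"
```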

This explains why Shell, in the disclaimer to their statement to institutional investors, state:

“Also, in this statement we may refer to “Net Carbon Footprint” or “NCF”, which includes Shell’s carbon emissions from the production of our energy products, our suppliers’ carbon emissions in supplying energy for that production and our customers’ carbon emissions associated with their use of the energy products we sell. Shell only controls its own emissions but, to support society in achieving the Paris Agreement goals, we aim to help and influence such suppliers and consumers to likewise lower their emissions. The use of the terminology “Net Carbon Footprint” is for convenience only and not intended to suggest these emissions are those of Shell or its subsidiaries.” (Ref. 1)

The key words are “Shell only controls its own emissions”; Shell merely offers to “support society” in meeting the Paris Agreement goals.

Off the agenda of this statement is any suggestion of keeping fossil fuels in the ground or an acknowledgement of the devastating implications of the IPCC’s 1.5C special report.

They plan to boost natural gas extraction, tripling it by 2050 according to the Reuters report. Citing measures such as the use of CCS (Carbon Capture and Storage) leads them to state an aspiration to halve the ‘Net Carbon Footprint’ (which includes end-use emissions) by 2050. This may seem an ambitious and welcome commitment for a fossil fuel major, but it fails to acknowledge the urgency with which we must decarbonise energy, and relies on the same magical thinking – the massive scaling of CCS technologies by 2050 – that many policy-makers are prone to.

This is a self-administered license to carry on extracting and selling fossil fuels.

Shell have been flooding the media with reports of how they are reducing carbon emissions, and they fund events such as the annual New Scientist Live 2018, where they had a large stand in the middle of the exhibition hall – right next to BP’s stand offering, you guessed it, the same soothing words on emissions reductions. They will be back for more next year (Ref. 8).

If Shell are the trail-blazers amongst the fossil fuel majors, then what to expect from the laggards? 40% reduction by 2050, or 30%, or 20%, or less? What is the ambition of the industry as a whole, which last year was responsible for nearly 37 GtCO2 overall (Ref. 5)?

But they, like the other carbon majors and all the minors, are collectively in denial about the challenge we face, and the urgency required: net zero emissions by 2050 or earlier, not a 50% reduction.

Shell can, as they admit, only seriously impact the 15% (production emissions), not the 85% (end-use emissions), of their fossil fuel cake – when the real issue is that we need a radically shrinking cake, not the growing one we have today.

I regard it as distraction tactics to focus on production emissions, trying to deflect the discussion away from the calls to ‘keep it in the ground’.

Unfortunately for Shell and other gas majors, the science is showing we have run out of time.

‘Keep it in the ground, keep it in the ground’, the protestors cried at COP24.

They at least, will not be distracted by the latest greenwash from Shell and the others.

o o O o o


(c) Richard W. Erskine, 19th December 2018


  1. This assumes 0.43 metric tonnes of CO2 per barrel of oil (Ref. 3), using 75% of this value for natural gas (Ref. 4). The figure of 0.5 GtCO2e for Shell aligns with the figure shown in the CDP Carbon Majors Report 2017 (Ref. 6). Note also that 5,800 cubic feet of natural gas is equal (using an energy metric) to a barrel of oil equivalent.
  2. Add the 0.5 GtCO2e from the end-use burning of their products and we see that this 0.085 GtCO2e is nearly 15% of the total for which Shell is ultimately responsible. Note also that CO2e, or CO2-equivalent, includes the CO2 resulting from combustion as well as any leakage of methane, with the methane contribution converted into the equivalent amount (by its warming potential) of CO2.


  1. Joint Statement Between Institutional Investors on behalf of Climate Action 100+ and Royal Dutch Shell plc (Shell), 3rd December 2018
  2. “Shell’s gas production could be triple oil by 2050: CEO”, Ron Bousso, 7th March 2018, Reuters
  3. “Greenhouse Gases Equivalencies Calculator – Calculations and References”, US Environmental Protection Agency, EPA
  4. “Frequently Asked Questions”, US Energy Information Administration, EIA
  5. “Analysis: Global CO2 emissions set to rise 2% in 2017 after three-year ‘plateau’”, Zeke Hausfather, 13th November 2017, CarbonBrief
  6. The Carbon Majors Database: CDP Carbon Majors Report 2017
  7. Shell Sustainability Reporting and Performance Data / Greenhouse Gas Emissions
  8. New Scientist Live 2019, Exhibitors / Shell



12 years to Climate Armageddon?

The Guardian reported “We have 12 years to limit global warming, warns the UN”, and many others published similar stories, following the latest IPCC report.

Well not quite that simple, but let me explain.

I want to start with a quote from Risk, statistics and the media: David Spiegelhalter’s IPSO lecture:

“there are some fundamental difficulties with story-telling from data. Classic narratives have an emotional hit to the reader, they reveal a clear causal path, and have a neat conclusion. But science and statistics are not like that – they can seem impersonal, they don’t have a clear chain of causation, and their results are often ambiguous.”

In other words, when we convey facts through narrative we often seek certainty whereas paradoxically, scientists are the ones often having to grapple with uncertainty and ambiguity. In our popular imaginations, we might think that the reverse was true.

Yet on global warming, the irreducible uncertainty is increasingly concerning the when, not the if, of serious impacts. By debating the when (and trying to put a date on it), we are in danger of losing sight of the grindingly unavoidable fact that if a tsunami is heading your way, and you are on the beach, the exact ‘when’ is somewhat academic; the imperative is to run like heck to high ground!

We are already experiencing the impacts of global warming, thanks to a rise of about 1°C in global mean surface temperature (GMST). The impacts of man-made global warming are seen in thousands of places and contexts; just two examples – the rapid decline in the population of the European pied flycatcher, and the increasing severity of wildfires in California – illustrate how diverse these impacts can be.

So the questions being raised – how bad could it get and how soon – fit on a spectrum of possibilities. Our responses also fit on a spectrum, concerning how much work we are prepared to put in to limit the impacts. 

How much urgency are we prepared to put in to limiting the impacts or adapting to them, and will this be fair to everyone? Will it be fair to the energy poor of the UK, to rural Indians, to the flora and fauna already experiencing catastrophic losses (and set to escalate)?

If we miss the 1.5°C goal, can we limit it to 1.75°C? And if not 1.75°C then maybe 2°C, and if not 2°C then can we limit it to 2.5°C? The impacts are not ‘linearly related’ to temperature rise. There is an escalating level of impacts that ensue in areas such as heat stress, species loss, sea-level rise, crop yields and more, and there are ‘tipping points’ that can create nasty surprises at multiple stages on this rising, jagged curve.

The context for this latest report was the Paris Agreement – arising from the 21st Conference Of the Parties (COP) to the UN Framework Convention on Climate Change (UNFCCC) – held in Paris in December 2015. Hitherto, the UNFCCC had discussed policy aimed at ‘avoiding dangerous climate change’, which was deemed to mean a 2°C GMST rise. The UNFCCC in Paris was basing policy in part on the scientific input of the 5th Assessment Report of the IPCC (Intergovernmental Panel on Climate Change), published in 2013/2014. However, low-lying countries and those prone to the worst impacts of climate change requested an investigation into the feasibility of limiting the GMST rise to a more ambitious 1.5°C, and also into the benefits (in terms of reduced impacts) of 1.5°C as compared to 2°C.

The IPCC’s Summary for Policymakers of the 1.5C study includes the statement:

“Human activities are estimated to have caused approx. 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C. Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate. (high confidence)”

So, strictly speaking, it is ‘likely’ (meaning at least a 66% chance; for those that read footnotes) that we have between 12 and 34 years before we cross the 1.5°C threshold, unless we do something to improve on this prognosis.

So is 1.5°C more special than all the other thresholds laid out before us? Well, it is an upcoming and fast approaching threshold, and so, theoretically avoidable.

This is what Prof. Jim Skea told Matt McGrath (BBC Environment Correspondent, 8th October 2018), summarising the IPCC report with two key observations:

“The first is that limiting warming to 1.5°C brings a lot of benefits compared with limiting it to two degrees. It really reduces the impacts of climate change in very important ways,” said Prof Jim Skea, who co-chairs the IPCC.

“The second is the unprecedented nature of the changes that are required if we are to limit warming to 1.5°C – changes to energy systems, changes to the way we manage land, changes to the way we move around with transportation.”

The 1.5°C report found – to the surprise of many – that there were significant benefits to keeping GMST to this lower level. For example, by 2100:

  • 14% of the world’s population will be exposed to extreme heat (as experienced in southeastern Europe) at least once every 5 years in a 1.5°C world, but this rises to 37% in a 2°C world;
  • Some sea-ice will remain in the Arctic in most summers in a 1.5°C world, but ice-free summers are 10 times more likely in a 2°C world;
  • Urban populations exposed to water scarcity would increase from 250 million in a 1.5°C world to 411 million in a 2°C world;
  • Species loss for insects, plants and vertebrates would increase from 6%, 8% and 4% to 18%, 16% and 8%, respectively;
  • Coral reefs would suffer frequent mass mortalities in a 1.5°C world, but would mostly disappear in a 2°C world;
  • Crop yields would be lower in a 2°C world, especially in sub-Saharan Africa, Southeast Asia, and Central and South America.

We can see a definite ‘non-linearity’ occurring in some of these examples.

For some natural systems that have in the past been able to recover from naturally occurring climate extremes, the future is a marked change. Repeated, closely spaced extreme conditions mean that they fail to recover. Like a boxer who has been floored, they may get up once, or even twice, but at some point they stay down.

So coral reefs are hurting badly today (in a 1°C world), and will manage to cling on in a 1.5°C world, but will disappear in a 2°C world. This is a graph where the line falls off the cliff; no comforting linearity here.

A key concept introduced in the 5th Assessment Report was the ‘carbon budget’ – the maximum amount of cumulative carbon emissions allowable to stay within a target GMST rise. 

The news which, if not good, is at least something of a relief, is that the so-called ‘committed warming’ due to emissions to date (all that heat locked up in the oceans that will continue to drive increases in atmospheric temperature / GMST until the system reaches equilibrium again) is less than 1.5°C; although changes (e.g. to sea level) will continue for centuries to millennia.

The not-so-good news (or rather, the extremely challenging news) is that to achieve 1.5°C, emissions in 2030 must be 45% lower than 2010 emissions, and must reach zero by 2050 (note also that for 2°C, 2030 emissions would need to be 20% lower than in 2010, reaching zero by 2075).
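The carbon-budget concept makes the scale of this challenge easy to check with back-of-envelope arithmetic. The sketch below is purely illustrative: the ~40 GtCO₂ per year figure is the one quoted later in this post, while the remaining 1.5°C budget range of 400–600 GtCO₂ is an assumed round-number range, not an IPCC quotation. Even so, it shows why even a straight-line decline to zero by 2050 can overshoot such a budget, and hence why the pathways lean on carbon dioxide removal:

```python
def cumulative_linear_decline(start_rate_gt, years_to_zero):
    """Cumulative emissions (GtCO2) if annual emissions fall linearly
    from start_rate_gt to zero: the area of a triangle."""
    return 0.5 * start_rate_gt * years_to_zero

# Straight-line path from ~40 GtCO2/yr today to zero by 2050 (~32 years):
emitted = cumulative_linear_decline(40, 32)
print(f"Cumulative emissions on a linear path to zero: {emitted:.0f} GtCO2")

# Compare against an assumed remaining 1.5°C budget (illustrative range):
for budget in (400, 500, 600):
    gap = emitted - budget
    print(f"Budget {budget} GtCO2 -> overshoot {gap:+.0f} GtCO2 "
          f"(to be clawed back by CDR if positive)")
```

On these assumptions the overshoot is positive in every case, which is consistent with the report’s conclusion that decarbonisation alone is not enough.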

The IPCC make it clear that achieving 1.5°C would require rapid and far-reaching transitions in energy, land, urban and infrastructure (transport and buildings), and industry. 

These system transitions are unprecedented in terms of scale, but not necessarily in terms of speed (consider how quickly cars replaced horse-drawn carriages in New York City), and imply deep emissions reductions in all sectors, a wide portfolio of mitigation options, and an upscaling of investments.

However, 1.5°C cannot be achieved solely by decarbonising sectors; this must be supplemented by technologies that take ‘carbon’ out of the atmosphere and effectively bury it. The 1.5°C pathways explored by the IPCC assume between 100 and 1000 billion tonnes of carbon dioxide removal (CDR) over the 21st Century.

The most popular form of CDR currently being investigated is BECCS, which stands for Bio-Energy with Carbon Capture and Storage. It works by growing plants or trees that capture carbon, which are then burned to produce energy; but rather than re-releasing the carbon into the atmosphere, the CO₂ is extracted from the exhaust gas stream and buried in deep geological structures, where it will remain in a condensed state.

The required transition can be summarised in a graphic (produced by Glen Peters from the CICERO Institute), consistent with the median scenario from the IPCC 1.5°C report:


(Credit: @Peters_Glen, 15th October 2018, On Twitter)

This graph illustrates that human activities would need to move (by 2100) from the current situation – a net source of 40 billion tonnes of carbon dioxide per year (40 GtCO₂) – to becoming a net sink of -15 GtCO₂. Coal would be eliminated, and oil almost so; gas is uncertain, but any that was used would need to be combined with CDR; land use would need to move from being a net producer of greenhouse gas emissions to a net sink; and CDR/BECCS would have to be massively scaled up.

Many question the feasibility of such a large roll-out of CDR, which might require 2 or 3 times the area of India for the energy crops needed (while, at the same time, there are many other pressures on the land, not least feeding a population that could grow and stabilize at about 10 billion).

The graph above shows a complete turnaround in net CO₂, but this is like turning a very large tanker on a sixpence.

Personally, I would conclude that the very tough challenge of keeping warming below 2°C has just got even tougher. The scale and range of changes needed requires something like a Marshall Plan for the whole world to stay below 1.5°C.

This is why many commentators, such as Professor Kevin Anderson at the Tyndall Centre in Manchester, say that the only way we can square the circle is through a massive reduction in consumption (particularly amongst the high emitters). He has noted that if the top 10% of emitters reduced their emissions to the average European level, that would equate to a 33% reduction in global emissions!

We have run out of time to decarbonise all sectors fast enough, and so resort to ‘magical thinking’, imagining a scale of CDR that would allow us (in the high-emitting countries) to continue to consume at the current rate.

So the picture is mixed. Yes, the 1.5°C target is extremely important and worthwhile, because it brings so many benefits as compared to 2°C. However, the already very challenging goal of decarbonising the world’s economy (both the established countries and those seeking justice and development) is made even harder.

Irrespective of whether you are optimistic or pessimistic by nature, the fact is that to some extent we have already left it too late to avoid serious impacts. Whatever level of tolerance to risk we choose for ourselves and our families, by failing to seriously engage in action now, we are in effect making choices for our neighbours, for those communities and ecosystems that may not have the resources to adapt as well as we can.

Whether we imagine we have 0 years left to take action, 12, 34, or any other number within the reasonable range between bad and catastrophic, we are really out of time. Fifteen years is a flick of the fingers in terms of transforming all sectors in all countries.

We should not hold up “12 years” as some magical number, a binary switch between “we’ll be OK” and “it’s Armageddon”, but rather as yet another milestone on the slow and somewhat slippery path towards a very dangerous future.

Where each of us – as individuals, communities, countries or at whatever scale we wish to frame it – have started to take meaningful action, we should celebrate that and strive for wider and deeper change. Where this is not happening, we have to say that the sooner this process starts the better.

As Twitter just reminded me…

“The best time to start was 20 years ago, the second best time is now”

Let’s not wait another 12 years to act on the scale required.


Richard W. Erskine, 17th November 2018


Embracing the Denial Curve

Most people are naturally conservative with a small ‘c’ – they really find it very difficult to change.

For nearly 30 years after leaving academia, I spent a lot of time helping organizations be better at managing their enterprise information and retaining knowledge. Many skills are required to help such aspirations to be realised, and not merely technical ones; as I discussed in The Zeitgeist of the Coder.

As always, techies would run around thinking they had a silver bullet that would make people adopt new practices, simply by installing the software and with a few hours of training. Time and time again I would find that an organization that claimed to have made a big change hadn’t. They had changed very little because no real effort had been put into the changes in behaviour that are required to ensure that the claimed outcomes of an enterprise system rollout would actually transpire.

Old habits die hard.

My consulting tool-box of diagrams includes the following figure, which originates from the field of ‘business change management’ (I have been using it for many years, long before I became active in climate change).

[Figure: the denial (change) curve]

The denial (or change) curve is just a name for the grey path from Denial to Commitment, with each stage described as follows:

  • Denial – people do not believe that the change is needed or will really happen, focusing on business as usual and not engaging their own feelings.
  • Resistance – people now know change is coming and engage their feelings of anxiety, anger and rejection. The focus is personal resistance, not the wider organization. It can be disruptive and even counter to one’s interests.
  • Exploration – a switch occurs whereby people recognize the need for personal change and start to explore new ways of working.
  • Commitment – people gain mastery of new ways of working and the focus moves from the personal to the organization – and participating in helping to make the change a success.

The denial referred to here is the regular kind of denial we are all prone to, and is well known to psychologists. In the business context I worked in, it was an illusion to imagine you could get staff to jump en masse from Denial to Commitment.

Denial is more characterised by folded arms and non-communication than by argument or engagement; denial of this kind is a shutting out of the possibility of change, not arguing against it.

Resistance is different. The resentment and anger that comes with Resistance is almost a necessary part of the journey; finding reasons why the change won’t work, and using active measures to frustrate implementation. This can be loud and angry.

Only when the benefits of at least the promise of change start to become appreciated, does Exploration begin, and while there will still be arguments, they move from destructive ones (“It can’t work”) to constructive alternatives (“I don’t see how it can work, but show me”).

Commitment follows, and those who have made this journey are much better at helping others tread in their footsteps than any external consultant. Personal journeys get transformed into communal ones. There is a tipping point, when enough people are reinforcing the positives that everyone wants to join the party.

Of course, getting action on global warming and decarbonising our economy is much tougher than getting a large organization to adopt new practices, but maybe there is a lesson here.

For one thing, there is nothing wrong in using the word ‘denial’. Some claim that this conflates it with ‘holocaust denial’ and is therefore an outrageous slur on ‘contrarians’ (it is always contrarians that make this claim, and merely, I would argue, because they wish to deflect criticism and adopt a posture of victimhood, whereas they are the aggressors).

In any case, I am not so much interested in the tiny percentage of politically motivated contrarians, or ‘ideological denialists’ as I would prefer to call them, even though some are in highly influential positions.

We have to find ways to work around them, rather than give them too much of our time (although it has to be said, I am frustrated at how much airtime these people get on Twitter; and maybe that is a problem with Twitter itself as a platform). Their attention seeking behaviour is self-reinforcing and will not be a path to change (ask a psychologist), and it is wasting time that should be focused on the conversations that really matter.

The great majority of people fit more easily into the standard psychological meaning of the word denial; they are blocking their eyes and ears hoping it will go away.

La la la … la la la.

We are all in denial in some sense and to some level; otherwise we’d be seeing a rebellion wouldn’t we?

It is surely unrealistic to expect a smooth and easy (psychological) journey from Denial to Commitment, or that we can convince people with a few graphs. We can show all the data in the world – on the efficiency of heat pumps, the health benefits of eating less meat and of electric buses, the falling costs of renewables, the ecological impacts underway, or whatever – but until this becomes internalised as the way we think and act, every day and in every way, it will not lead to measurable outcomes.

We have to pass through The Denial Curve – the pain and anger of ‘losing’ what we assumed was forever: that high-consumption, limitless-travelling, throwaway culture, and our infinite planet; a mode of consumption that we somehow slipped into sometime in the 60s or 70s. We have to shake ourselves out of a kind of consumerist trance.

We need to make so many changes at so many levels that the change is bound to create huge anxiety and then, of course, Resistance.

If people are resentful at feeling trapped between a rock and a hard place – between the terrifying consequences of inaction and the trapped-in-the-headlights ‘conservative’ preference for inaction – then that is entirely natural. Yet this is exactly the ‘tension’ that needs to be explored and, ultimately, resolved in each of us, and in our communities.

I sense that there is already a growing number of people who have moved from cross-armed denial to resentment, and hence Resistance. This is to be expected, and we should expect it to get a lot louder. We will need strong leaders and ‘counsellors’ amongst us to help guide people on the journey, despite the noises off.

I would argue therefore that we should embrace The Denial Curve, rather than get stuck in the loop of raging against denial per se (that’s what the ‘ideological denialists’ want!).

If we can reassure people and find ways to help them transition from Resistance to Exploration, then that is the hardest part. Commitment will follow.


(c) Richard W. Erskine, 2018



Brexit’s Sunlit Uplands

Those pesky Europeans, imposing their values on us – you know, a belief in the rule of law and all that. How dare they!

Freed from this prison, the UK can forge a new future with the world, based on emerging economies, and Britain’s long-established record for gun-boat diplomacy.

What can possibly go wrong?

China, for example, has emerged as a powerhouse likely to overtake the USA as the largest economy in the world by 2050. Ok, so they execute more people than the rest of the world put together, and lock up millions – anyone who does not support the ruling dictatorship – for ‘re-education’.

Sunlit uplands.

Hey ho, at least they want to buy our stuff, so what’s not to like?

Ok, so they actually want to steal our stuff – the stuff they have not already hacked using their superior mathematicians (check out the Olympiad results) – but the rest will be handed over legally. As the mafia discovered, the easiest way to rob a bank is to own one. And the easiest way to steal from, say, Rolls Royce is to own it. Expect when, not if.

Sunlit uplands.

And what about weapons? Well Saudi Arabia is a great client, and the advantage of having an on-going war – namely pulverising Yemen back to the dark ages and murdering children without challenge (let alone journalists) – is that there is such a great repeat-business order book. BAE Systems shareholders are smiling all the way to the bank.

Sunlit uplands.

There seems no end to powerful and anti-democratic forces who want our stuff.

Let’s cut ourselves off from the cultures that we spent hundreds of years wrestling with – in war and peace – and ultimately worked together with to create a platform for peace, diversity and sharing, of hope, and collaboration. Be it the scientific endeavours, or the regulations that allow safe medicines across Europe, or the protection of consumer rights in telecommunications or even swimming on beaches (without going through the motions).

Who exactly are our friends?

Those in Russia, China and Saudi Arabia that have a long history of suppressing freedoms, or those in the EU that have a long history of non-conformism and defence of freedoms, even in the face of despots (Diderot, eat your heart out)?

As we confront issues such as the unrestricted power of Google and Facebook, or the issue of man-made Global Warming, do we trust the USA, China, India or Russia to act as our friends, and in our interests? Not bloody likely.

Oh, but <keep chanting in the dark> ‘sunlit uplands’ (you know it makes you feel better, you just have to believe and everything will work out – even Chris Grayling’s 50 mile tailbacks for lorries – no pain, no gain).

Is there another path?

Could we not work with Europeans to push for change, continuing the tradition of European enlightenment and rebellion against elites, towards a better world, however flawed – including radical reform of the EU?

Inside the tent, we have a chance to make the changes, but outside it we merely become prey to deals with those that neither share our values, nor value them. Those who turn our creativity into death and destruction. Is that what we want?

How would you choose in this turbulent world?

Will the UK ultimately find itself a slave to China, where we will have to attend re-education camps in Milton Keynes, and unlearn the Glorious Revolution (such a dangerous idea)?


No less so than under the sunlit uplands of post-Brexit Britain we have been promised in the false prospectus of Boris Johnson or Jacob Rees-Mogg: a race-to-the-bottom future of vestigial government and the power of moneyed elites, who want to frame our future in 19th Century terms.

How about a 21st Century world where we are leaders in Europe, using our talents in genomics, engineering, and yes, regulation (we Brits are geniuses at that), to build a better, safer world. Where we transition our industries to confront climate change, mental health, ecocide, the digital economy and other great challenges; as Brits and as Europeans.

Is that too much to ask?


Richard W. Erskine, 2018.



Butterflies, Brexit & Brits

I attended an inspiring talk by Chris Packham in Stroud at the launch of Stroud Nature’s season of events. Chris was there to show his photographs but naturally ranged over many topics close to his heart.

The catastrophic drop in species numbers in the UK was one topic, which he has recently written about: hedgehogs have declined by 97% since the 1950s, and the Heath Fritillary butterfly by 82% in just a decade.

These are just two stats in a long list that attest to this catastrophe.

Chris talked about how brilliant amateur naturalists are in the UK – better than in any other country – in the recording of flora and fauna. They are amateur only in the sense that they do not get paid, but highly professional in the quality of their work. That is why we know about the drop in species numbers in such comprehensive detail. It appears that this love of data is not a new phenomenon.

I have been a lover of butterflies since I was very young. When I was just 7 years old, I came into possession of a family heirloom: a book giving a complete record of the natural history of butterflies and moths in Great Britain in the 1870s. Part of what made this book so glorious was the intimate accounts of amateur scientists who meticulously recorded sightings and corresponded through letters and journals.


The Brits it seems are crazy about nature, and have this ability to record and document. We love our tick boxes and lists, and documenting things. It’s part of our culture.

I remember once doing a consultancy for a German car manufacturer who got a little irritated by our British team’s insistence on recording all meetings and then reminding the client of agreed points later, when they tried to change the requirements late in the project: “you Brits do love to write things down, don’t you!”.

Yes we do.

But there is a puzzling contradiction here. We love nature, we love recording data, and yet somehow we have allowed species to be harmed, and have failed to stop this. Is it a naive trust in institutions to act on our behalf, or a lack of knowledge in the wider population as to the scale of the loss?

I heard it said once (but struggle to find the appropriate reference) that the Normans were delighted after conquering Britain in 1066 to find that unlike most of Europe, the British had a highly organised administration and people paid their dues. Has anything changed?

But we have our limits. Thatcher’s poll tax demonstrated her lack of understanding of the British character. We will riot when pushed too hard – and I don’t know what you think, but by god they frighten me (as someone might have said). Mind you, I can imagine British rioters forming an orderly queue to collect their Molotov Cocktails. Queue jumping is the ultimate sin. Rules must be obeyed.

I have a friend in the finance sector, and we were having a chat about regulations. I asked whether it was true that, in his sector, Brussels ‘dictated’ unreasonable regulations. “Not at all,” he said. “For one thing, Brits are the rule-writers par excellence, and will often gold-plate a regulation from Brussels.”

Now, I am sure some will argue that yes, we Brits are rule followers and love a good rule, but would prefer it if it is always our rules, and solely our rules. Great idea except that it is a total illusion to imagine that we can trade in high value goods and services without agreeing on rules with other countries. 

In sectors like Chemicals and Pharmaceuticals, where the UK excels, there are not only European regulations (concerning safety, licensing, event reporting, etc. – all very reasonable and obvious regulations, by the way) but international ones. In Pharma, the ICH (International Council for Harmonisation) has ‘Harmonisation’ in its title for a reason, and is increasingly global in nature.

Innovation should be about developing the best medicines, not reinventing protocols for drug trials or the design of a drug dossier used for multi-country licensing applications. One can develop an economy on a level playing field.

The complete freedom the hard-right Brexiteers dream of rather highlights their complete lack of knowledge of how the world works. 

Do we really think we can tear up regulations such as REACH and still trade in Chemicals, in Europe or even elsewhere?

And are we really going to tear up the Bathing Water Directive?

Maybe Jacob Rees-Mogg fancies going to the beach and rediscovering the delights of going through the motions, but I suspect the Great British Public might well riot at the suggestion, or at least, get very cross. 

Richard Erskine, 10th July 2018


Experiments in Art & Science

My wife and I were on our annual week-end trip to Cambridge to meet up with my old Darwinian friend Chris and his wife, for the usual round of reminiscing, punting and all that. On the Saturday (12th May) we decided to go to Kettle’s Yard to see the house and its exhibition and take in a light lunch.

As we were about to get our (free) tickets for the house visit, we saw people in T-shirts publicising a Gurdon Institute special event in partnership with Kettle’s Yard that we had been unaware of:

Experiments in Art & Science

A new collaboration between three contemporary artists 

and scientists from the Gurdon Institute, 

in partnership with Kettle’s Yard

The three artists in question were Rachel Pimm, David Blandy and Laura Wilson, each responding to work being done in the Institute’s labs.

This immediately grabbed our attention and we changed tack, and went to the presentation and discussion panel, intrigued to learn more about the project.

The Gurdon Institute do research exploring the relationship between human disease and development, through all stages of life.  They use the tools of molecular biology, including model systems that share a lot of their genetic make-up with humans. There were fascinating insights into how the environment can influence creatures, in ways that force us to relax Crick’s famous ‘Central Dogma’. But I am jumping into the science of what I saw, and the purpose of this essay is to explore the relationship between art and science.

I was interested to learn if this project was about making the science more accessible – to draw in those who may be overwhelmed by the complexities of scientific methods – and to provide at least some insight into the work of scientists. Or maybe something deeper, that might be more of an equal partnership between art and science, in a two-way exchange of insights.

I was particularly intrigued by Rachel’s exploration of the memory of trauma, and the deep past revealed in the behaviour of worms, and their role as custodians of nature; of Turing’s morphogenesis, fractals and the emergence of self-similarity at many scales. A heady mix of ideas in the early stages of seeking expression.

David’s exploratory animations of moving through neural networks was also captivating.

As the scientists there noted, the purpose of the art may not be so much as to precisely articulate new questions, but rather to help them to stand back and see their science through fresh eyes, and maybe find unexpected connections.

In our modern world it has almost become an article of faith that science and art occupy two entirely distinct ways of seeing the world, but there was a time, as my friend Chris pointed out, when this distinction would not have been recognised.

Even within a particular department – be it mathematics or molecular biology – the division and sub-division of specialities makes it harder and harder for scientists to comprehend even what is happening in the next room. The funding of science demands a kind of determinism in the production of results, which promotes this specialisation. It is a worrying trend, because it is anathema to playfulness and inter-disciplinary collaboration.

This makes the Wellcome Trust’s support for the Gurdon Institute and for this Science-Art collaboration all the more refreshing. 

Some mathematicians have noted that, even within the arcane worlds of number theory, group theory and the rest, it will only be through the combining of mathematical disciplines that some of the long-standing unresolved questions of mathematics will be solved.

In areas such as climate change, it was recognised in the late 1950s that we needed to bring together a diverse range of disciplines to get to grips with the causes and consequences of man-made global warming: meteorologists, atmospheric chemists, glaciologists, marine biologists, and many more.

We see through complex questions such as land-use and human civilisation how we must broaden this even further to embrace geography, culture and even history, to really understand how to frame solutions to climate change.

In many ways, those (in my day) unloved disciplines, such as geography, show their true colours as great integrators of knowledge – from human geography to history, from glaciology to food production – and we begin to understand that a little humility is no bad thing when we try to understand complex problems. Inter-disciplinary working is not just a fad; it could be the key to unlocking complex problems that no single discipline can resolve.

Leonardo da Vinci was both artist and scientist. Ok, so not a scientist in the modern sense that David Wootton explores in his book The Invention of Science, ushered in by the Enlightenment, but surely a scientist in his ability to forensically observe the world and try to make sense of it. His art was part of his method of exploring the world; be it the sinews of the human body or birds in flight, art and science were indivisible.

Since my retirement I have started to take up painting seriously. At school I chose science over art, but over the years have dabbled in painting but never quite made progress. Now, under the watchful eye of a great teacher, Alison Vickery, I feel I am beginning to find a voice. What she tells me hasn’t really changed, but I am finally hearing her. ‘Observe the scene, more than look at the paper’; ‘Experiment and don’t be afraid of accidents, because often they are happy ones’; the list of helpful aphorisms never leaves me.

A palette knife loaded with pigment, scraped across a surface, can give just the right level of variegation if not too wet and not too dry; there is a kind of science to it. The effect is to produce a kind of complexity that the human eye seems to be drawn to: imperfect symmetries of the kind we find alluring in nature, even while in mathematics we seek perfection.

Scientists and artists share many attributes.

At the meeting hosted by Kettle’s Yard, there was a discussion on what was common between artists and scientists. My list adapts what was said on the day: 

  • a curiosity and playfulness in exploring the world around them; 
  • ability to acutely observe the world; 
  • a fascination with patterns;
  • not afraid of failure;
  • dedication to keep going; 
  • searching for truth; 
  • deep respect for the accumulated knowledge and tools of their ‘art’; 
  • ability to experiment with new methods or innovative ways of using old methods.

How then are art and science different?  

Well, of course, the key reason is that they are asking different questions and seeking different kinds of answers.

In art, the questions are often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’, and ‘How do I express this in a way that is interesting and compelling?’. If I see a tree, I see the sinews of the trunk and branches, and how the dappled light reveals fragmentary hints as to the form of the tree. I observe the patterns of dark and light in the canopy. A true rendering of colour is of secondary interest (this is not a photograph), except insofar as it helps reveal the complexity of the tree: mixing two yellows and two blues offers an infinity of greens, which is much more interesting than having tubes of green paint (I hardly ever buy green).

Artists do not have definite answers to unambiguous questions. It is OK for me to argue that J M W Turner was the greatest painter of all time, even while my friend vehemently disagrees. When I look at a painting (or sculpture, or film) and feel an emotional response, there is no need to explain it; even though we often feel obliged to put words to emotions, we know these are mere approximations.

In science (or experimental science at least), we ask specific questions, which can be articulated as a hypothesis that challenges the boundaries of our knowledge. We can then design experiments to test the hypothesis, and if we are successful (in the 1% of times that maybe we are lucky), we will have advanced the knowledge of our subject. Most times this is an incremental learning, building on a body of knowledge. Other times, we may need to break something down before building it up again (but unlike the caricature of science often seen on TV, science is rarely about tearing down a whole field of knowledge, and starting from scratch). 

When I see the tree, I ask, why are the leaves of Copper Beech trees deep purple in colour rather than green? Are the energy levels in the chlorophyll molecule somehow changed to produce a different colour or is a different molecule involved?

In science, the objective is to find definite answers to definite questions. That is not to say that the definite answer is in itself a complete answer to all the questions we have. When Schrödinger asked the question ‘What is Life?’ the role and structure of DNA were not known, but there were questions that he could ask and find answers to. This is the wonder of science; this stepping-stone quality.

I may find the answer as to why the Copper Beech tree’s leaves are not green, but what of the interesting question of why leaves change colour in autumn and how they change, not from one state (green) to another (brown), but through a complex process that reveals variegations of colour as Autumn unfolds? And what of a forest? How does a mature forest evolve from an immature one; how do pioneer trees give way to a complex ecology of varyingly aged trees and species over time? A leaf begs a question, and a forest may end up being the answer to a bigger question. Maybe we find that art, literature and science are in fact happy bedfellows after all.

As Feynman said, I can be both fascinated by something in the natural world (such as a rainbow) while at the same time seeking a scientific understanding of the phenomenon.

Nevertheless, it seems that while artists and scientists have so much in common, their framings struggle to align, and that in a way is a good thing. 

There is great work done in the illustration of scientific ideas, in textbooks and increasingly in scientific papers. I saw a recent paper on the impact of changes to the stratospheric polar vortex on climate, which was beautifully illustrated. But this is illustration, intended to help articulate those definite questions and answers. It is not art.

So what is the purpose of bringing artists into laboratories to inspire them; to get their response to the work being done there?

The answer, as they say, is on the tin (of this Gurdon Institute collaborative project): It is an experiment.

The hypothesis is that if you take three talented and curious young artists and show them some leading edge science that touches on diverse subjects, good things happen. Art happens.

Based on the short preview of the work being done which I attended, good things are already happening and I am excited to see how the collaboration evolves.

Here are some questions inspired in my mind by the discussion:

  • How do we understand the patterns in form in the ways that Turing wrote about, based on the latest research? Can we explore ‘emergence of form’ as a topic that is interesting, artistically and scientifically?
  • In the world of RNA epigenetics, can what was previously thought of as ‘junk DNA’ play a part in the life of creatures, even humans, in the environment they live in? Can we explore the deep history of our shared genotype, even given our divergent phenotypes? Will the worm teach us how to live better with our environment?
  • Our identity is formed by memory, and as we get older we begin to lose our ability to make new memories, while older ones often stay fast (though not always). Surely here there is a rich vein for exploring the artistic and scientific responses to diseases like Alzheimer’s?

Scientists are dedicated and passionate about their work, like artists. A joint curiosity drives this new collaborative Gurdon Institute project.

The big question for me is this: can art reveal to scientists new questions, or new framings of old questions, that will advance the science in novel ways? Can unexpected connections be revealed or collaborations be inspired?

I certainly hope so.

P.S. the others in my troop did get to do the house visit after all, and it was wonderful, I hear. I missed it because I was too busy chatting to the scientists and artists after the panel discussion; and I am so grateful to have spent time with them.

(c) Richard W. Erskine, 2018



Filed under Art & Science, Essay, Molecular Biology, Uncategorized

Anatomy of a Conspiracy Theory

Normally, as with 9/11, a conspiracy theory involves convoluted chains of reasoning so tortuous that it can take a while to determine how the conjuring trick was done: where the lie was implanted. But often, the anatomy of a conspiracy theory takes the following basic form:

Part 1 is a plausible but flawed technical claim that aims to refute an official account, and provides the starting point for Part 2, which is a multi-threaded stream of whataboutery. To connect Part 1 and 2 a sleight of hand is performed. This is the anatomy of a basic conspiracy theory.

I have been thinking about this because a relative of mine asked me for my opinion about a video that turns out to be a good case study in this form of conspiracy theory. It was a video posted by a Dr Chris Busby relating to the nerve agent used to poison the Skripals.

So, against my better judgment, I sat through the video.

Dr Busby, who initially comes across as quite affable, proceeds to outline his experience at length. He says he was employed at the Wellcome Research Laboratories in Beckenham (see Note 1), where he worked, in his words,

“… on the physical chemistry of pharmaceutical compounds or small organic compounds”, and he used “spectroscopic and other methods to determine the structure of these substances, as they were made by the chemists”. 

I have no reason to doubt his background, but equally have not attempted to verify it either; in any case, this is immaterial because I judge people on their arguments not their qualifications.

I want to pass over Busby’s first claim – that a state actor was not necessarily involved because (in his view):

“any synthetic organic chemist could knock up something like that without a lot of difficulty”

… which is questionable, but is not the main focus of this post. I do have a few observations on this subsidiary claim in Note 2.

He explains correctly that a mass spectrometry spectrum (let’s abbreviate this as ‘spectrum’ in what follows) is a pattern of the masses of the ionised fragments created when a substance passes through the instrument. This pattern is characteristic of the molecule under investigation.

So a spectrum “identifies a material”. So far, so good.

He now makes his plausible but flawed technical claim. I don’t want to call it a lie because I will assume Dr Busby made it in good faith, but it does undermine his claim to be an ‘expert’, and was contained in the following statement he made:

“… but in order to do that, you need to have a sample of the material, you need to have synthesized the material”

In brief we can summarise the claim as follows: In order for you to identify a substance, you need to have synthesised it.

Curiously, later in the video he says that the USA manufactured the A-234 strain that is allegedly involved (see Note 3) and put the spectrum on the NIST database, but then later took it down. 

It does not occur to Dr Busby that Porton Down could have taken a copy of data from NIST before it was removed and used that as the reference spectrum, thereby blowing a huge hole in Busby’s chain of logic (also, see Note 4).

But there is a more fundamental reason why the claim is erroneous even if the data had never existed.

One of the main purposes of a technique like mass spectrometry is precisely to help researchers determine the structures of unknown substances, particularly in trace quantities where other structural techniques cannot be used (see Note 5).

To show you why the claim is erroneous, here is an example of a chemistry lecturer taking his students through the process of analysing the spectrum of a substance, in order to establish its structure (Credit: Identify a reasonable structure for the pictured mass spectrum of an unknown sample, Professor Heath’s Chemistry Channel, 6th October 2016).

This method uses knowledge of chemistry, logic and arithmetic to ‘reverse engineer’ the chemical structure, based on the masses of the fragments.

Now it is true that with a library of spectra for known substances, the analysis is greatly accelerated, because we can then compare a sample’s spectrum with ones in the library. This might be called ‘routine diagnostic mass spectrometry’.
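To make that comparison step concrete, here is a minimal sketch (my own illustration, not taken from the video or from any real instrument software) of matching an unknown spectrum against a small library, using cosine similarity of the peak intensity vectors. The substance names and peak values are invented:

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Compare two spectra given as {m/z: relative intensity} dicts."""
    keys = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(k, 0.0) * spec_b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

def best_match(unknown, library):
    """Return (name, score) of the library spectrum most similar to the unknown."""
    return max(((name, cosine_similarity(unknown, ref))
                for name, ref in library.items()),
               key=lambda t: t[1])

# Invented example spectra: {m/z: relative intensity}
library = {
    "substance A": {43: 100, 58: 85, 71: 20},
    "substance B": {39: 60, 65: 100, 92: 95},
}
unknown = {43: 95, 58: 90, 71: 25}
print(best_match(unknown, library))  # "substance A" scores highest
```

The key point, though, is that this library-matching shortcut is an optimisation, not a prerequisite: as the lecture example shows, a structure can be deduced from the fragment masses without any reference spectrum at all.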

He talked about having done a lot of work on pharmaceuticals that had been synthesised “in Spain or in India”, and clearly here the mode of application would have been the comparison of known molecules manufactured by a company (in this case Wellcome) with samples retrieved from other sources – possibly trying to break a patent – the source being given away by impurities in the sample (see Note 6).

It then struck me that he must have spent so much time doing this routine diagnostic mass spectrometry that he now presents it as the only way in which mass spectrometry can be used to identify a substance.

He seems to have forgotten the more general use of the method by scientists.

This flawed assumption leads to the scientific and logical chain of reasoning used by Dr Busby in this video. 

The sleight of hand arrives when he uses the phrase ‘false flag’ at 6’55” into a 10’19” video.  

The chain of logic has been constructed to lead the viewer to this point. Dr Busby was in effect saying ‘to test for the agent, you need to have made it; if you can make it, maybe it got out; and maybe the UK (or US) was responsible for using it!’.

This is an outrageous claim but he avoids directly accusing the UK or US Governments; and this is the sleight of hand. He leaves the viewer to fill in the gap.

This then paves the way for Part 2 of his conspiracy theory which now begins in earnest on the video. He cranks up the rhetoric and offers up an anti-American diatribe, full of conspiracy ideation.

He concludes the video as follows:

“There’s no way there’s any proof that that material that poisoned the Skripals came from Russia. That’s the take home message”

On the contrary, the message I took away is that it is sad that an ex-scientist is bending and abusing scientific knowledge to concoct conspiracy theories, to advance his political dogma, and helping to magnify the Kremlin’s whataboutery.

Now, Dr Busby might well respond by saying “but you haven’t proved the Russians did it!”.  No, but I would reply ‘you haven’t proved that they didn’t, and as things stand, it is clear that they are the prime suspect’; ask any police inspector how they would assess the situation.

My purpose here was not to prove anything, but to discuss the anatomy of conspiracy theories in general, and debunk this one in particular.

But I do want to highlight one additional point: those that are apologists for the Russian state will demand 100% proof the Russians did it, but are lazily accepting of weak arguments – including Dr Busby’s video – that attempt to point the finger at the UK or US Governments. This is, at least, double standards.

By all means present your political views and theories on world politics, Dr Busby – the UK is a country where we can express our opinions freely – but please don’t dress them up with flawed scientific reasoning masquerading as scientific expertise.

Hunting down a plausible but flawed technical claim is not always as easy as in the case study above, but remember the anatomy, because it is usually easy to spot the sleight of hand that then connects with the main body of a conspiracy theory.

We all need to be inoculated against this kind of conspiracy ideation, and I hope my dissection of this example is helpful to people.


© Richard W. Erskine, 2018


Note 1: The Wellcome Research Laboratories in Beckenham closed in 1995, when the merged company Glaxo Wellcome was formed; after further mergers it became the current leading global pharmaceutical company GSK.

Note 2: Busby’s first claim is that the nerve agent identified by Porton Down is a simple organic compound and therefore easy for a chemist to synthesise. Gary Aitkenhead, the chief executive of the government’s Defence Science and Technology Laboratory (DSTL), said on Sky News (here reported in The Guardian):

“It’s a military-grade nerve agent, which requires extremely sophisticated methods in order to create – something that’s probably only within the capabilities of a state actor.”

But the difficulty of synthesising a molecule is not simply a function of the number of atoms in the molecule, but rather of the synthetic pathway, and, in the case of a nerve agent, the practical difficulties involved in making the stuff in a safe environment, then preparing it in some ‘weaponized’ formulation.

Vil Mirzayanov, a chemist who worked on Novichok, has said that this process is extremely difficult. Dr Busby thinks he knows better, but not being a synthetic chemist (remember, he had chemists making the samples he analysed), he cannot claim expertise on the ease or difficulty of nerve agent synthesis.

The UK position is that the extremely pure nature of the samples found in Salisbury point to a state actor. Most of us, and I would include Dr Busby, without experience of the synthesis of the nerve agent in question and its formulation as a weapon, cannot really comment with authority on this question.

Simply saying it is a simple molecule really doesn’t stand up as an argument.

Note 3: While the Russian Ambassador to the UK claims that the strain is A-234, neither the UK Government, nor Porton Down, nor the OPCW have stated which strain was used, and so the question regarding what strain or strains the USA might or might not have synthesized, is pure speculation.

Note 4: He says that if the USA synthesised it (the strain of nerve agent assumed to have been used), then it is possible that Porton Down did so as well. I am not arguing this point either way. The point of this post is to challenge what Dr Busby presents as an unassailable chain of logic, but which is nothing of the sort.

Note 5: There are many other techniques used in general for structural work, but not all are applicable in every situation. For large complex biological molecules, X-ray crystallography has been very successful, and more recently CryoEM has matured to the point where it is taking over this role. Neither could have been used in the case of trace quantities of a nerve agent.

Note 6: He also talks about impurities that can show up in a spectrum and using these as a way to identify a laboratory of origin (in relation to his pharmaceuticals experience), but this is a separate argument, which is irrelevant if the sample is of high purity, which is what OPCW confirmed in relation to the nerve gas found in Salisbury.

.. o O o ..




Filed under Conspiracy Theories, Uncategorized

Cambridge Analytica and the micro-targeting smokescreen

I have an hypothesis.

The Information Commissioner’s Office (ICO) won’t find any retained data at Cambridge Analytica (CA) gleaned from Facebook users. They might even find proof it was deleted in a timely manner.

So, would that mean CA did not provide an assist to the Trump campaign? No.

Because the analysis of all that data would have been used to provide knowledge and insight into which buttons to push in the minds of voters, and crucially, in which States this would be most effective.

At that point you can delete all the source Facebook data.

The knowledge and insight would have powered a broad spectrum campaign using good old fashioned media channels and social media. At this point, it is not micro-targeting, but throwing mud knowing it will stick where it matters.

Maybe the focus on micro-targeting is a smokescreen, because if the ICO don’t find retained data, then CA can say “see, we are innocent of all charges of interference”, when in fact the truth could be quite the opposite.

It is important that the ICO, Select Committees in the UK Parliament and, when they get their act together, committees on Capitol Hill, ask the right questions, and do not succumb to smokescreens.

But then, that is only an hypothesis.

What do I know?

(c) Richard W. Erskine, 2018


Filed under Uncategorized

The Myth of Facebook’s Free Lunch

We all know that there is no such thing as a free lunch, don’t we?

Except when we get the next offer of a free lunch. It’ll be different this time, because they are so nice and, well, what could go wrong?

The Facebook offer was always the offer of a free lunch. No need to pay anything for your account; just share and share alike.

In fact the encouragement to be as open and sharing as possible was made easier by the byzantine complexity of the access controls (intended to allow people to be more private). It never occurred to Facebook that humans have complex lives, where family friends are a quite different set of people from the tennis club friends, or the ‘stop the fracking’ friends!

No, there is a binary reductionism to the happy-clappy ‘the world is my friend’ dogma of social media, of which Facebook is the prime archetype.

Of course, the business model was always to monetise our connectivity. We view a few pages on artist materials, and suddenly we are deluged by adverts for artist materials. Basic stuff you might say, and often it is; small-minded big data. But it feels like, and is, an intrusion. Facebook wants to take business away from WPP and the rest, and uses the social desire to connect as the vehicle for gaining a better insight into our lives than traditional marketing can achieve. Why did Facebook not make this clear to people from the start?

The joke was always that marketing companies know that 50% of their spending is wasted but don’t know which parts make up that 50%.

Facebook will now say that they know.

Don’t get me wrong, I love Facebook, because it reunited me with a long lost ‘other’ family. That is another story but I am eternally grateful to Facebook for making that connection. It also provides the town I live in the ability to connect over local issues. It can be a force for good.

But the most egregious issue that Facebook is now facing (and seem in denial about) is that the bill for the lunch is now proving to be exceptionally high indeed.

If Facebook data effectively helped Cambridge Analytica give the Trump and Brexit campaigns even a marginal assist – as is now alleged – that could have been crucial, as both won by a narrow margin.

We cannot go back to a pre-digital world.

We need trust in institutions and in what will happen to our data, and not just the snaps we took of the new kitten playing on the sofa. We want the benefits that combining genomics and clinical data will bring in revolutionising medicine. We want to develop ground-up social enterprises to address issues like climate change. We need to be able to move beyond primitive cloudscum fileshares or private storage devices to a truly trusted, long-term repository for personal data; guaranteed to a level no less than a National Archive.

There are many reasons we need community governed, rigorously audited and regulated data, to help in many aspects of our personal lives, social enterprises, and as safe places for retention of knowledge and cultural assets in the digital world.

Even without the Cambridge Analytica scandal, the geek-driven models of Facebook, Google and the rest betray a level of naivety and lack of insight into this challenge which is breathtaking.

Call it Web 4.0 or choose a higher number if you like.

But what this episode proves is that the current generation of social media is barely a rough draft on what society needs in the digital world of the 21st Century.


Filed under Social Media

Communicating Key Figures from IPCC Reports to a Wider Public

If you were to think about ranking the most important Figures from the IPCC Fifth Assessment Report, I would not be surprised if the following one (SPM.10) did not emerge as a strong candidate for the number one slot:

IPCC AR5 Figure SPM.10

This is how the Figure appears in the main report, on page 28 (in the Summary for Policymakers) of The Physical Basis Report (see References: IPCC, 2013). The Synthesis Report includes a similar figure with additional annotations.

Many have used it in talks because of its fundamental importance (for example, Sir David King in his Walker Institute Annual Lecture (10th June 2015), ahead of COP21 in Paris). I have followed this lead, and am sure that I am not alone.

This Figure shows an approximately linear[1] relationship between the cumulative carbon dioxide we emit[2], and the rise in global average surface temperature[3] up to 2100. It was crucial to discussions on carbon budgets held in Paris and the goal of stabilising the climate.
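By way of illustration only, that linear relationship can be captured in a couple of lines of code. The slope used here (roughly 0.5°C of warming per 1000 GtCO2, consistent with the numbers quoted later in this post) is my own rough reading of the Figure, not an official IPCC value:

```python
# Illustrative sketch of Figure SPM.10's approximately linear relationship:
# warming scales with cumulative CO2 emitted. The slope is an assumed value.
def warming_from_cumulative(gt_co2, slope_per_1000gt=0.5):
    """Approximate warming (deg C) for cumulative emissions given in GtCO2."""
    return slope_per_1000gt * gt_co2 / 1000.0

print(warming_from_cumulative(2000))  # 1.0 -- roughly where we are today
print(warming_from_cumulative(4000))  # 2.0 -- roughly the Paris limit
```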

I am not proposing animating this Figure in the way discussed in my previous essay, but I do think its importance warrants additional attention to get it out there to a wider audience (beyond the usual climate geeks!).

So my question is:

“Does it warrant some kind of pedagogic treatment for a general audience (and dare I say, for policy-makers who may themselves struggle with the density of information conveyed)?”

My answer is yes, and I believe that the IPCC, as guardians of the integrity of the report findings, are best placed to lead such an effort, albeit supported by specialists in science communication.

The IPCC should not leave it to bloggers and other commentators to furnish such content, as key Figures such as this are fundamental to the report’s findings, and need to be as widely understood as possible.

While I am conscious of Tufte’s wariness regarding PowerPoint, I think that the ‘build’ technique – when used well – can be extremely useful in unfolding the information, in biteable chunks. This is what I have tried to do with the above Figure in a recent talk. I thought I would share my draft attempt.

It can obviously do with more work, and the annotations represent my emphasis and use of language[4]. Nevertheless, I believe I was able to truthfully convey the key information from the original IPCC Figure more successfully than I have before; taking the audience with me, rather than scaring them off.

So here goes, taken from a segment of my talk … my narrative, to accompany the ‘builds’, is in italics …

Where are we now?

“There is a key question: what is the relationship between the peak atmospheric concentration and the level of warming, compared to a late 19th century baseline, that will result, by the end of the 21st century?”

“Let’s start with seeing where we are now, which is marked by a X in the Figure below.” 

Unpacking SYR2.3 - Build 1

“Our cumulative man-made emissions of carbon dioxide (CO2) have to date been nearly 2000 billion tonnes (top scale above)”

“After noting that 50% of this remains in the atmosphere, this has given rise to an increase in the atmospheric concentration from its long-standing pre-industrial value of 280 parts per million to its current value, which is now about 400 parts per million (bottom scale above).”

“This in turn has led to an increase in averaged global surface temperature of 1°C above the baseline of 1861 to 1880 (vertical scale above).”

Where might we be in 2100?

“As we add additional carbon dioxide, the temperature will rise broadly in proportion to the increased concentration in the atmosphere. There is some uncertainty between “best case” and “worst case” margins of error (shown by the dashed lines).” 

Unpacking SYR2.3 - Build 2

“By the end of the century, depending on how much we emit and allowing for uncertainties, we can end up anywhere within the grey area shown here. The question marks (“?”) illustrate where we might be by 2100.”

Can we stay below 2C?

“The most optimistic scenario included in the IPCC’s Fifth Assessment Report (AR5) was based on the assumption of a rapid reduction in emissions, and a growing role for the artificial capture of carbon dioxide from the atmosphere (using a technology called BECCS).” 

Unpacking SYR2.3 - Build 3

“This optimistic scenario would meet the target agreed by the nations in Paris, which is to limit the temperature rise to 2°C.”

“We effectively have a ‘carbon budget’; an amount of fossil fuels that can be burned for us to stay below 2°C.”

“The longer we delay dramatically reducing emissions, the faster the drop would need to be in our emissions later, as we approach the end of the ‘carbon budget’.” 

“Some argue that we are already beyond the point where we can realistically move fast enough to make this transition.” 

“Generally, experts agree it is extremely challenging, but still not impossible.”

Where will we be in 2100?  – Paris Commitments

“The nationally determined contributions (or NDCs) – the amounts by which carbon dioxide emissions will fall – that the parties to the Paris Agreement put forward have been totted up and they would, if implemented fully, bring us to a temperature rise of between 2.5 and 3.5°C (and an atmospheric concentration about twice that of pre-industrial levels).”

Unpacking SYR2.3 - Build 4

 “Now, the nations are committed to increase their ‘ambition’, so we expect that NDCs should get better, but it is deeply concerning that at present, the nations’ current targets are (1) not keeping us unambiguously clear of catastrophe, and (2) struggling to be met. More ambition, and crucially more achievement, is urgent.”

“I have indicated the orange scenarios as “globally severe”, but for many regions “catastrophic” (but some, for example, Xu and Ramanathan[5], would use the term “Catastrophic” for any warming over 3°C, and “Unknown” for warming above 5°C). The IPCC are much more conservative in the language they use.”

Where will we be in 2100? – Business As Usual Scenario

“The so-called ‘business as usual’ scenario represents on-going use of fossil fuels, continuing to meet the majority of our energy needs, in a world with an increasing population and increasing GDP per capita, and consequently a continuing growth in CO2 emissions.”

Unpacking SYR2.3 - Build 5

”This takes global warming to an exceptionally bad place, with a (globally averaged) temperature rise of between 4 and 6°C; where atmospheric concentrations will have risen to between 2.5 and 3 times the pre-industrial levels.”

“The red indicates that this is globally catastrophic.”

“If we go above 5°C warming we move, according to Xu and Ramanathan, from a “catastrophic” regime to an “unknown” one. I have not tried to indicate this extended vocabulary on the diagram, but what is clear is that the ‘business as usual’ scenario is really not an option, if we are paying attention to what the science is telling us.”

That’s it. My draft attempt to convey the substance and importance of Figure SPM.10, which I have tried to do faithfully; albeit adding the adjectives “optimistic” etc. to characterise the scenarios.

I am sure the IPCC could do a much better job than me at providing a more accessible presentation of Figure SPM.10 and indeed, a number of high ranking Figures from their reports, that deserve and need a broader audience.

© Richard W. Erskine


  1. The linearity of this relationship was originally discussed in Myles Allen et al (2009), and this and other work has been incorporated in the IPCC reports. Also see Technical Note A below.
  2. About half of which remains in the atmosphere, for a very long time.
  3. Eventually, after the planet reaches a new equilibrium, a long time in the future. Also see Technical Note B below.
  4. There are different opinions on what language to use – ‘dangerous’, ‘catastrophic’, etc. – and at what levels of warming to apply this language. The IPCC is conservative in its use of language, as is customary in the scientific literature. Some would argue that in wanting to avoid the charge of being alarmist, it is in danger of obscuring the seriousness of the risks faced. In my graphics I have tried to remain reasonably conservative in the use of language, because I believe things are serious enough, even when a conservative approach is taken.
  5. Now, Elizabeth Kolbert has written in the New Yorker:

In a recent paper in the Proceedings of the National Academy of Sciences, two climate scientists—Yangyang Xu, of Texas A. & M., and Veerabhadran Ramanathan, of the Scripps Institution of Oceanography—proposed that warming greater than three degrees Celsius be designated as “catastrophic” and warming greater than five degrees as “unknown??” The “unknown??” designation, they wrote, comes “with the understanding that changes of this magnitude, not experienced in the last 20+ million years, pose existential threats to a majority of the population.”


  • IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
  • IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881pp.
  • Myles Allen et al (2009), “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163-1166
  • Kirsten Zickfeld et al (2016), “On the proportionality between global temperature change and cumulative CO2 emissions during periods of net negative CO2 emissions”, Environ. Res. Lett. 11 055006

Technical Notes

A. Logarithmic relationship?

For those who know about the logarithmic relationship between added CO2 concentration and the ‘radiative forcing’ (giving rise to warming) – and many well meaning contrarians seem to take succour from this fact – the linear relationship in this figure may at first sight seem surprising.

The reason for the linearity is nicely explained by Marcin Popkiewicz in his piece “If growth of CO2 concentration causes only logarithmic temperature increase – why worry?”

The relative warming (between one level of emissions and another) is related to the ratio of this logarithmic function, and that is approximately linear over the concentration range of interest.

In any case, it is worth noting that CO2 concentrations have been increasing exponentially, and a logarithm of an exponential function is a linear function.
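This point is easy to check numerically. The sketch below uses the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² with an illustrative exponential growth rate (the 0.5%/yr figure is chosen for illustration, not as a claim about observed growth); the constant successive differences show the forcing rising linearly in time:

```python
import math

def forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) relative to pre-industrial concentration c0 (ppm)."""
    return 5.35 * math.log(c / c0)

# Exponential concentration growth at an illustrative 0.5% per year,
# sampled every 100 years
concs = [280.0 * 1.005 ** year for year in range(0, 301, 100)]
forcings = [forcing(c) for c in concs]

# Successive differences are constant: the forcing grows linearly in time
diffs = [round(forcings[i + 1] - forcings[i], 6) for i in range(len(forcings) - 1)]
print(diffs)
```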

There is on-going work on wider questions. For example, to what extent ‘negative emissions technology’ can counteract warming that is in the pipeline?

Kirsten Zickfeld et al (2016) is one such paper, which “…[suggests that] positive CO2 emissions are more effective at warming than negative emissions are at subsequently cooling”. So we need to be very careful in assuming we can reverse warming that is in the pipeline.

B. Transient Climate Response and Additional Warming Commitment

The ‘Transient Climate Response’ (TCR) reflects the warming that results when CO2 is added at 1% per year, which for a doubling of the concentration takes 70 years. This is illustrated quite well in a figure from a previous report (Reference: IPCC, 2001):

TAR Figure 9.1

The warming that results from this additional concentration of CO2 occurs over the same time frame. However, this does not include all the warming that will eventually result, because the earth system (principally the oceans and atmosphere) will take a long time to reach a new equilibrium, where all the flows of energy are brought back into a (new) balance. This will take at least 200 years (for lower emission scenarios), or much longer for higher emission levels. This additional warming commitment must be added to the TCR. However, the TCR nevertheless represents perhaps 70% of the overall warming, and remains a useful measure when discussing policy options over the 21st century.
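The ‘1% per year gives a doubling in 70 years’ figure quoted above is just compound growth (the so-called ‘rule of 70’), and is easily checked:

```python
# Check that a concentration growing at 1% per year doubles in about 70 years.
growth_rate = 1.01
level = 1.0
years = 0
while level < 2.0:
    level *= growth_rate
    years += 1
print(years)  # 70
```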

This discussion excludes more uncertain and much longer term feedbacks involving, for example, changes to the polar ice sheets (and consequentially, the Earth’s albedo), release of methane from northern latitudes or methane clathrates from the oceans. These are not part of the ‘additional warming commitment’, even in the IPCC 2013 report, as they are considered too speculative and uncertain to be quantified.

. . o O o . .


Filed under Climate Science

Animating IPCC Climate Data

The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general  audience. They are not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes), produced some guidelines a while ago on better use of language and diagrams.

Coming out of the HELIX project, and through a series of workshops, a collaboration with the Tyndall Centre and Climate Outreach has produced a comprehensive guide (Guide With Practical Exercises to Train Researchers in the Science of Climate Change Communication).

The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.

Much of this guidance concerns the ‘soft’ aspects of communication: how the communicator views themself; understanding the audience; building trust; coping with uncertainty; etc.

Some of this reflects ideas that are useful not just to scientific communication, but almost any technical presentation in any sector, but that does not diminish its importance.

This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.

I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.

In many years of being involved in presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communication potential, but also the risks, of diagrams. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!

The unwritten rules relating to the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:

“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”

He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.

Tufte has been highly critical of the over reliance on Powerpoint for technical reporting (not just presentations) in NASA, because the form of the content degrades the narrative that should have been an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.

There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.

The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.

It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, etc.. It might seem difficult to condense this material without loss of important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1500 pages long.

Nevertheless, the IPCC endeavours to help policy-makers by providing them with summaries and also a synthesis report, to provide the essential underlying knowledge that policy-makers need to inform their discussions on actions in response to the science.

However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.

This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.

So yes, using artistic flair to improve the comprehensibility of the findings risks losing the nuance and caution that are a hallmark of science. The countervailing risk is that people do not really ‘get it’, and do not appreciate what they are seeing.

We have seen with the Challenger reports that people did not appreciate the issue with the O-rings, especially when key facts were buried in 5 levels of indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings are lost in a fog of complexity.

That is why any attempt to improve the summaries for policy makers and the general public must continue to involve those who are responsible for the overall integrity and consistency of the different products, and must not simply be hived off to a separate group of ‘creatives’ who would lack knowledge and insight into the nuances that need to be respected.  But those complementary skills – data visualizers, graphics artists, and others – need to be included in this effort to improve science communications. There is also a need for those able to critically evaluate the pedagogic value of the output (along the lines of Tufte), to ensure the results really inform, and do not confuse.

Some individuals have taken to social media to present their own examples of how to present information, often employing animation (something that is clearly not possible for the printed page, or its digital analogue, the PDF document). Perhaps the best known example to date is Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:


This animation went viral, and was even featured as part of the Rio Olympics Opening Ceremony. This and other spiral animations can be found at the Climate Lab Book site.

There are now a number of other great producers of animations; a few examples follow.

Here, Kevin Pluck (@kevpluck) illustrates the link between rising carbon dioxide levels and rising mean surface temperature since 1958 (the year when direct and continuous measurements of carbon dioxide were pioneered by Keeling).

Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.

Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.

A final example along the same lines is from John Kennedy (@micefearboggis), which is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar the higher the increase in temperature at that location, relative to the temperature baseline at that location (which scientists call the ‘anomaly’). This is why we see the greatest warming in the Arctic, as it is warming proportionally faster than the rest of the planet; this is one of the big takeaways from this animation.

These examples of animation are clearly not dumbing down the data, far from it. They  improve the chances of the general public engaging with the data. This kind of animation of the data provides an entry point for those wanting to learn more. They can then move onto a narrative treatment, placing the animation in context, confident that they have grasped the essential information.

If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above that reveal the essential knowledge that needs to be communicated.

(c) Richard W. Erskine, 2018


Filed under Climate Science, Essay, Science Communications

When did you learn about the Holocaust?

“Where were you when Kennedy was shot?”,

used to be the question everyone asked, but it is of course an increasingly irrelevant question in an ageing population.

But a question that should never age, and should stay with us forever, is

“When did you learn about the holocaust?”.

I remember when I first learned about the holocaust, and it remains seared into my consciousness, thanks to a passionate and dedicated teacher, Mr Cromie.

I was a young child at a boarding school, Stouts Hill Preparatory School, in the little village of Uley in Gloucestershire. The school no longer exists, but that memory never fades. You cannot ‘unlearn’ something like that.

I was no more than 12 at the time, so this would have been 1965 or earlier, and our teacher told us about the mass murder of the Jews in Nazi Germany, but with a sense of anger and resentment at the injustice of this monstrous episode in history. And it has often occurred to me since that the peak of this programme of murder was just 10 years before I was born.

But what did I learn and what did I remember? I learned about the gas chambers, and the burning of bodies, but it was all a kind of vague memory of an atrocity, difficult to properly make sense of at that age.

What we did not really learn was the process by which a civilised country like Germany could turn from being at the centre of European culture to a murderous genocidal regime in just a decade.

For British viewers, this story of inhumanity was often framed through the lens of Bergen-Belsen, because it was the Brits that liberated this Concentration Camp, and the influential Richard Dimbleby was there to deliver his sonorous commentary on the horrors of the skeletal survivors and piles of corpses.

But it is curious how this story is still the reflex image that many Britons have of the holocaust, and I have often wondered why.  The Conversation tried to provide an answer:

“But even though many, if not most, of those involved in the rescue and relief effort were aware of the fact that Jews made up the largest number of the victims, the evolving official British narrative sidestepped this issue. The liberation of Bergen-Belsen became separated from what the people held in this camp had had to endure, and why they had been incarcerated in the first place.

Instead, the liberation of Bergen-Belsen was transformed into a British triumph over “evil”. The event was used to confirm to the wider British public that the British Army had fought a morally and ethically justified war, that all the personal and collective sacrifices made to win the war had now been vindicated. Bergen-Belsen gave sense and meaning to the British military campaign against Nazi Germany and the Allied demand for an unconditional surrender. The liberation of the camp became Britain’s finest hour.”

Each country, each culture, and each person, constructs their own narrative to try to make sense of the horror.

But despite the horror of Bergen-Belsen, and the 35,000 who died there, it is barely a footnote in the industrialised murder campaign that the Nazi leadership planned and executed.

Despite the fact that most people are vaguely aware of a figure of several million Jews and others dying, they are rather less aware of the distinction between Concentration Camps and Death Camps (also known as Extermination Camps).

Many died in the numerous Concentration Camps, as Wikipedia describes:

“Many of the prisoners died in the concentration camps due to deliberate maltreatment, disease, starvation, and overwork, or they were executed as unfit for labor. Prisoners were transported in inhumane conditions by rail freight cars, in which many died before reaching their final destination. The prisoners were confined in the boxcars for days or even weeks, with little or no food or water. Many died of dehydration in the intense heat of summer or froze to death in winter. Concentration camps also existed in Germany itself, and while they were not specifically designed for systematic extermination, many of their inmates perished because of harsh conditions or they were executed.”

The death camps at Chełmno, Treblinka, Sobibór and Belzec were designed purely as places of murder.  It is not simply about the arithmetic of the holocaust. After all, the death squads and related actions in the east accounted for 2.5 million murders, and the death camps over 3 million. But it is the sheer refinement of the industrialization of murder at the Extermination Camps that is difficult to comprehend:

“Visitors to the sites of Belzec, Sobibor and Treblinka (of whom there are far, far fewer than travel to Auschwitz) are shocked by how tiny these killing camps were. A total of around 1.7 million people were murdered in these three camps – 600,000 more than the murder toll of Auschwitz – and yet all three could fit into the area of Auschwitz-Birkenau with room to spare. In a murder process that is an affront to human dignity at almost every level, one of the greatest affronts – and this may seem illogical unless you have actually been there – is that so many people were killed in such a small area.”

Auschwitz: The Nazis & The ‘Final Solution’ – Laurence Rees, BBC Books, 2005

Majdanek and Auschwitz also became Extermination Camps, but were dual purpose, also being used as Concentration Camps, so they had accommodation, bunks, and so forth that were not needed in the small camps designed purely for murder.

It is helpful to those who deny the holocaust or its full horror that Belzec, Sobibor and Treblinka have not entered into the public imagination in the way that Auschwitz has. Because Auschwitz was dual use, it is easier to play on this apparent ambiguity, and to construct a denial narrative along the lines of: many died from hard labour, it was not systematic murder.

And of course, not knowing about Belzec, Sobibor, Treblinka and Chełmno is a lot easier than knowing, because they expose the full, unadulterated horror.

Remember that the Final Solution came after a decade of murderous projects – the death squads in the east, the euthanasia programmes, and early experiments with gassing – which led to the final horror of the Extermination Camps.

You can never stop learning, because you will never hear all the details, read all the books, or hear all the testimonies.

But if you ever find yourself not feeling deeply uncomfortable (as well as deeply moved) by the horrors of the Holocaust, then it is time to not turn away. To take another look.

For us today, the most important lesson is that it is possible for even a sophisticated and educated country to succumb to a warped philosophy that blames the ‘other’ for  problems in society, and to progressively desensitize the people to greater and greater levels of dehumanisation.

While nothing on the scale of the holocaust has occurred again, can we be confident that it never could? When we see what has happened under Pol Pot, or in Srebrenica, or in Rwanda, we know that the capacity of people to dehumanise ‘others’ for reasons of ethnicity or politics, and to murder them in large numbers, has not gone away.

The price of freedom, and decency in a society, is eternal vigilance.

Calling out hate speech is therefore, in a small way, honouring the 6 million – the great majority of whom were Jews – who died in the holocaust. It is stamping out that first step in that process of dehumanisation that is the common precursor of all genocidal episodes in history. It is always lurking there, waiting to consume a society that is looking for simple answers, and for someone to blame.

When did I learn about the holocaust?

I never stop learning.


#HolocaustMemorialDay #WeRemember


Filed under Holocaust, Uncategorized

Matt Ridley shares his ignorance of climate science (again)

Ridley trots out a combination of long-refuted myths that are much loved by contrarians; bad or crank science; or misunderstandings as to the current state of knowledge. In the absence of a Climate Feedback dissection of Ridley’s latest opinion piece, here is my response to some of his nonsense …

Here are five statements he makes that I will refute in turn.

1. He says: Forty-five years ago a run of cold winters caused a “global cooling” scare.

I say:

Stop repeating this myth, Matt! A few articles in popular magazines in the 1970s speculated about an impending ice age, and so dissemblers like Ridley state or imply that this was the scientific consensus at the time (snarky message: silly scientists can’t make their minds up). This is nonsense, but it is so popular amongst contrarians that it is repeated frequently to this day.

If you want to know what scientists were really thinking and publishing in scientific papers, read “The Myth of the 1970s Global Cooling Scientific Consensus” by Thomas Peterson et al. (2008), American Meteorological Society.

Warming, not cooling, was the greater concern. It is astonishing that Ridley and others continue to repeat this myth. Has he really been unable – in the ten years since it was published – to read this oft-cited article and so disabuse himself of the myth? Or does he deliberately repeat it because he thinks his readers are too lazy or too dumb to check the facts? How arrogant would that be?

2. He says: Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true.

I say:

Oh dear, he cites the work of Zharkova, saying he is not persuaded, but then talks of ‘slowly slipping into a proper ice age’. A curious non sequitur. While we are on Zharkova, her work suffered from being poorly communicated.

And quantitatively, her work has no relevance to the current global warming we are observing. The solar minimum might create a -0.3C contribution over a limited period, but that would hardly put a dent in the +0.2C per decade rate of warming.

But, let’s return to the ice age cycle. What Ridley obdurately refuses to acknowledge is that less than 200 years of man-made changes to the Earth’s atmosphere – raising CO2 to levels not seen for nearly 1 million years (spanning 10 ice age cycles) – is raising the global mean surface temperature at an unprecedented rate.

Therefore, talking about the long slow descent over thousands of years into an ice age that ought to be happening (based on the prior cycles) is frankly bizarre, especially given that man-made warming is now very likely to delay a future ice age. As a paper by Ganopolski et al., Nature (2016), has estimated:

“Additionally, our analysis suggests that even in the absence of human perturbations no substantial build-up of ice sheets would occur within the next several thousand years and that the current interglacial would probably last for another 50,000 years. However, moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years.”

And why stop there, Matt? Our expanding sun will boil away the oceans in a billion years time, so why worry about Brexit; and don’t get me started on the heat death of the universe. It’s hopeless, so we might as well have a great hedonistic time and go to hell in a handcart! Ridiculous, yes, but no less so than Ridley conflating current man-made global warming with a far, far off ice age, that recedes with every year we fail to address man-made emissions of CO2.

3. He says: Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period, over 100,000 years ago, carbon dioxide levels remain high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward.

I say:

The ice ages have indeed been a focus of study since Louis Agassiz coined the term in 1837, and there have been many twists and turns in our understanding of them even up to the present day, but Ridley’s over-simplification shows his ignorance of the evolution of this understanding.

The Milankovitch Cycles are key triggers for entering an ice age (and indeed, leaving it), but it is the changes in atmospheric concentrations of carbon dioxide that drive the cooling (entering) and warming (leaving) of an ice age – something that was finally accepted by the science community following Hays et al.’s seminal 1976 paper (“Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”), over 50 years after Milankovitch first did his work.

But the ice core data that Ridley refers to confirms that carbon dioxide is the driver, or ‘control knob’, as Professor Richard Alley explains it; and if you need a very readable and scientifically literate history of our understanding of the ice cores and what they are telling us, his book “The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and Our Future” is a peerless, and unputdownable introduction.

Professor Alley offers an analogy. Suppose you take out a small loan, but then interest is added, and keeps being added, so that after some years you owe a lot of money. Was it the small loan, or the interest rate, that created the large debt? You might say both, but it is certainly ridiculous to say that the interest rate is unimportant because the small loan came first.

But despite its complexity, and despite the fact that the so-called ‘lag’ does not refute the dominant role of CO2, scientists are interested in explaining such details and have indeed studied the ‘lag’. In 2012, Shakun and others published a paper doing just that: “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Jeremy D. Shakun et al., Nature 484, 49–54, 5 April 2012). Since you may struggle to see a copy of this paywalled paper, a plain-English summary is available.

Those who read headlines and not contents – like the US Politician Joe Barton – might think this paper is challenging the dominant role of CO2, but the paper does not say that.  This paper showed that some warming occurred prior to increased CO2, but this is explained as an interaction between Northern and Southern hemispheres, following the Milankovitch original ‘forcing’.

The role of the oceans is crucial in fully explaining the temperature record, and can add significant delays in reaching a new equilibrium. There are interactions between the oceans in Northern and Southern hemispheres that are implicated in some abrupt climate change events (e.g.  “North Atlantic ocean circulation and abrupt climate change during the last glaciation”, L. G. Henry et al, Science,  29 July 2016 • Vol. 353 Issue 6298).

4. He says: Here is an essay by Willis Eschenbach discussing this issue. He comes to five conclusions as to why CO2 cannot be the main driver

I say:

So Ridley quotes someone with little or no scientific credibility who has managed to publish in Energy & Environment. Its editor Dr Sonja Boehmer-Christiansen admitted that she was quite partisan in seeking to publish ‘sceptical’ articles (which actually means, contrarian articles), as discussed here.

Yet, Ridley extensively quotes this low grade material, but could have chosen from hundreds of credible experts in the field of climate science. If he’d prefer ‘the’ textbook that will take him through all the fundamentals that he seems to struggle to understand, he could try Raymond Pierrehumbert’s seminal textbook “Principles of Planetary Climate”. But no. He chooses Eschenbach, with a BA in Psychology.

Ridley used to put up the appearance of interest in a rational discourse, albeit flying in the face of the science. That mask has now fully and finally dropped, as he is now channeling crank science. This is risible.

5. He says: The Antarctic ice cores, going back 800,000 years, then revealed that there were some great summers when the Milankovich wobbles should have produced an interglacial warming, but did not. To explain these “missing interglacials”, a recent paper in Geoscience Frontiers by Ralph Ellis and Michael Palmer argues we need carbon dioxide back on the stage, not as a greenhouse gas but as plant food.

I say:

The paper is 19 pages long, which is unusual in today’s literature. The case made is intriguing but not convincing, though I leave it to the experts to properly critique it. It takes a complex system – one in which, for example, we know that large movements of heat in the ocean have played a key role in variability – and tries to infer that dust is the primary driver explaining the interglacials, while discounting the role of CO2 as a greenhouse gas.

The paper curiously does not cite the seminal paper by Hays et al (1976), yet cites a paper by Willis Eschenbach published in Energy & Environment (which I mentioned earlier). All this raised concerns in my mind about this paper.

Extraordinary claims require extraordinary evidence and scientific dialogue, and it is really too early to say whether this paper amounts to something or nothing – even if that need not mean waiting the 50-odd years that Milankovitch’s work had to endure before it was widely accepted. Good science is slow, conservative, and rigorous, and the emergence of a consilience on the science of our climate has taken a very long time, as I explored in a previous essay.

Ralph Ellis on his website (which shows that his primary interest is the history of the life and times of Jesus) states:

“Ralph has made a detour into palaeoclimatology, resulting in a peer-review science paper on the causes of ice ages”, and after summarising the paper says,

“So the alarmists were right about CO2 being a vital forcing agent in ice age modulation – just not in the way they thought”.

So was this paper an attempt to clarify what was happening during the ice ages, or a contrivance, to take a pot shot at carbon dioxide’s influence on our contemporary climate change?

The co-author, Michael Palmer, is a biochemist, with no obvious background in climate science and provided “a little help” on the paper according to his website.

But on a blog post comment he offers a rather dubious extrapolation from the paper:

“The irony is that, if we should succeed in keeping the CO2 levels high through the next glacial maximum, we would remove the mechanism that would trigger the glacial termination, and we might end up (extreme scenario, of course) [in] another Snowball Earth.”

They both felt unembarrassed participating in comments on the denialist blog site WUWT. Quite the opposite, they gleefully exchanged messages with a growing band of breathless devotees.

But even if my concerns about the apparent bias and amateurism of this paper were allayed, the conclusion (which Ridley and Ellis clearly hold to) that the current increase in carbon dioxide is nothing to be concerned about does not follow from this paper. It is a non sequitur.

If I discovered a strange behaviour like, say, the Coriolis force way back when, the first conclusion would not be to throw out Newtonian mechanics.

The physics of CO2 is clear. How the greenhouse effect works is clear, including for the conditions that apply on Earth, with all remaining objections resolved since no later than the 1960s.

We have a clear idea of the warming effect of increased CO2 in the atmosphere including short term feedbacks, and we are getting an increasingly clear picture of how the Earth system as a whole will respond, including longer term feedbacks.  There is much still to learn of course, but nothing that is likely to require jettisoning fundamental physics.

The recent excellent timeline published by Carbon Brief showing the history of the climate models, illustrates the long slow process of developing these models, based on all the relevant fundamental science.

This history has shown how different elements have been included in the models as the computing power has increased – general circulation, ocean circulation, clouds, aerosols, carbon cycle, black carbon.

I think it is really because Ridley still doesn’t understand how an increase from 0.03% to 0.04% over 150 years or so, in the atmospheric concentration of CO2, is something to be concerned about (or as I state it in talks, a 33% rise in the principal greenhouse gas; which avoids Ridley’s deliberately misleading formulation).
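The two framings are the same arithmetic, using the rounded figures above (0.03% is roughly 300 ppm, 0.04% roughly 400 ppm):

```latex
\frac{0.04\% - 0.03\%}{0.03\%} \;=\; \frac{400 - 300}{300} \;=\; \frac{1}{3} \;\approx\; 33\%
```

Quoting the change in absolute percentage points makes it sound negligible; quoting the relative rise makes the scale of the perturbation plain.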

He denies that he denies the Greenhouse Effect, but every time he writes, he reveals that really, deep down, he still doesn’t get it. To be as generous as I can to him, he may suffer from a perpetual state of incredulity (a common condition I have written about before).


In an interview he gave to Russ Roberts in 2015, Matt Ridley reveals his inability to grasp even the most basic science:

“So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”

He seems to be saying that the water vapour is in the form of clouds – some high altitude, some low – which have opposite effects (so far, so good), so the warming should be 1C – just the carbon dioxide component – from a doubling of CO2 concentrations (so far, so bad).  The clouds represent a condensed (but not yet precipitated) phase of water in the atmosphere, but he seems to have overlooked that water also comes in a gaseous phase (not clouds). It is that gaseous phase that is providing the additional warming, bringing the overall warming to 3C.
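The amplification Ridley disputes can be sketched with the standard zero-dimensional feedback relation (the numbers below are illustrative round values of mine, not figures from the interview):

```latex
\Delta T \;=\; \frac{\Delta T_0}{1 - f}
```

Here $\Delta T_0 \approx 1.2\,^{\circ}\mathrm{C}$ is the no-feedback (Planck) response to a doubling of CO2, and $f$ is the net feedback factor; a net $f \approx 0.6$, dominated by water vapour, gives $\Delta T \approx 3\,^{\circ}\mathrm{C}$ – the ‘trebling’ he finds so implausible.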

The increase in water vapour concentration follows from “a well-established physical law (the Clausius-Clapeyron relation) [which] determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2).
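As a back-of-envelope sketch of what that 7% rule implies (my own illustration, not from the IPCC text), the capacity multiplier compounds with each degree of warming:

```python
def vapour_capacity_factor(delta_t_celsius, rate=0.07):
    """Multiplier on the atmosphere's water-holding capacity after warming.

    Assumes the ~7% per 1 C Clausius-Clapeyron rule of thumb compounds.
    """
    return (1 + rate) ** delta_t_celsius

for dt in (1, 2, 3):
    print(f"+{dt} C -> capacity x {vapour_capacity_factor(dt):.2f}")
```

So a 3°C rise implies roughly a 20-25% increase in water-holding capacity, which is why the water vapour feedback matters so much.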

T.C. Chamberlin, writing in 1905 to Charles Abbott, explained the feedback role of water vapour with great clarity:

“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”

(Ref. “Historical Perspectives On Climate Change” by James Fleming, 1998)

It is now 113 years since Chamberlin wrote those words, but poor Ridley is still struggling to understand basic physics, so instead regales us with dubious science intended to distract and confuse.

When will Matt Ridley stop feeling the need to share his perpetual incredulity and obdurate ignorance with the world?

© Richard W. Erskine, 2018


Filed under Climate Science, Essay

Ending The Climate Solution Wars: A Climate Solutions Taxonomy

If you spend even a little time looking at the internet and social media in search of enlightenment on climate solutions, you will have noted that there are passionate advocates for each and every solution out there, who are also experts in the shortcomings of competing solutions!

This creates a rather unhelpful atmosphere for those of us trying to grapple with the problem of addressing the very real risks of dangerous global warming.

There are four biases – often implied but not always stated – that lie at the heart of these unproductive arguments:

  • Lack of clear evidence of the feasibility of a solution;
  • Failure to be clear and realistic about timescales;
  • Tendency to prioritize solutions in a way that marginalizes others;
  • Preference for top-down (centralization) or bottom-up (decentralization) solutions.

Let’s explore how these manifest themselves:

Feasibility: Lack of clear evidence of the feasibility of a solution

This does not mean that an idea does not have promise (and isn’t worthy of R&D investment), but refers to the tendency to champion a solution based more on wishful thinking than any proven track record. For example, small modular nuclear has been championed as the path to a new future for nuclear – small, modular, scaleable, safe, cheap – and there is an army of people shouting that this is true. We have heard recent news that the economics of small nuclear are looking a bit shaky. This doesn’t mean it’s dead, but it does rather put the onus on the advocates to prove their case, and cut the PR, as Richard Black has put it. Another one that comes to mind is ‘soil carbon’ as the single-handed saviour (as discussed in Incredulity, Credulity and the Carbon Cycle). The need to reform agriculture is clear, but it is also true (according to published science) that a warming earth could make soils a reinforcer of warming, rather than a cooling agent; the wisdom of resting our hopes on regenerative farming as the whole, or even a major part, of the answer is far from clear. The numbers are important.

Those who do not wish to deal with global warming (either because they deny its seriousness or because they do not like the solutions) quite like futuristic solutions, because while we are debating long-off solutions, we are distracted from focusing on implementing existing solutions.

Timescale: Failure to be clear and realistic about timescales

Often we see solutions that clearly have promise and will be able to make a major contribution in the future. The issue is that even when they have passed the feasibility test, they fail on the timescale required. There is not even a single timescale, as discussed in Solving Man-made Global Warming: A Reality Check: we have an immediate need to reduce carbon emissions (say, 0-10 years), then an intermediate timeframe in which to implement an energy transition (say, 10-40 years). Renewable energy is key to the latter but cannot make a sufficient contribution to the former (that can only be done by individuals and communities reducing their carbon intensity). And whatever role Nuclear Fusion has for the future of humanity, it is totally irrelevant to the challenge we face in the next 50 years to decarbonize our economy.

The other aspect of timescale that is crucial is that the eventual warming of the planet is strongly linked to the peak atmospheric concentration, whereas the peak impacts will be delayed for decades or even centuries, before the Earth system finally reaches a new equilibrium. Therefore, while the decarbonization strategy requires solutions over, say, the 2020-2050 timeframe, the implied impacts timeframe could be 2050-2500, and this delay can make it very difficult to appreciate the urgency for action.

Priority: Tendency to prioritize solutions in a way that precludes others

I was commenting on Project Drawdown on twitter the other day and this elicited a strong response because of a dislike of a ‘list’ approach to solutions. I also do not like ‘lists’ when they imply that the top few should be implemented and the bottom ones ignored.  We are in an ‘all hands on deck’ situation, so we have to be very careful not to exclude solutions that meet the feasibility and timescale tests. Paul Hawken has been very clear that this is not the intention of Project Drawdown (because the different solutions interact and an apparently small solution can act as a catalyst for other solutions).

Centralization: Preference for top-down (centralization) or bottom-up (decentralization) solutions

Some people like the idea of big solutions, which are often underwritten by centralised entities like Governments. They argue that big impacts require big solutions, and so they have a bias towards solutions like nuclear and an antipathy to lower-tech and less energy-intensive solutions like solar and wind.

Others take quite the opposite perspective. They are suspicious of Governments and big business, and like the idea of community-based, less intensive solutions. They are often characterized as unrealistic, because the unending thirst of humanity for consumption suggests an unending need for highly intensive energy sources.

The antagonism between these world views often obscures the obvious: that we will need both top-down and bottom-up solutions. We cannot all have everything we would like. Some give and take will be essential.

This can make for strange bedfellows. Both environmentalists and Tea Party members in Florida supported renewable energy for complementary reasons, and they became allies in defeating large private utilities who were trying to kill renewables.

To counteract these biases, we need to agree on some terms of reference for solving global warming.

  • Firstly, we must of course be guided by the science (namely, the IPCC reports and its projections) in order to measure the scale of the response required. We must take a risk management approach to the potential impacts.
  • Secondly, we need to start with an ‘all hands on deck’ or inclusive philosophy: because we have left it so late to tackle decarbonization, we must be very careful before we throw out any ideas.
  • Thirdly, we must agree on a relevant timeline for those solutions we will invest in and scale immediately. For example, for Project Drawdown, that means solutions that are proven, can be scaled and make an impact over the 2020-2050 timescale. Those that cannot need not be ‘thrown out’ but may need more research & development before they move to being operationally scaled.
  • Fourthly, we allow both top-down (centralized) and bottom-up (decentralized) solutions, but recognise that while Governments dither, it will be up to individuals and social enterprise to act; so in the short-to-medium term, it will be the bottom-up solutions that have the greater impact. Ironically, the ‘World Government’ that right-wing conspiracy theorists most fear is not what we need right now, and on that the environmentalists mostly agree!

In the following Climate Solutions Taxonomy I have tried to provide a macro-level view of different solution classes. I have included some solutions to which I am not sympathetic, such as nuclear and geo-engineering. But bear in mind that the goal here is to map out all solutions; it is not ‘my’ set of solutions, and is not itself a recommendation or plan.

On one axis we have the top-down versus bottom-up dimension, and on the other axis, broad classes of solution. The taxonomy is therefore not a simple hierarchy, but is multi-dimensional (here I show just two dimensions, but there are more).

Climate Solutions Taxonomy macro view

While I would need to go to a deeper level to show this more clearly, the arrows are suggestive of the system feedbacks that reflect synergies between solutions. For example, solar PV in villages in East Africa supports education, which in turn supports improvements in family planning.

It is incredible to me that while we have (properly) invested a lot of intellectual and financial resources in scientific programmes to model the Earth’s climate system (and impacts), there has been dramatically less modelling effort on the economic implications that will help support policy-making (based on the damage from climate change, through what are called Integrated Assessment Models).

But what is even worse is that there seems to have been even less effort – or barely any –  modelling the full range of solutions and their interactions. Yes, there has been modelling of, for example, renewable energy supply and demand (for example in Germany), and yes, Project Drawdown is a great initiative; but I do not see a substantial programme of work, supported by Governments and Academia, that is grappling with the full range of solutions that I have tried to capture in the figure above, and providing an integrated set of tools to support those engaged in planning and implementing solutions.

This is unfortunate at many levels.

I am not here imagining some grand unified theory of climate solutions, where we end up with a spreadsheet telling us how much solar we should build by when and where.

But I do envisage a heuristic tool-kit that would help a town such as the one where I was born (Hargeisa in Somaliland), or the town in which I now live (Nailsworth in Gloucestershire in the UK), to work through what works for them, and to plan and deliver solutions. Each may arrive at different answers, but all need to be grounded in a common base of data and ‘what works’, and a more qualitative body of knowledge on synergies between solutions.

Ideally, the tool-kit would be usable at various levels of granularity, so it could be used at different scales, and different solutions would emerge at different scales.

A wide range of both quantitative and qualitative methods may be required to grapple with the range of information covered here.

I am looking to explore this further, and am interested in any work or insights people have. Comments welcome.

(c) Richard W. Erskine, 2017


Filed under Uncategorized

Deficit, Debt and stalling carbon dioxide emissions

This essay is based on an extract from a talk I gave recently that was well received. This specific part of the talk was described as very helpful in clarifying matters related to our carbon dioxide emissions. I hope others also find it useful.

David Cameron said on 24 January 2013 “We’re paying down Britain’s debts” and got a lot of stick for this misleading statement. Why? Let me try to explain.

The deficit is the annual amount by which we spend more than we get in taxes. Whereas, the debt is the cumulative sum of year on year deficits.

As many politicians do, Cameron was using language designed to be, shall we say, ‘economical with the truth’. He was not the first, and he won’t be the last.

We can picture deficit being added to our debt using the following picture (or for greater dramatic effect, do it live if you are giving a talk):

Screen Shot 2017-11-23 at 17.10.49

If the deficit declines this year compared to last year, that may be of great solace to the Chancellor (and that was the situation in 2013), because maybe it’s the start of a trend that will mean that the debt may reach a peak.

Cameron could have said: “Our debt keeps rising, but at least the rate at which it is rising is slightly less than last year. We’ll need to borrow some more to cover the additional deficit.” That would have been an honest statement, but he didn’t make it. It simply wouldn’t have cut it with the spin doctors.

The reality is that the only thing we can conclude from a deficit this year that is smaller than last year is that the debt has increased by an amount less than last year. That’s it. It doesn’t sound quite so great put that way, does it?

You need year-on-year surpluses to actually bring the debt down.
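The deficit/debt arithmetic can be sketched in a few lines (the figures here are hypothetical, purely for illustration):

```python
# Hypothetical deficit figures (illustration only, in billions per year),
# showing that a *falling* deficit still means a *rising* debt:
# the debt is the running sum of the deficits.
deficits = [120, 100, 90, 80]  # a deficit that declines year on year

debt = 0
for year, d in enumerate(deficits, start=1):
    debt += d  # each year's deficit is added to the accumulated debt
    print(f"Year {year}: deficit {d}, debt {debt}")

# Only a surplus (a negative deficit) would bring the debt down.
```

Even though the deficit shrinks every year, the debt climbs in every single year.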

Deficit and debt are useful in making an analogy with carbon dioxide in the atmosphere, because the confusion – intended or accidental – over deficit and debt, is very similar to the confusion that occurs in the mind of the public when the media report changes in our carbon emissions.

Let’s explore the analogy by replacing “Deficit” with “Emissions”, and “Debt” with “Atmospheric Concentration” …

The annual emissions add to the cumulative emissions in the atmosphere, i.e. the raised Atmospheric Concentration.

Screen Shot 2017-11-23 at 17.11.25

There are two differences with the financial analogy when we think about carbon dioxide in the atmosphere.

Firstly, when we add, say, 40 billion tonnes of carbon dioxide to the atmosphere (the green coloured area represents the added carbon dioxide) …

Screen Shot 2017-11-23 at 17.11.37

… then, within a short time (about 5 years), 50% of the added carbon dioxide (that is, 20 billion tonnes in this illustration) is absorbed by the oceans and biosphere, leaving the remainder in the atmosphere, and we can visualize this balance as follows (Credit: Rabett Run, which includes a more technical description and an animation) –

Screen Shot 2017-11-23 at 17.11.52
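The arithmetic of that illustration can be sketched as follows (the conversion of roughly 7.8 billion tonnes of CO2 per ppm is my own added assumption, a commonly quoted figure, not from the text above):

```python
# The passage's numbers: ~40 Gt CO2 emitted in a year, of which about half
# stays in the atmosphere. The Gt-per-ppm conversion is my added assumption.
EMITTED = 40.0           # Gt CO2 added to the atmosphere in a year
AIRBORNE_FRACTION = 0.5  # share not absorbed by oceans and biosphere
GT_CO2_PER_PPM = 7.8     # roughly 7.8 Gt CO2 corresponds to 1 ppm

retained = EMITTED * AIRBORNE_FRACTION  # ~20 Gt remains airborne
ppm_rise = retained / GT_CO2_PER_PPM    # implied annual rise in concentration
print(f"Retained: {retained:.0f} Gt CO2 -> roughly {ppm_rise:.1f} ppm per year")
```

That implied rise of a couple of ppm per year is broadly what the Keeling Curve has been recording.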

Secondly, unlike with the economy, once the atmospheric concentration of carbon dioxide goes up, it stays up for hundreds of years (and to get back to where it started, thousands of years), because, for one thing, the processes that take carbon from the upper ocean to the deep ocean are very slow.

Unlike with the economy, our added carbon dioxide concentration in the atmosphere always goes in the wrong direction; it increases.

So when we see stories that talk about “emissions stalling” or other phrases that seem to offer reassurance, remember, they are talking about emissions (deficit) NOT concentrations (debt).

The story title below is just one example, taken from the Financial Times (and I am not picking on the FT, but it shows that this is not restricted to the tabloids).

Whenever we see a graph of emissions over the years (graph on the left), the Health Warning should always be the Keeling Curve (graph on the right).

Screen Shot 2017-11-23 at 17.12.05

So the global carbon dioxide emissions in 2014 and 2015 were 36.08 and 36.02 billion tonnes, respectively. Cause for cautious rejoicing? Well, given the huge number of variables that go into this figure (the GDP of each nation; their carbon intensity; the efficiency level of equipment and transport; and so on), projecting a trend from a few years is a tricky business, and some have devoted their lives to tracking this figure. Important work, for sure.

Then 2016 came along and the figure was similar but slightly raised, at 36.18 billion tonnes.

But emissions were said to have stalled … 36.08, 36.02 and 36.18.

I liken this to heading for the cliff edge at a steady pace, but at least no longer accelerating. Apparently that is meant to be reassuring.

Then comes the projected figure for 2017, which includes a bit of a burp of carbon dioxide from the oceans – courtesy of the strong El Nino – and this was even predicted, and horror of horrors, it makes headline news around the world.

We have jumped by 2% over the previous year (actually 1.7% to 36.79 billion tonnes). Has the ‘stall’ now unstalled? What next?

The real headline is that we are continuing to emit over 35 billion tonnes of carbon dioxide, year on year without any sign of stopping.

Only when emissions go down to zero will the atmospheric concentration STOP rising.
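Using the essay’s own figures, a few lines of arithmetic show why “stalled” emissions are no comfort: the cumulative total – the analogue of the debt – keeps climbing regardless:

```python
# Annual emissions quoted in the essay (Gt CO2); the 2017 figure is the
# projection mentioned in the text.
emissions = [36.08, 36.02, 36.18, 36.79]

cumulative = 0.0
for year, e in zip(range(2014, 2018), emissions):
    cumulative += e
    print(f"{year}: emitted {e:5.2f}, cumulative {cumulative:6.2f}")

# The 2016 -> 2017 jump, as quoted in the text: about 1.7%
pct = 100 * (emissions[3] - emissions[2]) / emissions[2]
print(f"2016 -> 2017 change: {pct:.1f}%")
```

Whether the annual figure wobbles up or down by a fraction of a percent, the cumulative column – the one that drives the concentration – only ever increases.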

So in relation to our emissions what word do we want to describe it? Not stall, not plateau, not ease back, but instead, stop, finito or end. They’ll do.

I have discovered – from talking to people who do not follow climate change on Twitter or the blogosphere, and are not fans of complex data analysis – that the explanation above was very helpful to them, and is not widely appreciated.

But in a sense, this is probably the most important fact about climate change that everyone needs to understand, that

the carbon dioxide concentration will only stop rising when emissions completely stop.

The second most important fact is this:

whatever value the atmospheric concentration of carbon dioxide gets to – at that point in the future when we stop adding more – that is where it will stay for my grandchild, and her grandchildren, and their grandchildren, and so on … for centuries* to come.

The Keeling Curve – which measures the global atmospheric concentration of carbon dioxide – is the only curve that matters, because until it flattens we will not know how much warming there will actually be; and that is because the third most important fact people must understand is this:

broadly speaking, the level of warming is proportional to the peak concentration of carbon dioxide.
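One standard way to quantify the link between peak concentration and eventual warming is the logarithmic approximation, combined with the ~3°C-per-doubling sensitivity discussed earlier in the Ridley essay. The sketch below is my illustration of that approximation, not a claim from the text:

```python
import math

SENSITIVITY = 3.0  # deg C per doubling of CO2, as discussed earlier
C0 = 280.0         # pre-industrial CO2 concentration, ppm

def eventual_warming(peak_ppm):
    """Equilibrium warming under the standard logarithmic approximation."""
    return SENSITIVITY * math.log2(peak_ppm / C0)

for peak in (400, 450, 560):
    print(f"peak {peak} ppm -> ~{eventual_warming(peak):.1f} C")
```

A doubling to 560 ppm gives the full 3°C; every further increment of peak concentration locks in more warming, which is why the peak is the number that matters.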

So when we see stories that talk about “emissions stalling” or other phrases that seem to offer hope that we’ve turned a corner, remember, they are talking about emissions (deficit) NOT concentrations (debt).

It is amazing how often the deficit/debt confusion is played on by politicians regarding the nation’s finances.

The ’emissions stalling’ narrative of the last few years has led many to imagine we are, if not out of the woods, then on our way, but I think the confusion here is a failure of the media and other science communicators to always provide a clear health warning.

The truth is that we, as a species, are a long way still from showing a concerted effort to get out of the woods. Worse still, we are arguing amongst ourselves about which path to take.

(c) Richard W. Erskine, 2017


[* Unless and until we find a way to artificially extract and sequester carbon dioxide; this is still only R&D and not proven at scale yet, so does not rescue the situation we face in the period leading to 2050. We need to halt emissions, not just “stall” them.]

#carbondioxide #emissions #debt #deficit


Filed under Uncategorized

Musing on the loss of European Medicines Agency (EMA) from the UK

People are arguing as to whether the loss of the EMA from the UK will hurt us or not, and I think they are missing some nuance.

The ICH (International Council for Harmonisation) has helped pharma to harmonize the way drugs are tested, licensed and monitored globally (albeit with variations), enabling drugs to be submitted for licensing in the largest number of countries possible.

For the UK’s Big Pharma, the loss of the EMA is a blow but not a fatal one; they have entities everywhere, and they’ll find a way.

There are 3 key issues I see, around Network, Innovation and Influence:

  1. Network – New drug development is now more ‘ecosystem’ based, not just big pharma alone, and the UK has lots of large, medium and small pharma, in both private and public institutions (Universities, the Francis Crick Institute, etc.). So do other EU countries, which form part of the extended network of collaboration. The UK leaving the EU will disrupt this network, and the loss of the EMA subtly changes the centre of power.
  2. Innovation – Further to the damage to networks, and despite ICH’s harmonization, being outside the EU inevitably creates issues for the smaller innovators with less reach, shallower pockets, and a greater challenge in adapting to the new reality.
  3. Influence – Not being at the EMA table (wherever its HQ is based) means that we cannot guide the development of regulation, which is on an inexorable path of even greater harmonization. Despite the UK’s self-loathing re. ‘not being as organized as the Germans’, the Brits have always been better than most at regulation; it’s deep in our culture (indeed much of the EU regulation that neoliberals rail against has been gold-plated by the UK on reaching our shores). But outside the EU, and outside the EMA, we won’t be in a position to apply these skills, and our influence will wane.

Unfortunately, the Brexiters have shown that they misunderstand the complexity not merely of supply chains in the automotive sector, for example, but the more subtle connections that exist in highly sophisticated development lifecycles, and highly regulated sectors, like pharmaceuticals.

A key regulatory body moving from our shores will have long term consequences we cannot yet know.

Can Britain adapt to the new reality?

Of course it can, but do not expect it to be easy, quick or cheap to do so.

Expect some pain.



Filed under Uncategorized

Solving Man-made Global Warming: A Reality Check

Updated 11th November 2017 – Hopeful message following Figure added.

It seems that we are all – or most of us – in denial about the reality of the situation we are in with relation to the need to address global warming now, rather than sometime in the future.

We display seesaw emotions: optimistic that emissions have been flattening, but aghast that we had a record jump this year (which was predicted, but was news to the news people). It seems that people forget that if we have slowed from 70 to 60 miles per hour while approaching a cliff edge, the result will be the same, albeit deferred a little. We actually need to slam on the brakes and stop! Indeed, due to critical erosion of the cliff edge, we will even need to go into reverse.

I was chatting with a scientist at a conference recently:

Me: I think we need to accept that a wide portfolio of solutions will be required to address global warming. Pacala and Socolow’s ‘wedge stabilization’ concept is still pertinent.

Him: People won’t change; we won’t make it. We are at over 400 parts per million and rising, and have to bring this down, so some artificial means of carbon sequestration is the only answer.

This is just an example of many other kinds of conversations of a similar structure that dominate the blogosphere. It’s all about the future. Future impacts, future solutions. In its more extreme manifestations, people engage in displacement behaviour, talking about any and every solution that is unproven in order to avoid focusing on proven solutions we have today.

Yet nature is telling us that the impacts are now, and surely the solutions should be too; at least for implementation plans in the near term.

Professors Kevin Anderson and Alice Larkin of the Tyndall Centre have been trying to shake us out of our denial for a long time now. The essential argument is that some solutions are immediately implementable, while others are some way off, and others so far off that they are not relevant to the time frame we must consider (I heard a leader in Fusion Energy research on the BBC sincerely state his belief that it is the solution to climate change; seriously?).

The immediately implementable solution that no politician dares talk about is degrowth – less buying stuff, less travel, less waste, etc. All doable tomorrow, and since the top 10% of emitters globally are responsible for 50% of emissions (see Extreme Carbon Inequality, Oxfam), the quickest and easiest solution is for that 10% or let’s say 20%, to halve their emissions; and do so within a few years. It’s also the most ethical thing to do.

Anderson & Larkin’s credibility is enhanced by the fact that they practice what they advocate; for example, this approach to reducing the air miles associated with scientific conferences:

Screen Shot 2017-11-09 at 11.51.25

Some people in the high energy-consuming “West” have proven it can be done. Peter Kalmus, in his book Being the Change: Live Well and Spark a Climate Revolution, describes how he went from a not untypical US citizen responsible for 19 tonnes of carbon dioxide emissions per year to something like 1 tonne now; which is one fifth of the global average! It is all about what we do, how we do it, and how often we do it.

Anderson and Larkin have said that even just reaching half the European average, at least, would be a huge win: “If the top 10% of emitters were to reduce their emissions to the average for EU, that would mean a 33% [reduction] in global emissions” (Kevin Anderson, Paris, Climate & Surrealism: how numbers reveal another reality, Cambridge Climate Lecture Series, March 2017).

This approach – a large reduction in consumption (in all its forms) amongst high emitters in all countries, but principally the ‘west’ – could be implemented in the short term (the shorter the better but let’s say, by 2030). Let’s call these Phase 1 solutions.

The reason we love to debate and argue about renewables and intermittency and so on is that it really helps to distract us from the blinding simplicity of the degrowth solution.

It is not that a zero or low carbon infrastructure is not needed, but that the time to fully implement it is too long – even if we managed to do it in 30 years – to address the immediate issue of rising atmospheric greenhouse gases. This transition has already started, but from a low base, and will have a large impact in the medium term (by 2050). Let’s call these Phase 2 solutions.

Project Drawdown provides many solutions relevant to both Phase 1 and 2.

And as for the discussion that started this essay: artificial carbon sequestration methods, such as BECCS and several others (explored in Atmosphere of Hope by Tim Flannery), will be needed, but it is again about timing. These solutions will be national, regional and international initiatives, and are mostly unproven at present; they live in the longer term, beyond 2050. Let’s call these Phase 3 solutions.

I am not here wanting to get into geo-engineering solutions, a potential Phase 4. A Phase 4 is predicated on Phases 1 to 3 failing or failing to provide sufficient relief. However, I think we would have to accept that if, and I personally believe only if, there was some very rude shock (an unexpected burp of methane from the Arctic, and signs of a catastrophic feedback), leading to an imminent > 3C rise in global average temperature (as a possible red-line), then some form of geo-engineering would be required as a solution of last resort. But for now, we are not in that place. It is a matter for some feasibility studies but not policy and action. We need to implement Phase 1, 2 and 3 – all of which will be required – with the aim of avoiding a Phase 4.

I have illustrated the three phases in the figure which follows (Adapted from Going beyond dangerous climate change: does Paris lock out 2°C? Professors Kevin Anderson & Alice Bows-Larkin, Tyndall Centre – presentation to School of Mechanical Aerospace & Civil Engineering University of Manchester February 2016, Douglas, Isle of Man).

My adapted figure is obviously a simplification, but we need some easily digestible figures to help grapple with this complex subject; and apologies in advance to Anderson & Larkin if I have taken liberties with my colourful additions and annotations to their graphic (while trying to remain true to its intent).

Screen Shot 2017-11-09 at 12.19.57

A version of this slide on Twitter (@EssaysConcern) seemed to resonate with some people, as a stark presentation of our situation.

For me, it is actually a rather hopeful image, if you believe, as I do, in the capacity of people to work together to solve problems, which we so often see in times of crisis; and this is a crisis, make no mistake.

While the climate inactivists promote a fear of big Government controlling our lives, the irony here is that Phase 1 is all about individuals and communities, and we can do this with or without Government support. Phase 2 could certainly do with some help in the form of enabling legislation (such as a price on carbon), but it does not have to be top-down solutions, although some are (industrial-scale energy storage). Only when we get to Phase 3 are we seeing national solutions dominating, and then only because we have an international consensus to execute these major projects; that won’t be big government, it will be responsible government.

The message of Phases 1 and 2 is … don’t blame the conservatives, don’t blame the loss of feed-in tariffs, or … just do it! They can’t stop you!

They can’t force you to boil a full kettle when you only need one mug of tea. They can’t force you to drive to the smoke, when the train will do. They can’t force you to buy new stuff that can be repaired at a cafe.

And if your community wants a renewable energy scheme, then progressives and conservatives can find common cause, despite their other differences. Who doesn’t want greater community control of their energy, to compete with monopolistic utilities?

I think the picture contains a lot of hope, because it puts you, and me, back in charge. And it sends a message to our political leaders, that we want this high on the agenda.

(c) Richard W. Erskine, 2017




Filed under Essay, Global Warming Solutions

Incredulity, Credulity and the Carbon Cycle

Incredulity, in the face of startling claims, is a natural human reaction and is right and proper.

When I first heard the news about the detection on 14th September 2015 of the gravitational waves from two colliding black holes by the LIGO observatories I was incredulous. Not because I had any reason to disagree with the predictions of Albert Einstein that such waves should exist, rather it was my incredulity that humans had managed to detect such a small change in space-time, much smaller than the size of a proton.

How, I pondered, was the ‘noise’ from random vibrations filtered out? I had to do some studying, and discovered the amazing engineering feats used to isolate this noise.

What is not right and proper is to claim that personal incredulity equates to an error in the claims made. If I perpetuate my incredulity by failing to ask any questions, then it’s I who is culpable.

And if I were to ask questions then simply ignore the answers, and keep repeating my incredulity, who is to blame? If the answers have been sufficient to satisfy everyone skilled in the relevant art, how can a non expert claim to dispute this?

Incredulity is a favoured tactic of many who dispute scientific findings in many areas, and global warming is not immune from the clinically incredulous.

The sadly departed Professor David Mackay gives an example in his book Sustainable Energy Without the Hot Air (available online):

The burning of fossil fuels is the principal reason why CO2 concentrations have gone up. This is a fact, but, hang on: I hear a persistent buzzing noise coming from a bunch of climate-change inactivists. What are they saying? Here’s Dominic Lawson, a columnist from the Independent:  

“The burning of fossil fuels sends about seven gigatons of CO2 per year into the atmosphere, which sounds like a lot. Yet the biosphere and the oceans send about 1900 gigatons and 36000 gigatons of CO2 per year into the atmosphere – … one reason why some of us are sceptical about the emphasis put on the role of human fuel-burning in the greenhouse gas effect. Reducing man-made CO2 emissions is megalomania, exaggerating man’s significance. Politicians can’t change the weather.”

Now I have a lot of time for scepticism, and not everything that sceptics say is a crock of manure – but irresponsible journalism like Dominic Lawson’s deserves a good flushing.

MacKay goes on to explain Lawson's error:

The first problem with Lawson’s offering is that all three numbers that he mentions (seven, 1900, and 36000) are wrong! The correct numbers are 26, 440, and 330. Leaving these errors to one side, let’s address Lawson’s main point, the relative smallness of man-made emissions. Yes, natural flows of CO2 are larger than the additional flow we switched on 200 years ago when we started burning fossil fuels in earnest. But it is terribly misleading to quantify only the large natural flows into the atmosphere, failing to mention the almost exactly equal flows out of the atmosphere back into the biosphere and the oceans. The point is that these natural flows in and out of the atmosphere have been almost exactly in balance for millennia. So it’s not relevant at all that these natural flows are larger than human emissions. The natural flows cancelled themselves out. So the natural flows, large though they were, left the concentration of CO2 in the atmosphere and ocean constant, over the last few thousand years.

Burning fossil fuels, in contrast, creates a new flow of carbon that, though small, is not cancelled.
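MacKay's point is essentially one of arithmetic: the huge natural flows into and out of the atmosphere nearly cancel, so only the comparatively small, uncancelled human flow changes the balance. A toy calculation, using the corrected figures he quotes (in gigatonnes of CO2 per year), and assuming for simplicity that the natural outflows exactly balance the natural inflows (MacKay says they are almost, not exactly, equal), makes this concrete:

```python
# Annual CO2 flows in Gt CO2/year, using the corrected figures MacKay gives.
natural_into_atmosphere = 440 + 330    # biosphere (440) + oceans (330)
natural_out_of_atmosphere = 440 + 330  # assumed equal return flows out
human_emissions = 26                   # fossil-fuel burning: no matching outflow

net_annual_change = (natural_into_atmosphere + human_emissions
                     - natural_out_of_atmosphere)
print(net_annual_change)  # 26: only the human flow accumulates
```

Lawson's comparison of 26 against 440 and 330 looks at only one side of the ledger; the net change is what matters.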

I offer this example in some detail as an exemplar of the problem often faced in confronting incredulity.

It is natural that people often struggle with numbers, especially large, abstract-sounding numbers, and it is easy to get confused when trying to interpret them. It does not help that, in Dominic Lawson's case, he is ideologically primed to see a 'gotcha' where none exists.

Incredulity such as Lawson's is perfectly OK when initially confronting a claim one is sceptical of; we cannot all be informed on every topic. But why not then pick up the phone, or email a professor skilled in the particular art, and ask them to sort out your confusion? Or even read a book, or browse the internet? But of course Dominic Lawson, like so many others, suffers from a syndrome that many have identified. Charles Darwin noted in The Descent of Man:

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

It is this failure to display any intellectual curiosity which is unforgivable in those in positions of influence, such as journalists or politicians.

However, incredulity has a twin, its mirror image: credulity. And I want to take an example that also involves the carbon cycle.

In a politically charged subject, or one where there is a topic close to one’s heart, it is very easy to uncritically accept a piece of evidence or argument. To be, in the technical sense, a victim of confirmation bias.

I have been a vegetarian since 1977, and I like the idea of organic farming, preferably local and fresh. So I have been reading Graham Harvey’s book Grass Fed Nation. I have had the pleasure of meeting Graham, as he was presenting a play he had written which was performed in Stroud. He is a passionate and sincere advocate for his ideas on regenerative farming, and I am sure that much of what he says makes sense to farmers.

The recently reported research from Germany of a 75% decline in insect numbers is deeply worrying, and many are pointing the finger at modern farming and land-use methods.

However, I found something in amongst Harvey’s interesting book that made me incredulous, on the question of carbon.

Harvey presents the argument that, firstly, we can do little to reduce carbon emissions from industry and the like; but that, secondly, there is no need to worry, because soils can take up all the annual emissions with ease; and further, that all of the extra carbon emitted in the industrial era could be absorbed by soils over the coming years.

He relies a lot on the work of Allan Savory, famed for his visionary but contentious TED talk. But he also references other work that makes similar claims.

I would be lying if I said there was not a part of me that wanted this to be true. I was willing it on. But I couldn’t stop myself … I just had to track down the evidence. Being an ex-scientist, I always like to go back to the source, and find a paper, or failing that (because of paywalls), a trusted source that summarises the literature.

Talk about party pooper, but I cannot find any such credible evidence for Harvey’s claim.

I think the error in Harvey’s thinking is to confuse the equilibrium capacity of the soils with their ability to take up more, every year, for decades.

I think it is also an inability to deal with numbers. If you multiply A, B and C together, but take the highest possible value for each, you can easily reach a result that is hugely in error. Overestimate the realistic land area that can be addressed; and the carbon dioxide sequestration rate; and the time until saturation/equilibrium is reached … and it is quite easy to overestimate the product of these by a factor of 100 or more.
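To illustrate the compounding (with purely made-up numbers): suppose each of the three factors is taken at five times a realistic value. Individually each looks like a modest exaggeration; multiplied together, they inflate the answer by more than a factor of 100.

```python
# Hypothetical illustration of compounding optimistic estimates.
# A = land fraction addressed, B = sequestration rate, C = years to saturation.
realistic = (0.1, 1.0, 10)    # arbitrary units
optimistic = (0.5, 5.0, 50)   # each factor only 5x the realistic value

def product(factors):
    result = 1.0
    for f in factors:
        result *= f
    return result

overestimate = product(optimistic) / product(realistic)
print(overestimate)  # 5 * 5 * 5 = 125.0: an error of more than a factor of 100
```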

Savory is suggesting that over a period of three or four decades you can draw down the whole of the anthropogenic amount that has accumulated (nearly 2000 gigatonnes of carbon dioxide), whereas a realistic assessment suggests that a figure of around 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020–2050 timeframe.

There are many complex processes at work in the whole carbon cycle – the biological, chemical and geological processes covering every kind of cycle, with flows of carbon into and out of the carbon sinks. Despite this complexity, and despite the large flows of carbon (as we saw in the Lawson case), atmospheric levels had remained stable for a long time in the pre-industrial era (at 280 parts per million).  The Earth system as a whole was in equilibrium.

The deep ocean is by far the greatest carbon reservoir, so a 'plausibility argument' could go along these lines: the upper ocean will absorb the extra CO2 and then pass it to the deep ocean. Problem solved! But this hope was dashed by Revelle and others in the 1950s, when it was shown that the exchange between the upper and deep ocean is really quite slow.

I always come back to the Keeling Curve, which reveals an inexorable rise in CO2 concentrations in the atmosphere since 1958 (and we can extend the curve further back using ice core data). And the additional CO2 humans have been putting into the atmosphere since the start of the industrial revolution (mid-19th century, let us say) was not, as far as I can see, magically soaked up by soils in the days before industrial farming – the period up to the mid-20th century, when traditional farming methods presumably prevailed.

FCRN explored Savory's methods and claims, and found that despite decades of trying, he has not demonstrated that his methods work. Savory's case is very weak, and he ends up (in his exchanges with FCRN) almost discounting science, saying his methods are not susceptible to scientific investigation. A nice cop-out there.

In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn't hold up either. Track down Gattinger et al's work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4 GtC per year – nowhere near annual anthropogenic emissions of about 10 GtC. And if soils cannot even keep up with annual emissions, there is no prospect of them soaking up the many decades of historical emissions (the roughly 50% of these that persists for a very long time in the atmosphere).
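A back-of-envelope check using the figures above (rough, round numbers) shows the scale of the shortfall:

```python
soil_uptake = 0.4        # GtC/year: Gattinger et al's average potential uptake
annual_emissions = 10.0  # GtC/year: approximate anthropogenic emissions

fraction_offset = soil_uptake / annual_emissions
print(f"Soils could offset about {fraction_offset:.0%} of annual emissions")
```

So even on its own average figures, soil uptake addresses only a few percent of each year's emissions, never mind the accumulated historical excess.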

It is interesting what we see here.

An example of 'incredulity' from Lawson, who confuses gross carbon flows with the net carbon flow, and an example of 'credulity' from Harvey, who puts too much stock in the equilibrium capacity of carbon in the soil and assumes this means soils can keep soaking up carbon almost without limit. Both seem to struggle with basic arithmetic.

Incredulity is a good initial response to startling claims, but it should be the starting point for engaging one's intellectual curiosity, not a perpetual excuse for confirming one's bias; a kind of obdurate ignorance.

And neither should hopes invested in the future be a reason for credulous acceptance of claims, however plausible they seem at face value.

It’s boring I know – not letting either one’s hopes or prejudices hold sway – but maths, logic and scientific evidence are the true friends here.

Maths is a great leveller.


(c) Richard W. Erskine, 2017


Filed under Climate Science, Essay, Uncategorized

JFK Conspiracy Story: Another Science Fail by BBC News

It seems only yesterday that the BBC was having to apologise for not challenging the scientifically illiterate rants of Lord Lawson … oh, but it was yesterday!

So how delightful to see another example of BBC journalism that demonstrates the woeful inability of journalists to report science accurately, or at least, to use well informed counter arguments when confronted with bullshit.

A story by Owen Amos on the BBC website (US & Canada section), with the clickbait title "JFK assassination: Questions that won't go away", is a grossly ill-informed piece, repeating ignorant conspiracy theories from Jefferson Morley (amongst others) without any challenge (BBC's emphasis):

“Look at the Zapruder film,” says Morley. “Kennedy’s head goes flying backwards.

I know there’s a theory that if you get hit by a bullet from behind, the head goes towards the source of the bullet.

But as a common sense explanation, it seems very unlikely. That sure looks like a shot from the front.” 

That’s it then, common sense.

Case settled.

If it’s good enough for Oliver Stone and Jefferson Morley, who are we to argue?

But wait a minute!

The theory in question, if Morley is really interested, is the three-centuries-old theory called Newtonian mechanics (reference: "Philosophiæ Naturalis Principia Mathematica", Isaac Newton, 1687).

Are we to cast that aside and instead listen to a career conspiracy theorist?

You can if you must, but the BBC shouldn’t be peddling such tripe.

As Luis Alvarez, the Nobel Laureate, pointed out long ago, the head MUST kick back in order to conserve both momentum and energy. You need a picture?


[I have not included the maths, but it is high school maths, trust me, you don’t need a Nobel Prize to do the calculation]
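Alvarez's actual analysis is more detailed, but the core of his 'jet effect' argument can be sketched as a simple momentum balance. Every number below is made up purely for illustration (not forensic data): the bullet's kinetic energy is large compared with its momentum, so it can eject a jet of material forward through the exit wound carrying more forward momentum than the bullet brought in; to conserve total momentum, the remaining head must then recoil backward, toward the shooter.

```python
# Illustrative momentum balance for Alvarez's 'jet effect' argument.
# All numbers are hypothetical; momentum in kg*m/s, forward = positive.
m_bullet, v_bullet = 0.010, 600.0   # 10 g bullet at 600 m/s from behind
p_in = m_bullet * v_bullet          # forward momentum entering: 6.0 kg*m/s

m_jet, v_jet = 0.100, 100.0         # 100 g of material ejected forward
p_jet = m_jet * v_jet               # forward momentum leaving: 10.0 kg*m/s

m_head = 5.0                        # remaining head mass in kg
v_head = (p_in - p_jet) / m_head    # conservation: p_in = p_jet + m_head*v_head
print(v_head)  # negative, i.e. backward (toward the shooter): -0.8 m/s
```

The point is not the particular numbers but the sign: a rear shot with a forward jet is entirely consistent with a backward head kick, with no second gunman required.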

Morley would get a Nobel Prize if he disproved it. He hasn’t and won’t.

It seems that Morley has been doing the rounds in the media, and there is no problem finding gullible victims.

You might like to look at the Penn & Teller video of 2006 which demonstrates the physics in practice (with a melon), for the Newtonian sceptics like Morley.

Amos/BBC is gullible in uncritically replaying this nonsense, without mentioning Alvarez. Amos could have said something like

“this rationale (the head kick-back) for a second gunman is completely unfounded, as it flies in the face of basic Newtonian mechanics … see this video”

Unfortunately this fails the clickbait test for irresponsible journalism, which requires ‘debate’ by idiots in response to experts. It’s balanced reporting after all.

Why are journalists so incapable of understanding 300-year-old basic physics, or so ready to carelessly cast it aside? The same physics, by the way, that helps us design airplanes that fly, and that is a major pillar of climate science too (the science that so persistently eludes Lord Lawson).

I am waiting patiently for another BBC apology for crimes against scientific literacy and an inability to ask searching, informed questions of peddlers of bullshit, be they Lawson or Morley.

(c) Richard W. Erskine, 2017.


Filed under Missive, Science Communications

Trust, Truth and the Assassination of Daphne Caruana Galizia 

How far do we go back to find examples of investigations of injustice or the abuse of power?

Maybe Roger Casement's revelations on the horrors of King Leopold's Congo, and on the abuses of Peruvian Indians, were heroic examples, for which he received a knighthood – even if, later, his support for Irish independence earned him the noose.

Watergate was clearly not the first time that investigative journalism fired the public imagination, but it must be a high point, at least in the US, for the power of the principled and relentless pursuit of the truth by Bob Woodward and Carl Bernstein.

And then I call to mind the great days of the Sunday Times’ ‘Insight’ team that conducted many investigations. I recall the brilliant Brian Deer, who wrote for The Times and Sunday Times, and revealed the story behind Wakefield’s fake science on MMR, even while other journalists were shamelessly helping to propagate the discredited non-science.

But those days seem long ago now.

Today, you are just as likely to find The Times, The Daily Telegraph, the Daily Mail and The Spectator – desperate to satisfy their ageing and conservative readerships, or in need of clickbait advertising revenue – regurgitating bullshit, including the anti-expert nonsense that fills the blogosphere. This nonsense has been called out many times, such as in Climate Feedback.

Despite Michael Gove's assertion that "Britain has had enough of experts", the Ipsos MORI Veracity Index of 2016 suggests differently: nurses, doctors, lawyers and scientists are in the upper quartile of trust, whereas journalists, estate agents and politicians lurk in the lower quartile.

No wonder the right-wingers who own or write for the organs of conservatism are so keen to attack those in the upper quartile, and to claim there is a crisis of trust. This is displacement activity by politicians and journalists: claiming a crisis of trust in others to deflect it from themselves. The public are not fooled.

It is deeply cynical and pernicious to play this game of undermining evidence and institutions.

As Hannah Arendt said in The Origins of Totalitarianism:

“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.”

But investigative journalism is not dead.

In Russia there are many brave journalists who expose corruption and the abuse of power, and they have paid with their lives: 165 murdered since 1993, with about 50% of these since Putin came to power. He didn’t start the killing, but then, he didn’t stop it either.

The nexus of political, business and mafia-style corruption makes it easy for the leadership to shrug off responsibility.

And so we come to Malta, where the same nexus exists. Daphne Caruana Galizia had been exposing corruption for so long that there was no shortage of enemies, including the politicians and police that failed to protect her. Her assassination is a scar on Malta that will take a long time to heal.

The EU has produced anodyne reports on partnership with Malta, and programmes continue despite a breakdown in the rule of law and governance that has provided a haven for nepotism and racketeering. Is Malta really so different from Russia in this regard?

Is the EU able to defend the principles it espouses, and sanction those who fail to live up to them?

The purveyors of false news detest brave investigative journalists as much as they love to attack those like scientists who present evidence that challenges their interests. Strong institutions are needed to defend society against these attacks.

Remainers like myself defend the EU on many counts, but we also expect leadership when that is needed, not merely the wringing of hands.

(c) Richard W. Erskine, 2017.


Filed under Uncategorized

America’s Gun Psychosis

This was originally written on 2nd October 2017, following the Las Vegas shooting in which Stephen Paddock murdered 58 people and injured 851 more. The latest mass shooting (a phrase that will become out of date almost before the ink is dry) took place at Florida's Marjory Stoneman Douglas High School; it is the 17th school shooting in the USA in the first 45 days of 2018. I have not made any changes to the essay below (because this is, tragically, the same psychosis), but have added Venn diagrams to visualize the issue of mental health and guns. Mental health is not the issue here. The issue is people with homicidal tendencies (many of whom will indeed have mental problems) having easy access to guns. We should not stigmatise the growing number of people with mental health problems. We should reduce access to guns.

If ever one needed proof of the broken state of US politics, the failure to deal with this perpetual gun crisis is it.

After 16 children and 1 teacher were killed in the Dunblane massacre on 13th March 1996, the UK acted.

After 35 people were killed in the Port Arthur massacre on 28th April 1996, Australia acted.

It’s what any responsible legislature would do.

So far in 2017, US deaths from shootings total a staggering 11,652 (not, I think, including the latest mass shooting in Las Vegas, and with 3 months still to run in 2017 – see gunviolencearchive – and note this excludes suicides).

The totals for the previous 3 years 2014, 2015 and 2016 are 12,571; 13,500; and 15,079.

The number of those injured comes in at about twice the number killed (though note that the ratio for the latest Las Vegas shooting is closer to 10; the latest Associated Press report at the time of writing gives 58 people dead and 515 injured).

One cannot imagine the huge number of those scarred by these deaths and injuries – survivors, close families, friends, colleagues, classmates, first-responders, relatives at home and abroad. Who indeed has not been impacted by these shootings, in the US and even abroad?

I write as someone with many relatives and friends in America, and having owed my living to great American companies for much of my career. But I am also someone whose family has been touched by this never-ending obsession that America has with guns.

And still Congress and Presidents seem incapable of standing up to the gun lobby and acting.

The US, far from acting, loosens further the access to guns or controls on them.

This is a national psychosis, and an AWOL legislature.

In both the UK and Australian examples, it was actually conservative administrations that brought in the necessary legislation, so the idea that only ‘liberals’ are interested in reducing the number and severity of shootings, by introducing gun control, is simply wrong. This should not be a party political issue.

In the US some will argue against gun control, saying that a determined criminal or madman can always get hold of a gun. This is a logical fallacy: making the best the enemy of the good. Just because an action is not guaranteed to be 100% effective is no reason not to take an action that could be effective – and, in the case of the UK and Australia, was very effective. Do we withhold chemotherapy from cancer patients because it is not guaranteed to save every one of them? Of course not. But this is just one of the many specious arguments used by the gun lobby in the USA to defend the indefensible.

But at its root there is, of course, a deeply polarised political system in the USA. The inability to confront the gun crisis stems from the same grid-locked polarisation that is preventing the US from dealing with healthcare, the justice system, endemic racism, and indeed climate change.

How will America – a country that has given so much to the world – overcome this debilitating polarization in the body politic?

America needs a Mandela – a visionary leader able to bring people together to have a rational, evidence-based conversation – but none is in sight.

It’s enough to make one weep.

The 3 branches of the US Government ought to be ashamed, but expect more platitudinous ‘thoughts and prayers’ … the alternative to them doing their job.

Trump is now praying for the day when evil is banished, for god’s sake! An easy but totally ineffective substitute for actually doing anything practical to stem the carnage, and protect US citizens.

Some pictures added 16th February 2018 to illustrate the problem facing the USA …

[Two Venn diagrams added here, 16th February 2018]


Filed under Gun violence, Politics, Uncategorized

BBC Science Reporting: Evidence, Values and Pollability

In his Harveian Oration to the Royal College of Physicians on 15th October 2015, Professor Sir Mark Walport made the following observation:

“My PhD supervisor, Sir Peter Lachmann, has framed the distinction between the subjective and the objective in a different way, by considering whether questions are ‘pollable’ or ‘non- pollable’; that is, whether a question can be answered in principle by a vote (a pollable question), or whether the question has a right answer that is independent of individual preferences and opinions (a non-pollable question). This distinction can be easily illustrated by a couple of examples. It is a non-pollable question as to whether there is an anthropogenic contribution to climate change. There is a correct answer to this question and your opinion or mine is ultimately irrelevant. The fact that there may be uncertainties about the scale and the nature of the contribution does not change the basic nature of the question. In contrast, it is a pollable question as to whether nuclear energy is an acceptable solution to providing low-carbon power, and I will return to this later.”

The question presents itself: does the BBC understand the distinction between pollable and non-pollable questions related to science?

BBC Radio 4’s Today programme on Tuesday 12th September included two discussions on the nature of science reporting and how it has changed over the years, particularly at the BBC.

The first was with Steve Jones, Emeritus Professor of Human Genetics at University College London, who led a review of the way the BBC itself reports science, on the changing nature of science reporting; the second was with Richard Dawkins, Professor of Evolutionary Biology, and David Willetts, a former science minister, considering the "public's evolving relationship with science, evidence and truth".

Subsequent to this I wrote a letter to the Today team at the BBC, which I am now sharing on my blog. It is reproduced below:

Dear Sir/ Madam

I wanted to thank the BBC Today team for two excellent discussions that John Humphrys had, first with Prof. Steve Jones, and then with David Willetts and Richard Dawkins.

John Humphrys posed the challenge to Prof. Jones as to why we should 'believe' climate change; I am paraphrasing his words:

A. The world is warming

B. This warming is man made, and

C. There is only one way of stopping it.

This was an alarming way to approach the topic, for two reasons.

Firstly, the science – and, by virtue of that statement, scientists – unequivocally answers A and B with a resounding 'Yes'. There is an aggregation of scientific evidence and analysis going back at least to John Tyndall in the mid-19th century that brought us – no later than the 1980s, in fact – to a consilience of science on these questions. I discuss this history and the nature of 'consilience' in an essay, here:

To question this is on the same level as questioning whether cigarettes cause lung cancer. There is no debate to be had here. Yes, debate how to get teenagers to stop taking up smoking – but that's a different question. To say that everyone can have an opinion, and to set up a controversial 'debate' on these questions, is the "false balance" Professor Jones identified in the report he did for the BBC. Representing opinions is not a licence to misrepresent the evidence by using 'false balance' in this way.

Secondly, however, scientists do NOT speak with one voice on how to stop it, as John Humphrys phrased his C question. That is why the UNFCCC takes up the questions here, which require policy input and, yes, the input of 'values'. Whilst the A and B questions are not ones where it is appropriate to bring values to bear on the answers, solutions are full of value-based inputs. So the C question John Humphrys should be opening a dialogue on is this:

C(amended): There are many solutions that can contribute to addressing the given man-made global warming – either by mitigation or adaptation – which ones do you advocate and why?

And of course many subsidiary questions arise when debating these solutions:

  • Are we too late to prevent dangerous climate change, therefore need a massive reduction in consumption – a degrowth strategy?
  • Can we solve this with a kind of Marshall Plan to decarbonise our energy supply, but also heat buildings and transport, through electrification?
  • What role does nuclear energy play?
  • Given the long time that excess carbon dioxide levels remain in the atmosphere, and the legacy of the developed world's emissions, how can the developing world receive carbon justice?
  • Even if we decarbonised everything tomorrow, what solutions are feasible for reducing the raised levels of carbon dioxide in the atmosphere; what degree of sea-level rise are we prepared to tolerate, ‘baked in’ already to the Earth system?
  • Is a carbon tax ultimately the only way forward, and what price do we put on carbon?
  • … and so on.

Yes, science can help answer these kinds of questions, but values play a large part too.

The fact the BBC still gets stuck in the groove of ‘debating’ A and B, is I think woeful. As woeful as ‘debating’ if smoking causes cancer.

I think David Willetts acknowledged the difference in these classes of question, whereas Richard Dawkins was disappointingly black and white; not recognising the role of values in the C(amended) class of questions.

David Willetts made the interesting point that in social science, there is often greater difficulty in getting to the truth, and this is highly problematic for politicians, but that for the physical sciences, if we’ve discovered the Higgs Boson, it is much clearer.  He made a lot of the need to bring values to bear on decisions and ‘not being able to wait for yet another report’. However, there is a qualitative difference with climate change: it requires long term strategic thinking and it is a challenge to the normal, national political cycles.

On the question of Lord Lawson: by all means invite him to discuss the economics of decarbonising the economy. But the last time he was invited on – more or less to do just this – in a discussion with Justin Webb, he was asked to comment on Al Gore's statement that we need to push ahead with the solutions already available to us. To move on, in other words.

Instead of answering this question, Lord Lawson tried to poke holes in the unequivocal science (A and B) rather than addressing C; he has no intention of moving on. He lost, and seems quite bitter about it, going on to make personal attacks on Al Gore. While the interviewer cannot stop Lord Lawson from saying these things, he should be called out on them.

“I am not a scientist” is a statement US Republican Congressmen use to avoid confronting the fact that A and B are true, and not up for debate. John Humphrys should not be using the same statement (but he did on this episode).

If climate change is "the big one", as he himself noted, surely it is time he made the effort to educate himself to the point where he understands why the answers to A and B are unequivocally "Yes", in the same way that "Does smoking cause lung cancer?" has an unequivocal "Yes" answer. There is no shortage of scientists at the Met Office, Cambridge, Oxford, UCL and elsewhere who I am sure would be happy to help him out here.

Today was a good discussion – even a great step forward – but the BBC is still failing in its public service duty, on this topic of global warming.

Kind regards,

Richard Erskine

What seems clear to me is that John Humphrys is not alone amongst journalists in failing to distinguish between non-pollable questions (where evidence accumulated over many years holds sway, and values have no place) and pollable questions (where values can have as big a part to play as the science).

It is about time they started.

o o O o o


Filed under Uncategorized