Communicating Key Figures from IPCC Reports to a Wider Public

If you were to rank the most important Figures from the IPCC Fifth Assessment Report, I would not be surprised if the following one (SPM.10) emerged as a strong candidate for the number one slot:

IPCC AR5 Figure SPM.10

This is how the Figure appears in the main report, on page 28 (in the Summary for Policymakers) of The Physical Basis Report (see References: IPCC, 2013). The Synthesis Report includes a similar figure with additional annotations.

Many have used it in talks because of its fundamental importance (for example, Sir David King in his Walker Institute Annual Lecture (10th June 2015), ahead of COP21 in Paris). I have followed this lead, and am sure that I am not alone.

This Figure shows an approximately linear[1] relationship between the cumulative carbon dioxide we emit[2], and the rise in global average surface temperature[3] up to 2100. It was crucial to discussions on carbon budgets held in Paris and the goal of stabilising the climate.

I am not proposing animating this Figure in the way discussed in my previous essay, but I do think its importance warrants additional attention to get it out there to a wider audience (beyond the usual climate geeks!).

So my question is:

“Does it warrant some kind of pedagogic treatment for a general audience (and dare I say, for policy-makers who may themselves struggle with the density of information conveyed)?”

My answer is yes, and I believe that the IPCC, as guardians of the integrity of the report findings, are best placed to lead such an effort, albeit supported by specialists in science communication.

The IPCC should not leave it to bloggers and other commentators to furnish such content, as key Figures such as this are fundamental to the report’s findings, and need to be as widely understood as possible.

While I am conscious of Tufte’s wariness regarding Powerpoint, I think that the ‘build’ technique – when used well – can be extremely useful in unfolding the information, in biteable chunks. This is what I have tried to do with the above Figure in a recent talk. I thought I would share my draft attempt.

It could obviously do with more work, and the annotations represent my emphasis and use of language[4]. Nevertheless, I believe I was able to truthfully convey the key information from the original IPCC Figure more successfully than I have before; taking the audience with me, rather than scaring them off.

So here goes, taken from a segment of my talk … my narrative, to accompany the ‘builds’, is in italics …

Where are we now?

“There is a key question: what is the relationship between the peak atmospheric concentration and the level of warming, compared to a late 19th century baseline, that will result, by the end of the 21st century?”

“Let’s start with seeing where we are now, which is marked by an X in the Figure below.”

Unpacking SYR2.3 - Build 1

“Our cumulative man-made emissions of carbon dioxide (CO2) have to date been nearly 2000 billion tonnes (top scale above)”

“About 50% of this remains in the atmosphere, which has raised the atmospheric concentration from its long-standing pre-industrial value of 280 parts per million to its current value of about 400 parts per million (bottom scale above).”

“This in turn has led to an increase in the globally averaged surface temperature of 1°C above the baseline of 1861 to 1880 (vertical scale above).”
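As an aside from the talk narrative: here is a minimal back-of-envelope sketch (in Python) of how the three numbers in Build 1 hang together, using the rounded values quoted above and an assumed, illustrative linear warming-per-tonne (TCRE-style) factor. It is not the IPCC's own calculation.

    # Back-of-envelope sketch (illustrative, rounded values; not the IPCC's calculation)
    # linking cumulative CO2 emissions to concentration and warming, as in Build 1 above.

    GT_CO2_PER_PPM = 7.81       # approx. 7.81 GtCO2 corresponds to 1 ppm of atmospheric CO2
    AIRBORNE_FRACTION = 0.5     # roughly half of emitted CO2 stays in the atmosphere
    TCRE = 0.5                  # assumed illustrative value: deg C per 1000 GtCO2 emitted

    cumulative_emissions = 2000.0                      # GtCO2 emitted to date (approx.)
    added_ppm = cumulative_emissions * AIRBORNE_FRACTION / GT_CO2_PER_PPM
    concentration = 280.0 + added_ppm                  # pre-industrial baseline plus added CO2
    warming = cumulative_emissions * TCRE / 1000.0     # linear (TCRE-style) estimate

    print(f"Atmospheric concentration: ~{concentration:.0f} ppm")        # ~408 ppm
    print(f"Warming above the 1861-1880 baseline: ~{warming:.1f} C")     # ~1.0 C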

Where might we be in 2100?

“As we add additional carbon dioxide, the temperature will rise broadly in proportion to the increased concentration in the atmosphere. There is some uncertainty between “best case” and “worst case” margins of error (shown by the dashed lines).” 

Unpacking SYR2.3 - Build 2

“By the end of the century, depending on how much we emit and allowing for uncertainties, we can end up anywhere within the grey area shown here. The question marks (“?”) illustrate where we might be by 2100.”

Can we stay below 2C?

“The most optimistic scenario included in the IPCC’s Fifth Assessment Report (AR5) was based on the assumption of a rapid reduction in emissions, and a growing role for the artificial capture of carbon dioxide from the atmosphere (using a technology called BECCS – bioenergy with carbon capture and storage).”

Unpacking SYR2.3 - Build 3

“This optimistic scenario would meet the target agreed by the nations in Paris, which is to limit the temperature rise to 2°C.”

“We effectively have a ‘carbon budget’: an amount of fossil fuels that can be burned if we are to stay below 2°C.”

“The longer we delay dramatically reducing emissions, the faster our emissions would need to drop later, as we approach the end of the ‘carbon budget’.”

“Some argue that we are already beyond the point where we can realistically move fast enough to make this transition.” 

“Generally, experts agree it is extremely challenging, but still not impossible.”
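To make the idea of a budget concrete, here is a minimal sketch of the arithmetic. The remaining-budget figure is an assumption for illustration (roughly the order of magnitude suggested by AR5 for a likely chance of staying below 2°C once non-CO2 forcings are allowed for), not a precise IPCC number.

    # Minimal sketch of the 'carbon budget' arithmetic (illustrative numbers only).
    remaining_budget = 1000.0   # GtCO2 assumed to remain for a likely chance of staying below 2 C
    annual_emissions = 40.0     # GtCO2 per year, roughly current levels

    years_at_current_rate = remaining_budget / annual_emissions
    print(f"Budget exhausted in ~{years_at_current_rate:.0f} years at current emissions")   # ~25 years

    # If emissions instead decline linearly to zero, the same budget lasts twice as long,
    # which is why delaying the start of reductions forces a much steeper drop later.
    years_with_linear_decline = 2 * remaining_budget / annual_emissions
    print(f"Or ~{years_with_linear_decline:.0f} years if emissions fall linearly to zero")  # ~50 years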

Where will we be in 2100?  – Paris Commitments

“The nationally determined contributions (or NDCs) – the amounts by which carbon dioxide emissions will fall – that the parties to the Paris Agreement put forward have been totted up and they would, if implemented fully, bring us to a temperature rise of between 2.5°C and 3.5°C (and an atmospheric concentration about twice that of pre-industrial levels).”

Unpacking SYR2.3 - Build 4

 “Now, the nations are committed to increase their ‘ambition’, so we expect that NDCs should get better, but it is deeply concerning that at present, the nations’ current targets are (1) not keeping us unambiguously clear of catastrophe, and (2) struggling to be met. More ambition, and crucially more achievement, is urgent.”

“I have indicated the orange scenarios as “globally severe”, but for many regions “catastrophic” (some, for example Xu and Ramanathan[5], would use the term “catastrophic” for any warming over 3°C, and “unknown” for warming above 5°C). The IPCC is much more conservative in the language it uses.”

Where will we be in 2100? – Business As Usual Scenario

“The so-called ‘business as usual’ scenario represents on-going use of fossil fuels, continuing to meet the majority of our energy needs, in a world with an increasing population and increasing GDP per capita, and consequently a continuing growth in CO2 emissions.”

Unpacking SYR2.3 - Build 5

“This takes global warming to an exceptionally bad place, with a (globally averaged) temperature rise of between 4°C and 6°C, where atmospheric concentrations will have risen to between 2.5 and 3 times the pre-industrial levels.”

“The red indicates that this is globally catastrophic.”

“If we go above 5°C warming we move, according to Xu and Ramanathan, from a “catastrophic” regime to an “unknown” one. I have not tried to indicate this extended vocabulary on the diagram, but what is clear is that the ‘business as usual’ scenario is really not an option, if we are paying attention to what the science is telling us.”

That’s it. My draft attempt to convey the substance and importance of Figure SPM.10, which I have tried to do faithfully; albeit adding the adjectives “optimistic” etc. to characterise the scenarios.

I am sure the IPCC could do a much better job than me at providing a more accessible presentation of Figure SPM.10 and, indeed, of a number of high-ranking Figures from its reports that deserve and need a broader audience.

© Richard W. Erskine

Footnotes

  1. The linearity of this relationship was originally discussed in Myles Allen et al (2009), and this and other work has been incorporated in the IPCC reports. Also see Technical Note A below.
  2. About half of which remains in the atmosphere, for a very long time.
  3. Eventually, after the planet reaches a new equilibrium, a long time in the future. Also see Technical Note B below.
  4. There are different opinions about what language to use – ‘dangerous’, ‘catastrophic’, etc. – and at what levels of warming to apply this language. The IPCC is conservative in its use of language, as is customary in the scientific literature. Some would argue that in wanting to avoid the charge of being alarmist, it is in danger of obscuring the seriousness of the risks faced. In my graphics I have tried to remain reasonably conservative in the use of language, because I believe things are serious enough, even when a conservative approach is taken.
  5. Now, Elizabeth Kolbert has written in the New Yorker:

In a recent paper in the Proceedings of the National Academy of Sciences, two climate scientists—Yangyang Xu, of Texas A. & M., and Veerabhadran Ramanathan, of the Scripps Institution of Oceanography—proposed that warming greater than three degrees Celsius be designated as “catastrophic” and warming greater than five degrees as “unknown??” The “unknown??” designation, they wrote, comes “with the understanding that changes of this magnitude, not experienced in the last 20+ million years, pose existential threats to a majority of the population.”

References

  • IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
  • IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881pp.
  • Myles Allen et al (2009), “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163-1166.
  • Kirsten Zickfeld et al (2016), “On the proportionality between global temperature change and cumulative CO2 emissions during periods of net negative CO2 emissions”, Environ. Res. Lett. 11 055006

Technical Notes

A. Logarithmic relationship?

For those who know about the logarithmic relationship between added CO2 concentration and the ‘radiative forcing’ (giving rise to warming) – and many well meaning contrarians seem to take succour from this fact – the linear relationship in this figure may at first sight seem surprising.

The reason for the linearity is nicely explained by Marcin Popkiewicz in his piece “If growth of CO2 concentration causes only logarithmic temperature increase – why worry?”

The relative warming (between one level of emissions and another) is related to the ratio of this logarithmic function, and that is approximately linear over the concentration range of interest.

In any case, it is worth noting that CO2 concentrations have been increasing exponentially, and a logarithm of an exponential function is a linear function.
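A minimal sketch (in Python) of that point, using the standard simplified expression for CO2 forcing, F ≈ 5.35 ln(C/C0) W/m², and an exponential growth rate assumed purely for illustration:

    # Radiative forcing grows logarithmically with concentration, but if the concentration
    # grows exponentially in time, the forcing grows linearly in time.
    import numpy as np

    C0 = 280.0                              # pre-industrial CO2, ppm
    years = np.arange(0, 201, 50)
    C = C0 * np.exp(0.005 * years)          # assumed ~0.5% per year exponential growth

    forcing = 5.35 * np.log(C / C0)         # W/m^2 (standard simplified expression)

    for y, c, f in zip(years, C, forcing):
        print(f"year {y:3d}: C = {c:6.1f} ppm, forcing = {f:4.2f} W/m^2")
    # The forcing rises by the same ~1.3 W/m^2 in each 50-year step:
    # the logarithm of an exponential is linear.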

There is on-going work on wider questions. For example, to what extent can ‘negative emissions technology’ counteract warming that is already in the pipeline?

Kirsten Zickfeld et al (2016) is one such paper, which suggests that “positive CO2 emissions are more effective at warming than negative emissions are at subsequently cooling”. So we need to be very careful in assuming we can reverse warming that is in the pipeline.

B. Transient Climate Response and Additional Warming Commitment

The ‘Transient Climate Response’ (TCR) reflects the warming that results when CO2 is added at 1% per year, which for a doubling of the concentration takes 70 years. This is illustrated quite well in a figure from a previous report (Reference: IPCC, 2001):

TAR Figure 9.1

The warming that results from this additional concentration of CO2 occurs over the same time frame. However, this does not include all the warming that will eventually result, because the earth system (principally the oceans and atmosphere) will take a long time to reach a new equilibrium in which all the flows of energy are brought back into a (new) balance. This will take at least 200 years (for lower emission scenarios) or much longer for higher emission levels. This additional warming commitment must be added to the TCR. Nevertheless, the TCR does represent perhaps 70% of the overall warming, and remains a useful measure when discussing policy options over the 21st Century.
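As a quick check on the ‘70 years’ figure: compound growth of 1% per year doubles the concentration after ln(2)/ln(1.01) years, as this one-liner (Python) confirms.

    import math
    print(math.log(2) / math.log(1.01))   # ~69.7 years to double at 1% per year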

This discussion excludes more uncertain and much longer term feedbacks involving, for example, changes to the polar ice sheets (and consequentially, the Earth’s albedo), release of methane from northern latitudes or methane clathrates from the oceans. These are not part of the ‘additional warming commitment’, even in the IPCC 2013 report, as they are considered too speculative and uncertain to be quantified.

. . o O o . .


Animating IPCC Climate Data

The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general audience. They are not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes) produced some guidelines a while ago on better use of language and diagrams.

Coming out of the HELIX project, and through a series of workshops, a collaboration with the Tyndall Centre and Climate Outreach has produced a comprehensive guide (Guide With Practical Exercises to Train Researchers In the Science of Climate Change Communication).

The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.

Much of this guidance concerns the ‘soft’ aspects of communication: how the communicator views themself; understanding the audience; building trust; coping with uncertainty; etc.

Some of this reflects ideas that are useful not just to scientific communication, but almost any technical presentation in any sector, but that does not diminish its importance.

This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.

I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.

In many years of being involved in presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communication potential but also risks of diagrams. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!

The unwritten rules relating to the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:

“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”

He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.

Tufte has been highly critical of the over reliance on Powerpoint for technical reporting (not just presentations) in NASA, because the form of the content degrades the narrative that should have been an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.

There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.

The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.

It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, etc. It might seem difficult to condense this material without loss of important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1500 pages long.

Nevertheless, the IPCC endeavours to help policy-makers by providing them with summaries and also a synthesis report, to provide the essential underlying knowledge that policy-makers need to inform their discussions on actions in response to the science.

However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.

This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.

So yes, it can be problematic to use artistic flair to improve the comprehensibility of the findings while risking the loss of the nuance and caution that are a hallmark of science. The countervailing risk is that people do not really ‘get it’, and do not appreciate what they are seeing.

We have seen with the Challenger reports that people did not appreciate the issue with the O-rings when, for example, key facts were buried five levels deep in indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings were lost in a fog of complexity.

That is why any attempt to improve the summaries for policy makers and the general public must continue to involve those who are responsible for the overall integrity and consistency of the different products, and not simply be hived off to a separate group of ‘creatives’ who would lack knowledge and insight into the nuance that needs to be respected. But those complementary skills – data visualizers, graphics artists, and others – need to be included in this effort to improve science communications. There is also a need for those able to critically evaluate the pedagogic value of the output (along the lines of Tufte), to ensure it really informs, and does not confuse.

Some individuals have taken to social media to present their own examples of how to present information, which often employs animation (something that is clearly not possible for the printed page, or its digital analogue, a PDF document). Perhaps the most well known example to date was Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:

[Animation: Ed Hawkins’ climate spiral, 2017]

This animation went viral, and was even featured as part of the Rio Olympics Opening Ceremony. This and other spiral animations can be found at the Climate Lab Book site.
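To give a flavour of what goes into such a visual, here is a minimal sketch (in Python/matplotlib, with purely synthetic anomaly data) of a spiral of this kind. It is not Ed Hawkins’ actual code or data, just an illustration of the technique.

    # A minimal 'climate spiral' sketch: monthly temperature anomalies on a polar axis,
    # one ring per year. The anomalies here are synthetic, purely for illustration.
    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(1850, 2018)
    theta = 2 * np.pi * np.arange(12) / 12              # one full turn per year

    rng = np.random.default_rng(0)
    trend = 0.008 * (years - years[0])                  # slow synthetic warming trend
    anomalies = trend[:, None] + 0.1 * rng.standard_normal((years.size, 12))

    ax = plt.subplot(111, projection='polar')
    for i, year in enumerate(years):
        r = 1.0 + anomalies[i]                          # offset so radii stay positive
        ax.plot(np.append(theta, theta[0]), np.append(r, r[0]),
                color=plt.cm.viridis(i / years.size), lw=0.5)

    ax.set_xticks(theta)
    ax.set_xticklabels(['J', 'F', 'M', 'A', 'M', 'J', 'J', 'A', 'S', 'O', 'N', 'D'])
    ax.set_yticklabels([])
    ax.set_title("Synthetic 'climate spiral' (illustrative only)")
    plt.show()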

There are now a number of other great producers of animations. Here follow a few examples.

Here, Kevin Pluck (@kevpluck) illustrates the link between the rising carbon dioxide levels and the rising mean surface temperature, since 1958 (the year when direct and continuous measurements of carbon dioxide were pioneered by Keeling).

Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.

Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.

A final example along the same lines is from John Kennedy (@micefearboggis), which is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar the higher the increase in temperature at that location, relative to the temperature baseline at that location (which scientists call the ‘anomaly’). This is why we see the greatest warming in the Arctic, as it is warming proportionally faster than the rest of the planet; this is one of the big takeaways from this animation.

These examples of animation are clearly not dumbing down the data – far from it. They improve the chances of the general public engaging with the data. This kind of animation of the data provides an entry point for those wanting to learn more. They can then move on to a narrative treatment, placing the animation in context, confident that they have grasped the essential information.

If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above that reveal the essential knowledge that needs to be communicated.

(c) Richard W. Erskine, 2018


When did you learn about the Holocaust?

“Where were you when Kennedy was shot?”,

used to be the question everyone asked, but of course it is an increasingly irrelevant question in an ageing population.

But a question that should never age, and should stay with us forever, is

“When did you learn about the holocaust?”.

I remember when I first learned about the holocaust, and it remains seared into my consciousness, thanks to a passionate and dedicated teacher, Mr Cromie.

I was a young child at a boarding school, Stouts Hill Preparatory School, in the little village of Uley in Gloucestershire. The school no longer exists but that memory never fades. You cannot ‘unlearn’ something like that.

I was no more than 12 at the time, so this would have been 1965 or earlier, and our teacher told us about the mass murder of the Jews in Nazi Germany, but with a sense of anger and resentment at the injustice of this monstrous episode in history. And it has often occurred to me since that the peak of this programme of murder was just 10 years before I was born.

But what did I learn and what did I remember? I learned about the gas chambers, and the burning of bodies, but it was all a kind of vague memory of an atrocity, difficult to properly make sense of at that age.

What we did not really learn was the process by which a civilised country like Germany could turn from being at the centre of European culture to a murderous genocidal regime in just a decade.

For British viewers, this story of inhumanity was often framed through the lens of Bergen-Belsen, because it was the Brits that liberated this Concentration Camp, and the influential Richard Dimbleby was there to deliver his sonorous commentary on the horrors of the skeletal survivors and piles of corpses.

But it is curious how this story is still the reflex image that many Britons have of the holocaust, and I have often wondered why.  The Conversation tried to provide an answer:

“But even though many, if not most, of those involved in the rescue and relief effort were aware of the fact that Jews made up the largest number of the victims, the evolving official British narrative sidestepped this issue. The liberation of Bergen-Belsen became separated from what the people held in this camp had had to endure, and why they had been incarcerated in the first place.

Instead, the liberation of Bergen-Belsen was transformed into a British triumph over “evil”. The event was used to confirm to the wider British public that the British Army had fought a morally and ethically justified war, that all the personal and collective sacrifices made to win the war had now been vindicated. Bergen-Belsen gave sense and meaning to the British military campaign against Nazi Germany and the Allied demand for an unconditional surrender. The liberation of the camp became Britain’s finest hour.”

Each country, each culture, and each person, constructs their own narrative to try to make sense of the horror.

But despite the horror of Bergen-Belsen, and the 35,000 who died there, it is barely a footnote in the industrialised murder campaign that the Nazi leadership planned and executed.

Despite the fact that most people are vaguely aware of a figure of several million Jews and others dying, they are rather less aware of the distinction between Concentration Camps and Death Camps (also known as Extermination Camps).

Many died in the numerous Concentration Camps, as Wikipedia describes:

“Many of the prisoners died in the concentration camps due to deliberate maltreatment, disease, starvation, and overwork, or they were executed as unfit for labor. Prisoners were transported in inhumane conditions by rail freight cars, in which many died before reaching their final destination. The prisoners were confined in the boxcars for days or even weeks, with little or no food or water. Many died of dehydration in the intense heat of summer or froze to death in winter. Concentration camps also existed in Germany itself, and while they were not specifically designed for systematic extermination, many of their inmates perished because of harsh conditions or they were executed.”

The death camps at Chełmno, Treblinka, Sobibór and Belzec were designed purely as places of murder.  It is not simply about the arithmetic of the holocaust. After all, the death squads and related actions in the east accounted for 2.5 million murders, and the death camps over 3 million. But it is the sheer refinement of the industrialization of murder at the Extermination Camps that is difficult to comprehend:

“Visitors to the sites of Belzec, Sobibor and Treblinka (of whom there are far, far fewer than travel to Auschwitz) are shocked by how tiny these killing camps were. A total of around 1.7 million people were murdered in these three camps – 600,000 more than the murder toll of Auschwitz – and yet all three could fit into the area of Auschwitz-Birkenau with room to spare. In a murder process that is an affront to human dignity at almost every level, one of the greatest affronts – and this may seem illogical unless you have actually been there – is that so many people were killed in such a small area.”

Auschwitz: The Nazis & The ‘Final Solution’ – Laurence Rees, BBC Books, 2005

Majdanek and Auschwitz also became Extermination Camps, but were dual purpose, also being used as Concentration Camps, so they had accommodation, bunks, and so forth that were not needed in the small camps designed purely for murder.

It is helpful to those who deny the holocaust or its full horror that Belzec, Sobibor and Treblinka have not entered into the public imagination in the way that Auschwitz has. Because Auschwitz was dual use, it is easier to play on this apparent ambiguity, and to construct a denial narrative along the lines of: many died from hard labour, it was not systematic murder.

And of course, not knowing about Belzec, Sobibor, Treblinka and Chełmno is a lot easier than knowing, because they expose the full, unadulterated horror.

Remember that the Final Solution came after a decade of murderous projects – the death squads in the east, the euthanasia programmes, and early experiments with gassing – which led to the final horror of the Extermination Camps.

You can never stop learning, because you will never hear all the details, read all the books, or hear all the testimonies.

But if you ever find yourself not feeling deeply uncomfortable (as well as deeply moved) by the horrors of the Holocaust, then it is time to not turn away. To take another look.

For us today, the most important lesson is that it is possible for even a sophisticated and educated country to succumb to a warped philosophy that blames the ‘other’ for problems in society, and to progressively desensitize the people to greater and greater levels of dehumanisation.

While nothing on the scale of the holocaust has occurred again, can we be confident that it never could? When we see what has happened under Pol Pot, or in Srebrenica, or in Rwanda, we know that the capacity of people to dehumanise ‘others’ for reasons of ethnicity or politics, and to murder them in large numbers, has not gone away.

The price of freedom, and decency in a society, is eternal vigilance.

Calling out hate speech is therefore, in a small way, honouring the 6 million – the great majority of whom were Jews – who died in the holocaust. It is stamping out that first step in that process of dehumanisation that is the common precursor of all genocidal episodes in history. It is always lurking there, waiting to consume a society that is looking for simple answers, and for someone to blame.

When did I learn about the holocaust?

I never stop learning.

 

#HolocaustMemorialDay #WeRemember


Matt Ridley shares his ignorance of climate science (again)

Ridley trots out a combination of long-refuted myths that are much loved by contrarians; bad or crank science; or misunderstandings as to the current state of knowledge. In the absence of a Climate Feedback dissection of Ridley’s latest opinion piece, here is my response to some of his nonsense …

Here are five statements he makes that I will refute in turn.

1. He says: Forty-five years ago a run of cold winters caused a “global cooling” scare.

I say:

Stop repeating this myth Matt! A few articles in popular magazines in the 1970s speculated about an impending ice age, and so dissemblers like Ridley state or imply that this was the scientific consensus at the time (snarky message: silly scientists can’t make their minds up). This is nonsense, but it is so popular amongst contrarians that it is repeated frequently to this day.

If you want to know what scientists were really thinking and publishing in scientific papers, read “The Myth of the 1970s Global Cooling Scientific Consensus”, by Thomas Peterson et al (2008), American Meteorological Society.

Warming, not cooling was the greater concern. It is astonishing that Ridley and others continue to repeat this myth. Has he really been unable – in the ten years since it was published – to read this oft cited article and so disabuse himself of the myth? Or does he deliberately repeat it because he thinks his readers are too lazy or too dumb to check the facts? How arrogant would that be?

2. He says: Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true.

I say:

Oh dear, he cites the work of Zharkova, saying he is not persuaded, but then talks of ‘slowly slipping into a proper ice age’. A curious non sequitur. While we are on Zharkova, her work suffered from being poorly communicated.

And quantitatively, her work has no relevance to the current global warming we are observing. The solar minimum might create a -0.3°C contribution over a limited period, but that would hardly put a dent in the +0.2°C per decade rate of warming.

But, let’s return to the ice age cycle. What Ridley obdurately refuses to acknowledge is that the current warming is occurring due to less than 200 years of man-made changes to the Earth’s atmosphere, which have raised CO2 to levels not seen for nearly 1 million years (equal to 10 ice age cycles) and are raising the global mean surface temperature at an unprecedented rate.

Therefore, talking about the long slow descent over thousands of years into an ice age that ought to be happening (based on the prior cycles) is frankly bizarre, especially given that the man-made warming is now very likely to delay a future ice age. As a paper by Ganopolski et al, Nature (2016), has estimated:

“Additionally, our analysis suggests that even in the absence of human perturbations no substantial build-up of ice sheets would occur within the next several thousand years and that the current interglacial would probably last for another 50,000 years. However, moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years.”

And why stop there, Matt? Our expanding sun will boil away the oceans in a billion years time, so why worry about Brexit; and don’t get me started on the heat death of the universe. It’s hopeless, so we might as well have a great hedonistic time and go to hell in a handcart! Ridiculous, yes, but no less so than Ridley conflating current man-made global warming with a far, far off ice age, that recedes with every year we fail to address man-made emissions of CO2.

3. He says: Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period, over 100,000 years ago, carbon dioxide levels remain high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward.

I say:

The ice ages have indeed been a focus of study since Louis Agassiz coined the term in 1837, and there have been many twists and turns in our understanding of them even up to the present day, but Ridley’s over-simplification shows his ignorance of the evolution of this understanding.

The Milankovitch cycles are key triggers for entering an ice age (and indeed leaving it), but changes in the atmospheric concentration of carbon dioxide drive the cooling (entering) and warming (leaving) of an ice age, something that was finally accepted by the science community following Hays et al’s seminal 1976 paper (“Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”), over 50 years after Milankovitch first did his work.

But the ice core data that Ridley refers to confirms that carbon dioxide is the driver, or ‘control knob’, as Professor Richard Alley explains it; and if you need a very readable and scientifically literate history of our understanding of the ice cores and what they are telling us, his book “The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and Our Future” is a peerless, and unputdownable introduction.

Professor Alley offers an analogy. Suppose you take out a small loan, but then interest is added, and keeps being added, so that after some years you owe a lot of money. Was it the small loan, or the interest rate, that created the large debt? You might say both, but it is certainly ridiculous to say that the interest rate is unimportant because the small loan came first.
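To put some numbers on that analogy – these are my invented, illustrative figures, not Professor Alley’s – a quick sketch:

    # A small 'loan' (the orbital trigger) plus compound 'interest' (the CO2 feedback):
    # after a while, the interest dominates the final debt.
    principal = 100.0      # the small loan
    rate = 0.05            # 5% interest per year
    years = 50

    debt = principal * (1 + rate) ** years
    print(f"Final debt: {debt:.0f}")                                         # ~1147
    print(f"Share due to interest: {100 * (debt - principal) / debt:.0f}%")  # ~91%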

But despite its complexity, and despite the fact that the so-called ‘lag’ does not refute the dominant role of CO2, scientists are interested in explaining such details and have indeed studied the ‘lag’. In 2012, Shakun and others published a paper doing just that: “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Jeremy D. Shakun et al, Nature 484, 49-54, 5 April 2012). Since you may struggle to see a copy of this paywalled paper, a plain-English summary is available.

Those who read headlines and not contents – like the US politician Joe Barton – might think this paper is challenging the dominant role of CO2, but the paper does not say that. The paper showed that some warming occurred prior to increased CO2, but this is explained as an interaction between the Northern and Southern hemispheres, following the original Milankovitch ‘forcing’.

The role of the oceans is crucial in fully explaining the temperature record, and can add significant delays in reaching a new equilibrium. There are interactions between the oceans in Northern and Southern hemispheres that are implicated in some abrupt climate change events (e.g.  “North Atlantic ocean circulation and abrupt climate change during the last glaciation”, L. G. Henry et al, Science,  29 July 2016 • Vol. 353 Issue 6298).

4. He says: Here is an essay by Willis Eschenbach discussing this issue. He comes to five conclusions as to why CO2 cannot be the main driver.

I say:

So Ridley quotes someone with little or no scientific credibility who has managed to publish in Energy & Environment. Its editor Dr Sonja Boehmer-Christiansen admitted that she was quite partisan in seeking to publish ‘sceptical’ articles (which actually means, contrarian articles), as discussed here.

Yet, Ridley extensively quotes this low grade material, but could have chosen from hundreds of credible experts in the field of climate science. If he’d prefer ‘the’ textbook that will take him through all the fundamentals that he seems to struggle to understand, he could try Raymond Pierrehumbert’s seminal textbook “Principles of Planetary Climate”. But no. He chooses Eschenbach, with a BA in Psychology.

Ridley used to put up the appearance of interest in a rational discourse, albeit flying in the face of the science. That mask has now fully and finally dropped, as he is now channeling crank science. This is risible.

5. He says: The Antarctic ice cores, going back 800,000 years, then revealed that there were some great summers when the Milankovich wobbles should have produced an interglacial warming, but did not. To explain these “missing interglacials”, a recent paper in Geoscience Frontiers by Ralph Ellis and Michael Palmer argues we need carbon dioxide back on the stage, not as a greenhouse gas but as plant food.

I say:

The paper is 19 pages long, which is unusual in today’s literature. The case made is intriguing but not convincing, but I leave it to the experts to properly critique it. It takes a complex system – where, for example, we know that large movements of heat in the ocean have played a key role in variability – and tries to infer that dust is the primary driver explaining the interglacials, while discounting the role of CO2 as a greenhouse gas.

The paper curiously does not cite the seminal paper by Hays et al (1976), yet cites a paper by Willis Eschenbach published in Energy & Environment (which I mentioned earlier). All this raised concerns in my mind about this paper.

Extraordinary claims require extraordinary evidence and scientific dialogue, and it is really too early to claim that this paper is something or nothing; even if that doesn’t mean waiting the 50 odd years that Milankovitch’s work had to endure, before it was widely accepted. Good science is slow, conservative, and rigorous, and the emergence of a consilience on the science of our climate has taken a very long time, as I explored in a previous essay.

Ralph Ellis on his website (which shows that his primary interest is the history of the life and times of Jesus) states:

“Ralph has made a detour into palaeoclimatology, resulting in a peer-review science paper on the causes of ice ages”, and after summarising the paper says,

“So the alarmists were right about CO2 being a vital forcing agent in ice age modulation – just not in the way they thought”.

So was this paper an attempt to clarify what was happening during the ice ages, or a contrivance, to take a pot shot at carbon dioxide’s influence on our contemporary climate change?

The co-author, Michael Palmer, is a biochemist, with no obvious background in climate science and provided “a little help” on the paper according to his website.

But on a blog post comment he offers a rather dubious extrapolation from the paper:

“The irony is that, if we should succeed in keeping the CO2 levels high through the next glacial maximum, we would remove the mechanism that would trigger the glacial termination, and we might end up (extreme scenario, of course) another Snowball Earth.”,

They both felt unembarrassed participating in comments on the denialist blog site WUWT. Quite the opposite, they gleefully exchanged messages with a growing band of breathless devotees.

But even if my concerns about the apparent bias and amateurism of this paper were allayed, the conclusion (which Ridley and Ellis clearly hold to) that the current increases in carbon dioxide are nothing to be concerned about does not follow from this paper. It is a non sequitur.

If I discovered a strange behaviour like, say, the Coriolis force way back when, the first conclusion would not be to throw out Newtonian mechanics.

The physics of CO2 is clear. How the greenhouse effect works is clear, including for the conditions that apply on Earth, with all remaining objections resolved since no later than the 1960s.

We have a clear idea of the warming effect of increased CO2 in the atmosphere including short term feedbacks, and we are getting an increasingly clear picture of how the Earth system as a whole will respond, including longer term feedbacks.  There is much still to learn of course, but nothing that is likely to require jettisoning fundamental physics.

The recent excellent timeline published by Carbon Brief showing the history of the climate models, illustrates the long slow process of developing these models, based on all the relevant fundamental science.

This history has shown how different elements have been included in the models as the computing power has increased – general circulation, ocean circulation, clouds, aerosols, carbon cycle, black carbon.

I think it is really because Ridley still doesn’t understand how an increase from 0.03% to 0.04% over 150 years or so, in the atmospheric concentration of CO2, is something to be concerned about (or as I state it in talks, a 33% rise in the principal non-condensing greenhouse gas, which avoids Ridley’s deliberately misleading formulation).

He denies that he denies the Greenhouse Effect, but every time he writes, he reveals that really, deep down, he still doesn’t get it. To be as generous as I can to him, he may suffer from a perpetual state of incredulity (a common condition I have written about before).

Conclusion

In an interview he gave to Russ Roberts at EconTalk.org in 2015, Matt Ridley reveals his inability to grasp even the most basic science:

“So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”

He seems to be saying that the water vapour is in the form of clouds – some high altitude, some low – which have opposite effects (so far, so good), so the warming should be 1°C – just the carbon dioxide component – from a doubling of CO2 concentrations (so far, so bad). The clouds represent a condensed (but not yet precipitated) phase of water in the atmosphere, but he seems to have overlooked that water also comes in a gaseous phase (not clouds). It is that gaseous phase that provides the additional warming, bringing the overall warming to 3°C.

The increase in water vapour concentrations follows from the fact that “a well-established physical law (the Clausius-Clapeyron relation) determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2).
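A minimal sketch of the feedback arithmetic involved, with an assumed no-feedback warming of about 1.2°C for doubled CO2 and an assumed net feedback factor chosen purely so the numbers land near the commonly quoted 3°C; these are illustrative values, not a climate model:

    # The no-feedback (CO2-only) warming is amplified because warmer air holds more
    # water vapour (~7% per degree C), and water vapour is itself a greenhouse gas.
    # With a net feedback factor f, the increments 1 + f + f^2 + ... sum to 1/(1 - f).
    delta_T_no_feedback = 1.2      # deg C for doubled CO2, before feedbacks (approx.)
    f = 0.6                        # assumed net feedback factor, for illustration

    amplification = 1.0 / (1.0 - f)
    delta_T_total = delta_T_no_feedback * amplification
    print(f"Warming for doubled CO2: ~{delta_T_total:.1f} C")                        # ~3 C

    # Water-holding capacity after that warming, relative to today (Clausius-Clapeyron):
    print(f"Water vapour capacity up by ~{(1.07 ** delta_T_total - 1) * 100:.0f}%")  # ~23%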

T.C. Chamberlin writing in 1905 to Charles Abbott, explained this in a way that is very clear, explaining the feedback role of water vapour:

“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”

(Ref. “Historical Perspectives On Climate Change” by James Fleming, 1998)

It is now 113 years since Chamberlin wrote those words, but poor Ridley is still struggling to understand basic physics, so instead regales us with dubious science intended to distract and confuse.

When will Matt Ridley stop feeling the need to share his perpetual incredulity and obdurate ignorance with the world?

© Richard W. Erskine, 2018


Ending The Climate Solution Wars: A Climate Solutions Taxonomy

If you spend even a little time looking at the internet and social media in search of enlightenment on climate solutions, you will have noted that there are passionate advocates for each and every solution out there, who are also experts in the shortcomings of competing solutions!

This creates a rather unhelpful atmosphere for those of us trying to grapple with the problem of addressing the very real risks of dangerous global warming.

There are four biases – often implied but not always stated – that lie at the heart of these unproductive arguments:

  • Lack of clear evidence of the feasibility of a solution;
  • Failure to be clear and realistic about timescales;
  • Tendency to prioritize solutions in a way that marginalizes others;
  • Preference for top-down (centralization) or bottom-up (decentralization) solutions.

Let’s explore how these manifest themselves:

Feasibility: Lack of clear evidence of the feasibility of a solution

This does not mean that an idea does not have promise (and isn’t worthy of R&D investment), but refers to the tendency to champion a solution based more on wishful thinking than any proven track record. For example, small modular nuclear has been championed as the path to a new future for nuclear – small, modular, scaleable, safe, cheap – and there is an army of people shouting that this is true. We have heard recent news that the economics of small nuclear are looking a bit shaky. This doesn’t mean it’s dead, but it does rather put the onus on the advocates to prove their case, and cut the PR, as Richard Black has put it. Another one that comes to mind is ‘soil carbon’ as the single-handed saviour (as discussed in Incredulity, Credulity and the Carbon Cycle). The need to reform agriculture is clear, but it is also true (according to published science) that a warming earth could make soils a reinforcer of warming, rather than a cooling agent; the wisdom of resting our hopes on regenerative farming as the whole, or even a major part, of the answer is far from clear. The numbers are important.

Those who do not wish to deal with global warming (either because they deny its seriousness or because they do not like the solutions) quite like futuristic solutions, because while we are debating long-off solutions, we are distracted from focusing on implementing existing solutions.

Timescale: Failure to be clear and realistic about timescales

Often we see solutions that clearly have promise and will be able to make a major contribution in the future. The issue is that even when they have passed the feasibility test, they fail to meet the timescale required. There is not even one timescale, as discussed in Solving Man-made Global Warming: A Reality Check, as we have an immediate need to reduce carbon emissions (say, 0-10 years), then an intermediate timeframe in which to implement an energy transition (say, 10-40 years). Renewable energy is key to the latter but cannot make a sufficient contribution to the former (that can only be done by individual and community reductions in carbon intensity). And whatever role Nuclear Fusion has for the future of humanity, it is totally irrelevant to solving the challenge we have in the next 50 years to decarbonize our economy.

The other aspect of timescale that is crucial is that the eventual warming of the planet is strongly linked to the peak atmospheric concentration, whereas the peak impacts will be delayed for decades or even centuries, before the Earth system finally reaches a new equilibrium. Therefore, while the decarbonization strategy requires solutions over, say, the 2020-2050 timeframe, the implied impacts timeframe could be 2050-2500, and this delay can make it very difficult to appreciate the urgency of action.

Priority: Tendency to prioritize solutions in a way that precludes others

I was commenting on Project Drawdown on twitter the other day and this elicited a strong response because of a dislike of a ‘list’ approach to solutions. I also do not like ‘lists’ when they imply that the top few should be implemented and the bottom ones ignored.  We are in an ‘all hands on deck’ situation, so we have to be very careful not to exclude solutions that meet the feasibility and timescale tests. Paul Hawken has been very clear that this is not the intention of Project Drawdown (because the different solutions interact and an apparently small solution can act as a catalyst for other solutions).

Centralization: Preference for top-down (centralization) or bottom-up (decentralization) solutions.

Some people like the idea of big solutions, which are often underwritten, at least in part, by centralised entities like Governments. They argue that big impacts require big solutions, and so they have a bias towards solutions like nuclear and an antipathy to lower-tech and less energy-intensive solutions like solar and wind.

Others share quite the opposite perspective. They are suspicious of Governments and big business, and like the idea of community-based, less intensive solutions. They are often characterized as being unrealistic, because the unending thirst of humanity for consumption suggests an unending need for highly intensive energy sources.

The antagonism between these world views often obscures the obvious: that we will need both top-down and bottom-up solutions. We cannot all have everything we would like. Some give and take will be essential.

This can make for strange bedfellows. Both environmentalists and Tea Party members in Florida supported renewable energy for complementary reasons, and they became allies in defeating large private utilities who were trying to kill renewables.

To counteract these biases, we need to agree on some terms of reference for solving global warming.

  • Firstly, we must of course be guided by the science (namely, the IPCC reports and its projections) in order to measure the scale of the response required. We must take a risk management approach to the potential impacts.
  • Secondly, we need to start with an ‘all hands on deck’ or inclusive philosophy: because we have left it so late to tackle decarbonization, we must be very careful before we throw out any ideas.
  • Thirdly, we must agree on a relevant timeline for those solutions we will invest in and scale immediately. For example, for Project Drawdown, that means solutions that are proven, can be scaled and make an impact over the 2020-2050 timescale. Those that cannot need not be ‘thrown out’ but may need more research & development before they move to being operationally scaled.
  • Fourthly, we allow both top-down (centralized) and bottom-up (decentralized) solutions, but recognise that while Governments dither, it will be up to individuals and social enterprises to act, and so in the short to medium term it will be the bottom-up solutions that have the greater impact. Ironically, the much-feared ‘World Government’ that right-wing conspiracy theorists dread is not what we need right now, and on that the environmentalists mostly agree!

In the following Climate Solutions Taxonomy I have tried to provide a macro-level view of different solution classes. I have included some solutions to which I am not sympathetic, such as nuclear and geo-engineering. But bear in mind that the goal here is to map out all solutions. It is not ‘my’ solutions, and is not itself a recommendation or plan.

On one axis we have the top-down versus bottom-up dimension, and on the other axis, broad classes of solution. The taxonomy is therefore not a simple hierarchy, but is multi-dimensional (here I show just two dimensions, but there are more).

Climate Solutions Taxonomy macro view

While I would need to go to a deeper level to show this more clearly, the arrows are suggestive of the system feedbacks that reflect synergies between solutions. For example, solar PV in villages in East Africa supports education, which in turn supports improvements in family planning.
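As a minimal sketch of how such a multi-dimensional taxonomy, with its synergy links, might be captured in a simple data structure (the entries and their placements below are purely illustrative, not a recommendation):

    # Illustrative only: each solution carries its class and its place on the
    # top-down / bottom-up axis; synergies are directed 'supports' links.
    solutions = {
        "solar_pv":        {"class": "energy supply", "axis": "bottom-up"},
        "nuclear":         {"class": "energy supply", "axis": "top-down"},
        "reforestation":   {"class": "land use",      "axis": "both"},
        "girls_education": {"class": "social",        "axis": "bottom-up"},
        "family_planning": {"class": "social",        "axis": "bottom-up"},
    }

    synergies = [
        ("solar_pv", "girls_education"),       # e.g. village solar PV supports education
        ("girls_education", "family_planning"),
    ]

    for a, b in synergies:
        print(f"{a} supports {b}")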

It is incredible to me that while we have (properly) invested a lot of intellectual and financial resources in scientific programmes to model the Earth’s climate system (and impacts), there has been dramatically less modelling effort on the economic implications that will help support policy-making (based on the damage from climate change, through what are called Integrated Assessment Models).

But what is even worse is that there seems to have been even less effort – or barely any –  modelling the full range of solutions and their interactions. Yes, there has been modelling of, for example, renewable energy supply and demand (for example in Germany), and yes, Project Drawdown is a great initiative; but I do not see a substantial programme of work, supported by Governments and Academia, that is grappling with the full range of solutions that I have tried to capture in the figure above, and providing an integrated set of tools to support those engaged in planning and implementing solutions.

This is unfortunate at many levels.

I am not here imagining some grand unified theory of climate solutions, where we end up with a spreadsheet telling us how much solar we should build by when and where.

But I do envisage a heuristic tool-kit that would help a town such as the one in which I was born (Hargeisa in Somaliland), or the town in which I now live (Nailsworth in Gloucestershire in the UK), to work through what works for them, and to plan and deliver solutions. Each may arrive at different answers, but all need to be grounded in a common base of data and ‘what works’, and a more qualitative body of knowledge on synergies between solutions.

Ideally, the tool-kit would be usable at various levels of granularity, so that it could be used at different scales, with different solutions emerging at each scale.

A wide range of both quantitative and qualitative methods may be required to grapple with the range of information covered here.

I am looking to explore this further, and am interested in any work or insights people have. Comments welcome.

(c) Richard W. Erskine, 2017


Deficit, Debt and stalling carbon dioxide emissions

This essay is based on an extract from a talk I did recently that was well received. This specific part of the talk was described as very helpful in clarifying matters related to our carbon dioxide emissions. I hope others also find it useful. 

David Cameron said on 24 January 2013 “We’re paying down Britain’s debts” and got a lot of stick for this misleading statement. Why? Let me try to explain.

The deficit is the annual amount by which we spend more than we get in taxes. Whereas, the debt is the cumulative sum of year on year deficits.

As many politicians do, Cameron was using language designed to be, shall we say, ‘economical with the truth’. He was not the first, and he won’t be the last.

We can picture deficit being added to our debt using the following picture (or for greater dramatic effect, do it live if you are giving a talk):

The annual deficit being added to the accumulated debt (illustration)

If the deficit declines this year compared to last year, that may be of great solace to the Chancellor (and that was the situation in 2013), because it may be the start of a trend that eventually sees the debt reach a peak.

Cameron could have said, “Our debt keeps rising, but at least the rate at which it is rising is slightly less than last year. We’ll need to borrow some more to cover the additional deficit.” That would have been an honest statement, but he didn’t say it. It simply wouldn’t have cut it with the spin doctors.

The reality is that the only thing we can conclude from a deficit this year that is smaller than last year’s is that the debt has increased by an amount less than it did last year. That’s it. It doesn’t sound quite so great put that way, does it?

You need year-on-year surpluses to actually bring the debt down.
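
To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely made-up numbers, not real Treasury figures): the deficit falls year on year, yet the debt keeps rising.

```python
# Minimal sketch: the debt is the running total of annual deficits.
# The numbers below are made up purely to illustrate the point.

deficits = [120, 100, 90, 80]   # hypothetical annual deficits (£ billions)

debt = 0
for year, deficit in enumerate(deficits, start=1):
    debt += deficit             # this year's deficit is added to the debt
    print(f"Year {year}: deficit = {deficit}, debt = {debt}")

# Output: the deficit shrinks each year, but the debt rises every year:
# Year 1: deficit = 120, debt = 120
# Year 2: deficit = 100, debt = 220
# Year 3: deficit = 90, debt = 310
# Year 4: deficit = 80, debt = 390
```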

Deficit and debt are useful in making an analogy with carbon dioxide in the atmosphere, because the confusion – intended or accidental – over deficit and debt is very similar to the confusion that occurs in the mind of the public when the media report changes in our carbon emissions.

Let’s explore the analogy by replacing “Deficit” with “Emissions”, and “Debt” with “Atmospheric Concentration” …

The annual emissions add to the cumulative emissions in the atmosphere, i.e. the raised Atmospheric Concentration.

Annual emissions adding to the atmospheric concentration (illustration)

There are two differences with the financial analogy when we think about carbon dioxide in the atmosphere.

Firstly, when we add, say, 40 billion tonnes of carbon dioxide to the atmosphere (the green coloured area represents the added carbon dioxide) …

40 billion tonnes of carbon dioxide added to the atmosphere, shown as a green area (illustration)

… then, within a short time (about 5 years), roughly 50% of the added carbon dioxide (that is, 20 billion tonnes in this illustration) is absorbed by the oceans and biosphere, while the remainder stays in the atmosphere. We can visualize this balance as follows (Credit: Rabett Run, which includes a more technical description and an animation) –

The balance between the carbon dioxide remaining in the atmosphere and that absorbed by the oceans and biosphere (illustration)

Secondly, unlike with the economy, once the atmospheric concentration of carbon dioxide goes up, it stays up for hundreds of years (and to get back to where it started, thousands of years), because, for one thing, the processes that take carbon from the upper ocean to the deep ocean are very slow.

Unlike with the economy, our added carbon dioxide concentration in the atmosphere always goes in the wrong direction; it increases.

So when we see stories that talk about “emissions stalling” or other phrases that seem to offer reassurance, remember, they are talking about emissions (deficit) NOT concentrations (debt).

The story title below is just one example, taken from the Financial Times (and I am not picking on the FT, but it shows that this is not restricted to the tabloids).

Whenever we see a graph of emissions over the years (graph on the left), the Health Warning should always be the Keeling Curve (graph on the right).

Annual emissions over the years (left) alongside the Keeling Curve (right) (illustration)

So the global carbon dioxide emissions in 2014 and 2015 were 36.08 and 36.02 billion tonnes, respectively. Cause for cautious rejoicing? Well, given the huge number of variables that go into this figure (the GDP of each nation; their carbon intensity; the efficiency level of equipment and transport; and so on), projecting a trend from a few years is a tricky business, and some have devoted their lives to tracking this figure. Important work, for sure.

Then 2016 came along and the figure was similar but slightly raised, at 36.18 billion tonnes.

But emissions were said to have stalled … 36.08, 36.02 and 36.18.

I liken this to heading for the cliff edge at a steady pace, but at least no longer accelerating. Apparently that is meant to be reassuring.

Then comes the projected figure for 2017, which includes a bit of a burp of carbon dioxide from the oceans – courtesy of the strong El Niño – and this was even predicted; yet, horror of horrors, it makes headline news around the world.

We have jumped by 2% over the previous year (actually 1.7% to 36.79 billion tonnes). Has the ‘stall’ now unstalled? What next?

The real headline is that we are continuing to emit over 35 billion tonnes of carbon dioxide, year on year without any sign of stopping.

Only when emissions go down to 0 (zero), will the atmospheric concentration STOP rising.
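
To see this in rough numbers, here is a back-of-the-envelope sketch using the emissions figures above. The assumptions are mine, for illustration only: roughly 50% of emitted carbon dioxide stays airborne, and roughly 7.8 billion tonnes of CO2 corresponds to about 1 ppm of concentration; the final year imagines emissions stopping entirely.

```python
# Rough sketch: even "stalled" emissions keep pushing the concentration up.
# Assumptions (mine, for illustration): ~50% of emitted CO2 stays airborne,
# and ~7.8 billion tonnes of CO2 corresponds to about 1 ppm of concentration.

emissions = {2014: 36.08, 2015: 36.02, 2016: 36.18, 2017: 36.79,
             2018: 0.0}          # a hypothetical year with zero emissions
airborne_fraction = 0.5          # approximate share remaining in the atmosphere
gtco2_per_ppm = 7.8              # approximate conversion factor

concentration = 400.0            # ppm; a round illustrative starting value
for year, gtco2 in emissions.items():
    concentration += airborne_fraction * gtco2 / gtco2_per_ppm
    print(f"{year}: emissions = {gtco2:5.2f} GtCO2, concentration ≈ {concentration:.1f} ppm")

# Each "stalled" year still adds roughly 2.3 ppm; the concentration only
# stops rising in the hypothetical year when emissions fall to zero.
```

The point of the sketch is only the shape of the numbers: the concentration climbs whenever emissions are above zero, however flat the emissions line looks.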

So what word do we want to describe our emissions? Not stall, not plateau, not ease back, but instead stop, finito or end. They’ll do.

I have discovered – from talking to people who do not follow climate change on Twitter or the blogosphere, and who are not fans of complex data analysis – that what I explained above was very helpful, but is not widely appreciated.

But in a sense, this is probably the most important fact about climate change that everyone needs to understand:

the carbon dioxide concentration will only stop rising when emissions completely stop.

The second most important fact is this:

whatever value the atmospheric concentration of carbon dioxide gets to – at that point in the future when we stop adding more – that is where it will stay for my grandchild, and her grandchildren, and their grandchildren, and so on … for centuries* to come.

The Keeling Curve – which measures the global atmospheric concentration of carbon dioxide – is the only curve that matters, because until it flattens we will not know how much warming there will actually be. That is because of the third most important fact people must understand:

broadly speaking, the level of warming is proportional to the peak concentration of carbon dioxide.
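
As a very crude illustration of that proportionality (my own back-of-the-envelope scaling, not how climate projections are actually made), one can scale the roughly 1°C of warming seen so far by the rise in concentration above the pre-industrial level of about 280 ppm:

```python
# Crude linear scaling, for illustration only (not a climate model):
# warming is assumed proportional to the concentration rise above pre-industrial.

PRE_INDUSTRIAL_PPM = 280.0   # approximate pre-industrial concentration
CURRENT_PPM = 400.0          # approximate current concentration
CURRENT_WARMING_C = 1.0      # approximate warming to date, in °C

def rough_warming(peak_ppm: float) -> float:
    """Scale today's warming linearly with the rise above pre-industrial."""
    per_ppm = CURRENT_WARMING_C / (CURRENT_PPM - PRE_INDUSTRIAL_PPM)
    return per_ppm * (peak_ppm - PRE_INDUSTRIAL_PPM)

for peak in (450, 500, 560):
    print(f"Peak of {peak} ppm -> roughly {rough_warming(peak):.1f} °C above the baseline")

# e.g. a peak of 560 ppm (double pre-industrial) comes out at roughly 2.3 °C
# on this crude scaling; the actual outcome depends on climate sensitivity.
```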

So when we see stories that talk about “emissions stalling” or other phrases that seem to offer hope that we’ve turned a corner, remember, they are talking about emissions (deficit) NOT concentrations (debt).

It is amazing how often the deficit/debt confusion is played on by politicians regarding the nation’s finances.

The ’emissions stalling’ narrative of the last few years has led many to imagine we are, if not out of the woods, then at least on our way out; but I think the confusion here reflects a failure of the media and other science communicators to always provide a clear health warning.

The truth is that we, as a species, are a long way still from showing a concerted effort to get out of the woods. Worse still, we are arguing amongst ourselves about which path to take.

(c) Richard W. Erskine, 2017

 

[* Unless and until we find a way to artificially extract and sequester carbon dioxide; this is still only R&D and not proven at scale yet, so does not rescue the situation we face in the period leading to 2050. We need to halt emissions, not just “stall” them.]

#carbondioxide #emissions #debt #deficit


Musing on the loss of European Medicines Agency (EMA) from the UK

People are arguing over whether the loss of the EMA from the UK will hurt us or not, and I think they are missing some nuance.

The ICH (International Council for Harmonisation) has helped pharma to harmonize the way drugs are tested, licensed and monitored globally (albeit with variations), enabling drugs to be submitted for licensing in the largest number of countries possible.

For the UK’s Big Pharma, the loss of the EMA is a blow but not a fatal one; they have entities everywhere, and they’ll find a way.

There are 3 key issues I see, around Network, Innovation and Influence:

  1. Network – New drug development is now more ‘ecosystem’ based, not just big pharma working alone, and the UK has lots of large, medium and small pharma companies, in both private and public institutions (Universities, the Francis Crick Institute, etc.). So do other EU countries, which form part of the extended network of collaboration. The UK leaving the EU will disrupt this network, and the loss of the EMA subtly changes the centre of power.
  2. Innovation – Further to the damage to networks, and despite the ICH’s harmonization, being outside of the EU inevitably creates issues for the smaller innovators, with less reach, shallower pockets, and a greater challenge in adapting to the new reality.
  3. Influence – Not being at the EMA table (wherever its HQ is based) means that we cannot guide the development of regulation, which is on an inexorable path of ever greater harmonization. Despite the UK’s self-loathing re. ‘not being as organized as the Germans’, the Brits have always been better than most at regulation; it’s deep in our culture (indeed, many of the EU regulations that neoliberals rail against have been gold-plated by the UK when they reach our shores). But outside the EU, and outside the EMA, we won’t be in a position to apply these skills, and our influence will wane.

Unfortunately, the Brexiters have shown that they misunderstand the complexity not merely of supply chains in, for example, the automotive sector, but also of the more subtle connections that exist in highly sophisticated development lifecycles and highly regulated sectors, like pharmaceuticals.

A key regulatory body moving from our shores will have long term consequences we cannot yet know.

Can Britain adapt to the new reality?

Of course it can, but do not expect it to be easy, quick or cheap to do so.

Expect some pain.

 
