Ending The Climate Solution Wars: A Climate Solutions Taxonomy

If you spend even a little time looking at the internet and social media in search of enlightenment on climate solutions, you will have noted that there are passionate advocates for each and every solution out there, who are also experts in the shortcomings of competing solutions!

This creates a rather unhelpful atmosphere for those of us trying to grapple with the problem of addressing the very real risks of dangerous global warming.

There are four biases – often implied but not always stated – that lie at the heart of these unproductive arguments:

  • Lack of clear evidence of the feasibility of a solution;
  • Failure to be clear and realistic about timescales;
  • Tendency to prioritize solutions in a way that precludes others;
  • Preference for top-down (centralization) or bottom-up (decentralization) solutions.

Let’s explore how these manifest themselves:

Feasibility: Lack of clear evidence of the feasibility of a solution

This does not mean that an idea does not have promise (and isn’t worthy of R&D investment), but refers to the tendency to champion a solution based more on wishful thinking than any proven track record. For example, small modular nuclear has been championed as the path to a new future for nuclear – small, modular, scalable, safe, cheap – and there is an army of people shouting that this is true. We have heard recent news that the economics of small nuclear are looking a bit shaky. This doesn’t mean it’s dead, but it does rather put the onus on the advocates to prove their case, and cut the PR, as Richard Black has put it. Another one that comes to mind is ‘soil carbon’ as the single-handed saviour (as discussed in Incredulity, Credulity and the Carbon Cycle). The need to reform agriculture is clear, but it is also true (according to published science) that a warming earth could make soils a reinforcer of warming, rather than a cooling agent; the wisdom of resting our hopes on regenerative farming as the whole, or even a major part, of the solution is far from clear. The numbers are important.

Those who do not wish to deal with global warming (either because they deny its seriousness or because they do not like the solutions) quite like futuristic solutions, because while we are debating far-off solutions, we are distracted from implementing existing ones.

Timescale: Failure to be clear and realistic about timescales

Often we see solutions that clearly have promise and will be able to make a major contribution in the future. The issue is that even when they pass the feasibility test, they fail to deliver on the timescale required. There is not even a single timescale, as discussed in Solving Man-made Global Warming: A Reality Check, as we have an immediate need to reduce carbon emissions (say, 0-10 years), then an intermediate timeframe in which to implement an energy transition (say, 10-40 years). Renewable energy is key to the latter but cannot make a sufficient contribution to the former (that can only be done by individuals and communities reducing their carbon intensity). And whatever role nuclear fusion has for the future of humanity, it is totally irrelevant to the challenge we face in the next 50 years to decarbonize our economy.

The other aspect of timescale that is crucial is that the eventual warming of the planet is strongly linked to the peak atmospheric concentration, whereas the peak impacts will be delayed for decades or even centuries, before the Earth system finally reaches a new equilibrium. So while the decarbonization strategy requires solutions over, say, the 2020-2050 timeframe, the implied impacts timeframe could be 2050-2500, and this delay can make it very difficult to appreciate the urgency for action.

Priority: Tendency to prioritize solutions in a way that precludes others

I was commenting on Project Drawdown on Twitter the other day and this elicited a strong response because of a dislike of a ‘list’ approach to solutions. I also do not like ‘lists’ when they imply that the top few should be implemented and the bottom ones ignored. We are in an ‘all hands on deck’ situation, so we have to be very careful not to exclude solutions that meet the feasibility and timescale tests. Paul Hawken has been very clear that this is not the intention of Project Drawdown (because the different solutions interact, and an apparently small solution can act as a catalyst for others).

Centralization: Preference for top-down (centralization) or bottom-up (decentralization) solutions

Some people like the idea of big solutions, which are often underwritten, at least in part, by centralised entities like Governments. They argue that big impacts require big solutions, and so they have a bias towards solutions like nuclear and an antipathy to lower-tech, less energy-intensive solutions like solar and wind.

Others take quite the opposite perspective. They are suspicious of Governments and big business, and like the idea of community-based, less intensive solutions. They are often characterized as unrealistic, because the unending thirst of humanity for consumption suggests an unending need for highly intensive energy sources.

The antagonism between these world views often obscures the obvious: that we will need both top-down and bottom-up solutions. We cannot all have everything we would like. Some give and take will be essential.

This can make for strange bedfellows. Both environmentalists and Tea Party members in Florida supported renewable energy for complementary reasons, and they became allies in defeating large private utilities who were trying to kill renewables.

To counteract these biases, we need to agree on some terms of reference for solving global warming.

  • Firstly, we must of course be guided by the science (namely, the IPCC reports and their projections) in order to measure the scale of the response required. We must take a risk management approach to the potential impacts.
  • Secondly, we need to start with an ‘all hands on deck’ or inclusive philosophy: because we have left it so late to tackle decarbonization, we must be very careful before we throw out any ideas.
  • Thirdly, we must agree on a relevant timeline for those solutions we will invest in and scale immediately. For example, for Project Drawdown, that means solutions that are proven, can be scaled and make an impact over the 2020-2050 timescale. Those that cannot need not be ‘thrown out’ but may need more research & development before they move to being operationally scaled.
  • Fourthly, we allow both top-down (centralized) and bottom-up (decentralized) solutions, but recognise that while Governments dither, it will be up to individuals and social enterprises to act, and so in the short-to-medium term it will be the bottom-up solutions that have the greater impact. Ironically, the ‘World Government’ that right-wing conspiracy theorists most fear is not what we need right now, and on that the environmentalists mostly agree!

In the following Climate Solutions Taxonomy I have tried to provide a macro-level view of different solution classes. I have included some solutions to which I am not sympathetic, such as nuclear and geo-engineering. But bear in mind that the goal here is to map out all solutions. It is not ‘my’ set of solutions, and it is not itself a recommendation or plan.

On one axis we have the top-down versus bottom-up dimension, and on the other axis, broad classes of solution. The taxonomy is therefore not a simple hierarchy, but is multi-dimensional (here I show just two dimensions, but there are more).

Climate Solutions Taxonomy macro view

While I would need to go to a deeper level to show this more clearly, the arrows are suggestive of the system feedbacks that reflect synergies between solutions. For example, solar PV in villages in East Africa supports education, which in turn supports improvements in family planning.
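To make this a little more concrete, here is a minimal sketch in Python of how such a structure might be represented, with each solution positioned on the two axes and carrying links for synergies. The entries and names are purely illustrative (they are not the actual contents of the taxonomy):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Solution:
    name: str
    solution_class: str   # axis 1: broad class of solution, e.g. "energy supply"
    governance: str       # axis 2: "top-down" or "bottom-up"
    synergies: List[str] = field(default_factory=list)  # solutions this one reinforces

# Illustrative entries only, echoing the East Africa example above.
taxonomy = [
    Solution("village solar PV", "energy supply", "bottom-up", synergies=["education"]),
    Solution("education", "social development", "bottom-up", synergies=["family planning"]),
    Solution("family planning", "social development", "bottom-up"),
    Solution("grid-scale nuclear", "energy supply", "top-down"),
]

# Query one cell of the two-dimensional grid: bottom-up energy supply solutions.
cell = [s.name for s in taxonomy
        if s.governance == "bottom-up" and s.solution_class == "energy supply"]
print(cell)  # -> ['village solar PV']

# Follow the synergy arrows outwards from one solution.
def downstream(name: str, depth: int = 0) -> None:
    sol = next(s for s in taxonomy if s.name == name)
    print("  " * depth + sol.name)
    for linked in sol.synergies:
        downstream(linked, depth + 1)

downstream("village solar PV")  # prints the PV -> education -> family planning chain
```

Even a toy structure like this makes the point that the taxonomy is a graph, not a ranked list: solutions occupy cells of a grid and are joined by synergy links.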

It is incredible to me that while we have (properly) invested a lot of intellectual and financial resources in scientific programmes to model the Earth’s climate system (and impacts), there has been dramatically less effort on modelling the economic implications needed to support policy-making (assessing the damage from climate change through what are called Integrated Assessment Models).

But what is even worse is that there seems to have been even less effort – or barely any –  modelling the full range of solutions and their interactions. Yes, there has been modelling of, for example, renewable energy supply and demand (for example in Germany), and yes, Project Drawdown is a great initiative; but I do not see a substantial programme of work, supported by Governments and Academia, that is grappling with the full range of solutions that I have tried to capture in the figure above, and providing an integrated set of tools to support those engaged in planning and implementing solutions.

This is unfortunate at many levels.

I am not here imagining some grand unified theory of climate solutions, where we end up with a spreadsheet telling us how much solar we should build by when and where.

But I do envisage a heuristic tool-kit that would help a town such as the one in which I was born (Hargeisa in Somaliland), or the town in which I now live (Nailsworth in Gloucestershire in the UK), to work through what works for them, and to plan and deliver solutions. Each may arrive at different answers, but all need to be grounded in a common base of data on ‘what works’, and a more qualitative body of knowledge on synergies between solutions.

Ideally, the tool-kit would be usable at various levels of granularity, so that it could be applied at different scales, with different solutions emerging at each scale.

A wide range of both quantitative and qualitative methods may be required to grapple with the range of information covered here.

I am looking to explore this further, and am interested in any work or insights people have. Comments welcome.

(c) Richard W. Erskine, 2017


Butterflies, Brexit & Brits

I attended an inspiring talk by Chris Packham in Stroud at the launch of Stroud Nature’s season of events. Chris was there to show his photographs but naturally ranged over many topics close to his heart.

The catastrophic drop in species numbers in the UK is one topic he has written about recently: hedgehog numbers have fallen by 97% since the 1950s, and the Heath Fritillary by 82% in just a decade.

These are just two stats in a long list that attest to this catastrophe.

Chris talked about how brilliant amateur naturalists are in the UK – better than in any other country – in the recording of flora and fauna. They are amateur only in the sense that they do not get paid, but highly professional in the quality of their work. That is why we know about the drop in species numbers in such comprehensive detail. It appears that this love of data is not a new phenomenon.

I have been a lover of butterflies since I was very young. I came into possession of a family heirloom when I was just 7 years old, which gave a complete record of the natural history of butterflies and moths in Great Britain in the 1870s. Part of what made this book so glorious was the intimate accounts of amateur scientists who meticulously recorded sightings and corresponded through letters and journals.


The Brits it seems are crazy about nature, and have this ability to record and document. We love our tick boxes and lists, and documenting things. It’s part of our culture.

I remember once doing a consultancy for a German car manufacturer who got a little irritated by our British team’s insistence on recording all meetings and then reminding the client of agreed points later, when they tried to change the requirements late in the project: “you Brits do love to write things down, don’t you!”.

Yes we do.

But there is a puzzling contradiction here. We love nature, we love recording data, and yet somehow we have allowed species to be harmed, and have failed to stop it. Is this a naive trust in institutions to act on our behalf, or a lack of knowledge in the wider population as to the scale of the loss?

I heard it said once (but struggle to find the appropriate reference) that the Normans were delighted after conquering Britain in 1066 to find that unlike most of Europe, the British had a highly organised administration and people paid their dues. Has anything changed?

But we have our limits. Thatcher’s poll tax demonstrated her lack of understanding of the British character. We will riot when pushed too hard – and I don’t know what you think, but by God they frighten me (as someone might have said). Mind you, I can imagine British rioters forming an orderly queue to collect their Molotov cocktails. Queue jumping is the ultimate sin. Rules must be obeyed.

I have a friend in the finance sector, and we were having a chat about regulations. I asked if it was true that in his sector Brussels ‘dictated’ unreasonable regulations. “Not at all”, he said. “For one thing, Brits are the rule writers par excellence, and the Brits will often gold-plate a regulation from Brussels.”

Now, I am sure some will argue that yes, we Brits are rule followers and love a good rule, but would prefer it if they were always our rules, and solely our rules. A great idea, except that it is a total illusion to imagine that we can trade in high-value goods and services without agreeing on rules with other countries.

In sectors like Chemicals and Pharmaceuticals where the UK excels, there are not only European regulations (concerning safety, licensing, event reporting, etc. – all very reasonable and obvious regulations by the way) but International ones. In Pharma, the ICH.org has Harmonization in its title for a reason, and is increasingly global in nature.

Innovation should be about developing the best medicines, not reinventing protocols for drug trials or the design of a drug dossier used for multi-country licensing applications. One can develop an economy on a level playing field.

The complete freedom the hard-right Brexiteers dream of rather highlights their lack of knowledge of how the world works.

Do we really think we can tear up regulations such as REACH and still trade in chemicals, in Europe or even elsewhere?

And are we really going to tear up the Bathing Water Directive?

Maybe Jacob Rees-Mogg fancies going to the beach and rediscovering the delights of going through the motions, but I suspect the Great British Public might well riot at the suggestion, or at least, get very cross. 

Richard Erskine, 10th July 2018


Experiments in Art & Science

My wife and I were on our annual week-end trip to Cambridge to meet up with my old Darwinian friend Chris and his wife, for the usual round of reminiscing, punting and all that. On the Saturday (12th May) we decided to go to Kettle’s Yard to see the house and its exhibition and take in a light lunch.

As we were about to get our (free) tickets for the house visit, we saw people in T-shirts publicising a Gurdon Institute special event in partnership with Kettle’s Yard that we had been unaware of:

Experiments in Art & Science

A new collaboration between three contemporary artists 

and scientists from the Gurdon Institute, 

in partnership with Kettle’s Yard

The three artists in question were Rachel Pimm, David Blandy and Laura Wilson, each responding to a different strand of work being done at the labs.

This immediately grabbed our attention and we changed tack, and went to the presentation and discussion panel, intrigued to learn more about the project.

The Gurdon Institute do research exploring the relationship between human disease and development, through all stages of life. They use the tools of molecular biology, including model systems that share a lot of their genetic make-up with humans. There were fascinating insights into how the environment can influence creatures, in ways that force us to relax Crick’s famous ‘Central Dogma’. But I am jumping into the science of what I saw; the purpose of this essay is to explore the relationship between art and science.

I was interested to learn if this project was about making the science more accessible – to draw in those who may be overwhelmed by the complexities of scientific methods – and to provide at least some insight into the work of scientists. Or maybe something deeper, that might be more of an equal partnership between art and science, in a two-way exchange of insights.

I was particularly intrigued by Rachel’s exploration of the memory of trauma, and the deep past revealed in the behaviour of worms, and their role as custodians of nature; of Turing’s morphogenesis, fractals and the emergence of self-similarity at many scales. A heady mix of ideas in the early stages of seeking expression.

David’s exploratory animations of moving through neural networks were also captivating.

As the scientists there noted, the purpose of the art may be not so much to precisely articulate new questions, but rather to help them stand back and see their science through fresh eyes, and maybe find unexpected connections.

In our modern world it has almost become an article of faith that science and art occupy two entirely distinct ways of seeing the world, but there was a time, as my friend Chris pointed out, when this distinction would not have been recognised.

Even within a particular department – be it mathematics or molecular biology – the division and sub-division of specialities makes it harder and harder for scientists to comprehend even what is happening in the next room. The funding of science demands a kind of determinism in the production of results which promotes this specialisation. It is a worrying trend, because it leaves little room for playfulness or inter-disciplinary collaboration.

This makes the Wellcome Trust’s support for the Gurdon Institute and for this Science-Art collaboration all the more refreshing. 

Some mathematicians have noted that even within the arcane worlds of number theory, group theory and the rest, it will only be through the combining of mathematical disciplines that some of the long-standing unresolved questions of mathematics will be solved.

In areas such as climate change, it was recognised in the late 1950s that we needed to bring together a diverse range of disciplines to get to grips with the causes and consequences of man-made global warming: meteorologists, atmospheric chemists, glaciologists, marine biologists, and many more.

Complex questions such as land-use and human civilisation show how we must broaden this even further – embracing geography, culture and even history – if we are really to understand how to frame solutions to climate change.

In many ways those (in my day) unloved disciplines such as geography show their true colours as great integrators of knowledge – from human geography to history, from glaciology to food production – and we begin to understand that a little humility is no bad thing when we try to understand complex problems. Inter-disciplinary working is not just a fad; it could be the key to unlocking complex problems that no single discipline can resolve.

Leonardo da Vinci was both artist and scientist. OK, so not a scientist in the modern sense – the sense, explored by David Wootton in his book The Invention of Science, that was ushered in by the Enlightenment – but surely a scientist in his ability to forensically observe the world and try to make sense of it. His art was part of his method of exploring the world: whether in the sinews of the human body or birds in flight, art and science were indivisible.

Since my retirement I have started to take up painting seriously. At school I chose science over art, but over the years have dabbled in painting but never quite made progress. Now, under the watchful eye of a great teacher, Alison Vickery, I feel I am beginning to find a voice. What she tells me hasn’t really changed, but I am finally hearing her. ‘Observe the scene, more than look at the paper’; ‘Experiment and don’t be afraid of accidents, because often they are happy ones’; the list of helpful aphorisms never leaves me.

A palette knife loaded with pigment scraped across a surface can give just the right level of variegation if not too wet and not too dry; there is a kind of science to it. The effect is to produce a kind of complexity that the human eye seems drawn to: imperfect symmetries of the kind we find alluring in nature, even while in mathematics we seek perfection.

Scientists and artists share many attributes.

At the meeting hosted by Kettle’s Yard, there was a discussion on what was common between artists and scientists. My list adapts what was said on the day: 

  • a curiosity and playfulness in exploring the world around them; 
  • ability to acutely observe the world; 
  • a fascination with patterns;
  • not afraid of failure;
  • dedication to keep going; 
  • searching for truth; 
  • deep respect for the accumulated knowledge and tools of their ‘art’; 
  • ability to experiment with new methods or innovative ways of using old methods.

How then are art and science different?  

Well, of course, the key reason is that they are asking different questions and seeking different kinds of answers.

In art, the question is often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’, and ‘How do I express this in a way that is interesting and compelling?’. If I see a tree, I see the sinews of the trunk and branches, and how the dappled light reveals fragmentary hints as to the form of the tree. I observe the patterns of dark and light in the canopy. A true rendering of colour is of secondary interest (this is not a photograph), except in as much as it helps reveal the complexity of the tree: making different greens by playing with mixtures of two yellows and two blues offers an infinity of greens, which is much more interesting than having tubes of green paint (I hardly ever buy green).

Artists do not have definite answers to unambiguous questions. It is OK for me to argue that J M W Turner was the greatest painter of all time, even while my friend vehemently disagrees. When I look at a painting (or sculpture, or film) and feel an emotional response, there is no need to explain it, even though we often seem obliged to put words to emotions, we know these are mere approximations.

In science (or experimental science at least), we ask specific questions, which can be articulated as a hypothesis that challenges the boundaries of our knowledge. We can then design experiments to test the hypothesis, and if we are successful (in the 1% of times that maybe we are lucky), we will have advanced the knowledge of our subject. Most times this is an incremental learning, building on a body of knowledge. Other times, we may need to break something down before building it up again (but unlike the caricature of science often seen on TV, science is rarely about tearing down a whole field of knowledge, and starting from scratch). 

When I see the tree, I ask, why are the leaves of Copper Beech trees deep purple in colour rather than green? Are the energy levels in the chlorophyll molecule somehow changed to produce a different colour or is a different molecule involved?

In science, the objective is to find definite answers to definite questions. That is not to say that the definite answer is in itself a complete answer to all the questions we have. When Schrödinger asked the question ‘What is Life?’, the role and structure of DNA were not known, but there were questions that he could ask and find answers to. This is the wonder of science; this stepping-stone quality.

I may find the answer as to why the Copper Beech tree’s leaves are not green, but what of the interesting question of why leaves change colour in autumn and how they change, not from one state (green) to another (brown), but through a complex process that reveals variegations of colour as Autumn unfolds? And what of a forest? How does a mature forest evolve from an immature one; how do pioneer trees give way to a complex ecology of varyingly aged trees and species over time? A leaf begs a question, and a forest may end up being the answer to a bigger question. Maybe we find that art, literature and science are in fact happy bedfellows after all.

As Feynman said, I can be both fascinated by something in the natural world (such as a rainbow) while at the same time seeking a scientific understanding of the phenomenon.

Nevertheless, it seems that while artists and scientists have so much in common, their framings struggle to align, and that in a way is a good thing. 

There is great work done in the illustration of scientific ideas, in textbooks and increasingly in scientific papers. I saw a recent paper on the impact of changes to the stratospheric polar vortex on climate, which was beautifully illustrated. But this is illustration, intended to help articulate those definite questions and answers. It is not art.

So what is the purpose of bringing artists into laboratories to inspire them; to get their response to the work being done there?

The answer, as they say, is on the tin (of this Gurdon Institute collaborative project): It is an experiment.

The hypothesis is that if you take three talented and curious young artists and show them some leading edge science that touches on diverse subjects, good things happen. Art happens.

Based on the short preview of the work being done which I attended, good things are already happening and I am excited to see how the collaboration evolves.

Here are some questions inspired in my mind by the discussion:

  • How do we understand the patterns in form in the ways that Turing wrote about, based on the latest research? Can we explore ‘emergence of form’ as a topic that is interesting, artistically and scientifically?
  • In the world of RNA epigenetics, can what was previously thought of as ‘junk DNA’ play a part in the life of creatures, even humans, in the environment they live in? Can we explore the deep history of our shared genotype, even given our divergent phenotypes? Will the worm teach us how to live better with our environment?
  • Our identity is formed by memory, and as we get older we begin to lose our ability to make new memories, while older ones often stay fast, but not always. Surely here there is a rich vein for exploring the artistic and scientific responses to diseases like Alzheimer’s?

Scientists are dedicated and passionate about their work, like artists. A joint curiosity drives this new collaborative Gurdon Institute project.

The big question for me is this: can art reveal to scientists new questions, or new framings of old questions, that will advance the science in novel ways? Can unexpected connections be revealed or collaborations be inspired?

I certainly hope so.

P.S. The others in my troop did get to do the house visit after all, and it was wonderful, I hear. I missed it because I was too busy chatting to the scientists and artists after the panel discussion; and I am so grateful to have spent time with them.

(c) Richard W. Erskine, 2018

 


Anatomy of a Conspiracy Theory

Normally, as with 9/11, a conspiracy theory involves convoluted chains of reasoning so tortuous that it can take a while to determine how the conjuring trick was done: where the lie was implanted. But often, the anatomy of a conspiracy theory takes the following basic form:

Part 1 is a plausible but flawed technical claim that aims to refute an official account, and provides the starting point for Part 2, which is a multi-threaded stream of whataboutery. To connect Parts 1 and 2, a sleight of hand is performed. This is the anatomy of a basic conspiracy theory.

I have been thinking about this because a relative of mine asked me for my opinion about a video that turns out to be a good case study in this form of conspiracy theory. It was a video posted by a Dr Chris Busby relating to the nerve agent used to poison the Skripals.

So, against my better judgment, I sat through the video.

Dr Busby, who comes across initially as quite affable, proceeds to outline his experience at length. He says he was employed at the Wellcome Research Laboratories in Beckenham (see Note 1), where he worked, in his words,

“… on the physical chemistry of pharmaceutical compounds or small organic compounds”, and he used “spectroscopic and other methods to determine the structure of these substances, as they were made by the chemists”. 

I have no reason to doubt his background, but equally have not attempted to verify it either; in any case, this is immaterial because I judge people on their arguments not their qualifications.

I want to pass over Busby’s first claim – that a state actor was not necessarily involved because (in his view):

“any synthetic organic chemist could knock up something like that without a lot of difficulty”

… which is questionable, but is not the main focus of this post. I do have a few observations on this subsidiary claim in Note 2.

He explains correctly that a Mass Spectroscopy spectrum (let’s abbreviate this as ‘spectrum’ in what follows) is a pattern of the masses of the ionised fragments created when a substance passes through the instrument. This pattern is characteristic of the molecule under investigation.

So a spectrum “identifies a material”. So far, so good.

He now makes his plausible but flawed technical claim. I don’t want to call it a lie because I will assume Dr Busby made it in good faith, but it does undermine his claim to be an ‘expert’, and was contained in the following statement he made:

“… but in order to do that, you need to have a sample of the material, you need to have synthesized the material”

In brief we can summarise the claim as follows: In order for you to identify a substance, you need to have synthesised it.

Curiously, later in the video he says that the USA manufactured the A-234 strain that is allegedly involved (see Note 3) and put the spectrum on the NIST database, but then later took it down. 

It does not occur to Dr Busby that Porton Down could have taken a copy of data from NIST before it was removed and used that as the reference spectrum, thereby blowing a huge hole in Busby’s chain of logic (also, see Note 4).

But there is a more fundamental reason why the claim is erroneous even if the data had never existed.

One of the whole points of having a technique like mass spectroscopy is precisely to help researchers in determining the structures of unknown substances, particularly in trace quantities where other structural techniques cannot be used (see Note 5).

To show you why the claim is erroneous, here is an example of a chemistry lecturer taking his students through the process of analysing the spectrum of a substance, in order to establish its structure (Credit: Identify a reasonable structure for the pictured mass spectrum of an unknown sample, Professor Heath’s Chemistry Channel, 6th October 2016).

This method uses knowledge of chemistry, logic and arithmetic to ‘reverse engineer’ the chemical structure, based on the masses of the fragments.

Now it is true that with a library of spectra for known substances, the analysis is greatly accelerated, because we can then compare a sample’s spectrum with ones in the library. This might be called ‘routine diagnostic mass spectroscopy’.
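To illustrate what that routine library comparison amounts to, here is a minimal sketch in Python. The peak lists are invented, and real library-search software additionally handles mass tolerances, intensity weighting and impurity peaks; the point is only the shape of the procedure:

```python
import math

def cosine_similarity(spec_a: dict, spec_b: dict) -> float:
    """Compare two mass spectra given as {m/z: relative intensity} maps.

    Returns 1.0 for identical peak patterns and 0.0 for no shared peaks.
    """
    dot = sum(i_a * spec_b.get(mz, 0.0) for mz, i_a in spec_a.items())
    norm_a = math.sqrt(sum(i * i for i in spec_a.values()))
    norm_b = math.sqrt(sum(i * i for i in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented reference library: substance name -> characteristic peak pattern.
library = {
    "substance A": {43: 100, 58: 85, 71: 20},
    "substance B": {39: 60, 65: 100, 91: 95},
}

unknown = {43: 98, 58: 80, 71: 25}  # spectrum of the sample under test

best_match = max(library, key=lambda name: cosine_similarity(unknown, library[name]))
print(best_match)  # -> substance A
```

The crucial point for the argument here is that this comparison step is a convenience, not a prerequisite: as the lecture example above shows, a structure can be deduced from the fragment masses without any reference sample at all.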

He talked about having done a lot of work on pharmaceuticals that had been synthesised “in Spain or in India”, and clearly here the mode of application would have been the comparison of known molecules (in this case manufactured by Wellcome) with samples retrieved from other sources – possibly trying to break a patent – whose origin was given away by impurities in the sample (see Note 6).

It then struck me that he must have spent so much time doing this routine diagnostic mass spectroscopy that he is now presenting it as the only way in which you can use mass spectroscopy to identify a substance.

He seems to have forgotten the more general use of the method by scientists.

This flawed assumption underpins the scientific and logical chain of reasoning used by Dr Busby in this video.

The sleight of hand arrives when he uses the phrase ‘false flag’ at 6’55” into a 10’19” video.  

The chain of logic has been constructed to lead the viewer to this point. Dr Busby was in effect saying: ‘to test for the agent, you need to have made it; if you can make it, maybe it got out; and maybe the UK (or US) was responsible for using it!’

This is an outrageous claim but he avoids directly accusing the UK or US Governments; and this is the sleight of hand. He leaves the viewer to fill in the gap.

This then paves the way for Part 2 of his conspiracy theory which now begins in earnest on the video. He cranks up the rhetoric and offers up an anti-American diatribe, full of conspiracy ideation.

He concludes the video as follows:

“There’s no way there’s any proof that that material that poisoned the Skripals came from Russia. That’s the take-home message.”

On the contrary, the message I took away is that it is sad that an ex-scientist is bending and abusing scientific knowledge to concoct conspiracy theories, to advance his political dogma, and helping to magnify the Kremlin’s whataboutery.

Now, Dr Busby might well respond by saying “but you haven’t proved the Russians did it!”.  No, but I would reply ‘you haven’t proved that they didn’t, and as things stand, it is clear that they are the prime suspect’; ask any police inspector how they would assess the situation.

My purpose here was not to prove anything, but to discuss the anatomy of conspiracy theories in general, and debunk this one in particular.

But I do want to highlight one additional point: those that are apologists for the Russian state will demand 100% proof the Russians did it, but are lazily accepting of weak arguments – including Dr Busby’s video – that attempt to point the finger at the UK or US Governments. This is, at least, double standards.

By all means present your political views and theories on world politics, Dr Busby – the UK is a country where we can express our opinions freely – but please don’t dress them up with flawed scientific reasoning masquerading as scientific expertise.

Hunting down a plausible but flawed technical claim is not always as easy as in the case study above, but remember the anatomy, because it is usually easy to spot the sleight of hand that then connects with the main body of a conspiracy theory.

We all need to be inoculated against this kind of conspiracy ideation, and I hope my dissection of this example is helpful to people.

——

© Richard W. Erskine, 2018

NOTES

Note 1: The Wellcome Research Laboratories in Beckenham closed in 1995, when the merged company GlaxoWellcome was formed; after further mergers it transformed into the current leading global pharmaceutical entity, GSK.

Note 2: Busby’s first claim is that the nerve agent identified by Porton Down is a simple organic compound and therefore easy for a chemist to synthesise. Gary Aitkenhead, the chief executive of the government’s Defence Science and Technology Laboratory (DSTL), said on Sky News (here reported in The Guardian):

“It’s a military-grade nerve agent, which requires extremely sophisticated methods in order to create – something that’s probably only within the capabilities of a state actor.”

But the difficulty of synthesising a molecule is not simply a function of the number of atoms in the molecule, but rather of the synthetic pathway and, in the case of a nerve agent, the practical difficulties involved in making the stuff in a safe environment, then preparing it in some ‘weaponized’ formulation.

Vil Mirzayanov, a chemist who worked on Novichok, has said that this process is extremely difficult. Dr Busby thinks he knows better, but not being a synthetic chemist (remember, he had chemists making the samples he analysed), he cannot claim expertise on the ease or difficulty of nerve agent synthesis.

The UK position is that the extremely pure nature of the samples found in Salisbury point to a state actor. Most of us, and I would include Dr Busby, without experience of the synthesis of the nerve agent in question and its formulation as a weapon, cannot really comment with authority on this question.

Simply saying it is a simple molecule really doesn’t stand up as an argument.

Note 3: While the Russian Ambassador to the UK claims that the strain is A-234, neither the UK Government, nor Porton Down, nor the OPCW have stated which strain was used, and so the question regarding what strain or strains the USA might or might not have synthesized, is pure speculation.

Note 4: He says that if the USA synthesised it (the strain of nerve agent assumed to have been used), then it is possible that Porton Down did so as well. I am not arguing this point either way. The point of this post is to challenge what Dr Busby presents as an unassailable chain of logic, but which is nothing of the sort.

Note 5: There are many other techniques used in general for structural work, but not all are applicable in every situation. For large complex biological molecules, X-ray crystallography has been very successful, and more recently CryoEM has matured to the point where it is taking over this role. Neither could have been used in the case of trace quantities of a nerve agent.

Note 6: He also talks about impurities that can show up in a spectrum and using these as a way to identify a laboratory of origin (in relation to his pharmaceuticals experience), but this is a separate argument, which is irrelevant if the sample is of high purity, which is what OPCW confirmed in relation to the nerve gas found in Salisbury.

.. o O o ..

 

 


Cambridge Analytica and the micro-targeting smokescreen

I have an hypothesis.

The Information Commissioner’s Office (ICO) won’t find any retained data at Cambridge Analytica (CA) gleaned from Facebook users. They might even find proof it was deleted in a timely manner.

So, would that mean CA did not provide an assist to the Trump campaign? No.

Because the analysis of all that data would have been used to provide knowledge and insight into which buttons to push in the minds of voters, and crucially, in which States this would be most effective.

At that point you can delete all the source Facebook data.

The knowledge and insight would have powered a broad spectrum campaign using good old fashioned media channels and social media. At this point, it is not micro-targeting, but throwing mud knowing it will stick where it matters.

Maybe the focus on micro-targeting is a smokescreen, because if the ICO don’t find retained data, then CA can say “see, we are innocent of all charges of interference”, when in fact the truth could be quite the opposite.

It is important the ICO, Select Committees in the UK Parliament and, when they get their act together, committees on Capitol Hill, ask the right questions, and do not succumb to smokescreens.

But then, that is only an hypothesis.

What do I know?

(c) Richard W. Erskine, 2018


The Myth of Facebook’s Free Lunch

We all know that there is no such thing as a free lunch, don’t we?

Except when we get the next offer of a free lunch. It’ll be different this time, because they are so nice and, well, what could go wrong?

The Facebook offer was always the offer of a free lunch: no need to pay anything for your account, just share and share alike.

In fact the encouragement to be as open and sharing as possible was made easier by the byzantine complexity of the access controls (intended to allow people to be more private). It never occurred to Facebook that humans have complex lives, where family friends are a non-overlapping set of people from tennis club friends, or the ‘stop the fracking’ friends!

No, there is a binary reductionism to the happy-clappy ‘the world is my friend’ dogma of social media, of which Facebook is the prime archetype.

Of course, the business model was always to monetise our connectivity. We view a few pages on artist materials, and suddenly we are deluged by adverts for artist materials. Basic stuff you might say, and often it is: small-minded big data. But it feels like, and is, an intrusion. Facebook wants to take business away from WPP and the rest, and uses the social desire to connect as the vehicle for gaining a better insight into our lives than traditional marketing can achieve. Why did Facebook not make this clear to people from the start?

The joke was always that marketing companies know that 50% of their spending is wasted but don’t know which parts make up that 50%.

Facebook will now say that they know.

Don’t get me wrong, I love Facebook, because it reunited me with a long lost ‘other’ family. That is another story but I am eternally grateful to Facebook for making that connection. It also provides the town I live in the ability to connect over local issues. It can be a force for good.

But the most egregious issue that Facebook is now facing (and seem in denial about) is that the bill for the lunch is now proving to be exceptionally high indeed.

If Facebook data effectively helped Cambridge Analytica give the Trump and Brexit campaigns even a marginal assist – as is now alleged – that could have been crucial, as both won by narrow margins.

We cannot go back to a pre-digital world.

We need trust in institutions and in what will happen to our data, and not just the snaps we took of the new kitten playing on the sofa. We want the benefits that combining genomics and clinical data will bring in revolutionising medicine. We want to develop ground-up social enterprises to address issues like climate change. We need to be able to move beyond primitive cloud fileshares or private storage devices to a truly trusted, long-term repository for personal data; guaranteed to a level no less than a National Archive.

There are many reasons we need community governed, rigorously audited and regulated data, to help in many aspects of our personal lives, social enterprises, and as safe places for retention of knowledge and cultural assets in the digital world.

Even without the Cambridge Analytica scandal, the geek-driven models of Facebook, Google and the rest betray a level of naivety and lack of insight into this challenge which is breathtaking.

Call it Web 4.0 or choose a higher number if you like.

But what this episode proves is that the current generation of social media is barely a rough draft on what society needs in the digital world of the 21st Century.


Communicating Key Figures from IPCC Reports to a Wider Public

If you were to think about ranking the most important Figures from the IPCC Fifth Assessment Report, I would not be surprised if the following one (SPM.10) emerged as a strong candidate for the number one slot:

IPCC AR5 Figure SPM.10

This is how the Figure appears in the main report, on page 28 (in the Summary for Policymakers) of The Physical Basis Report (see References: IPCC, 2013). The Synthesis Report includes a similar figure with additional annotations.

Many have used it in talks because of its fundamental importance (for example, Sir David King in his Walker Institute Annual Lecture (10th June 2015), ahead of COP21 in Paris). I have followed this lead, and am sure that I am not alone.

This Figure shows an approximately linear[1] relationship between the cumulative carbon dioxide we emit[2], and the rise in global average surface temperature[3] up to 2100. It was crucial to discussions on carbon budgets held in Paris and the goal of stabilising the climate.
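For readers who want the relationship as a formula: this approximate proportionality is what the literature calls the transient climate response to cumulative carbon emissions (TCRE). A hedged statement of it, using AR5’s likely range, is:

```latex
\Delta T \;\approx\; \mathrm{TCRE} \times E_{\mathrm{cum}},
\qquad \mathrm{TCRE} \approx 0.8\text{--}2.5\ ^{\circ}\mathrm{C} \text{ per } 1000\ \mathrm{GtC},
```

where \(E_{\mathrm{cum}}\) is the total cumulative carbon emitted.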

I am not proposing animating this Figure in the way discussed in my previous essay, but I do think its importance warrants additional attention to get it out there to a wider audience (beyond the usual climate geeks!).

So my question is:

“Does it warrant some kind of pedagogic treatment for a general audience (and dare I say, for policy-makers who may themselves struggle with the density of information conveyed)?”

My answer is yes, and I believe that the IPCC, as guardians of the integrity of the report findings, are best placed to lead such an effort, albeit supported by specialist science communication skills.

The IPCC should not leave it to bloggers and other commentators to furnish such content, as key Figures such as this are fundamental to the report’s findings, and need to be as widely understood as possible.

While I am conscious of Tufte’s wariness regarding Powerpoint, I think that the ‘build’ technique – when used well – can be extremely useful in unfolding the information, in biteable chunks. This is what I have tried to do with the above Figure in a recent talk. I thought I would share my draft attempt.

It can obviously do with more work, and the annotations represent my emphasis and use of language[4]. Nevertheless, I believe I was able to truthfully convey the key information from the original IPCC Figure more successfully than I have before; taking the audience with me, rather than scaring them off.

So here goes, taken from a segment of my talk … my narrative, to accompany the ‘builds’, is in italics …

Where are we now?

“There is a key question: what is the relationship between the peak atmospheric concentration and the level of warming, compared to a late 19th century baseline, that will result, by the end of the 21st century?”

“Let’s start with seeing where we are now, which is marked by a X in the Figure below.” 

Unpacking SYR2.3 - Build 1

“Our cumulative man-made emissions of carbon dioxide (CO2) have to date been nearly 2000 billion tonnes (top scale above)”

“After noting that 50% of this remains in the atmosphere, this has given rise to an increase in the atmospheric concentration from its long-standing pre-industrial value of 280 parts per million to its current value, which is now about 400 parts per million (bottom scale above).”

“This in turn has led to an increase in averaged global surface temperature of 1°C above the baseline of 1861 to 1880 (vertical scale above).”
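As a rough cross-check of these numbers (using the standard conversion of about 7.8 GtCO2 per ppm of atmospheric CO2, together with the roughly 50% airborne fraction just mentioned):

```latex
\frac{2000\ \mathrm{GtCO_2} \times 0.5}{7.8\ \mathrm{GtCO_2\ ppm^{-1}}} \approx 128\ \mathrm{ppm},
\qquad 280\ \mathrm{ppm} + 128\ \mathrm{ppm} \approx 408\ \mathrm{ppm},
```

which is consistent with the “about 400 parts per million” quoted above.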

Where might we be in 2100?

“As we add additional carbon dioxide, the temperature will rise broadly in proportion to the increased concentration in the atmosphere. There is some uncertainty between “best case” and “worst case” margins of error (shown by the dashed lines).” 

Unpacking SYR2.3 - Build 2

“By the end of the century, depending on how much we emit and allowing for uncertainties, we can end up anywhere within the grey area shown here. The question marks (“?”) illustrate where we might be by 2100.”

Can we stay below 2°C?

“The most optimistic scenario included in the IPCC’s Fifth Assessment Report (AR5) was based on the assumption of a rapid reduction in emissions, and a growing role for the artificial capture of carbon dioxide from the atmosphere (using a technology called BECCS).” 

Unpacking SYR2.3 - Build 3

“This optimistic scenario would meet the target agreed by the nations in Paris, which is to limit the temperature rise to 2°C.”

“We effectively have a ‘carbon budget’: an amount of fossil fuels that can be burned while still keeping us below 2°C.”

“The longer we delay dramatically reducing emissions, the faster the drop would need to be in our emissions later, as we approach the end of the ‘carbon budget’.” 

“Some argue that we are already beyond the point where we can realistically move fast enough to make this transition.” 

“Generally, experts agree it is extremely challenging, but still not impossible.”

Where will we be in 2100? – Paris Commitments

“The nationally determined contributions (or NDCs) – the amounts by which carbon dioxide emissions will fall – that the parties to the Paris Agreement put forward have been totted up, and they would, if implemented fully, bring us to a temperature rise of between 2.5 and 3.5°C (and an atmospheric concentration about twice that of pre-industrial levels).”

Unpacking SYR2.3 - Build 4

 “Now, the nations are committed to increase their ‘ambition’, so we expect that NDCs should get better, but it is deeply concerning that at present, the nations’ current targets are (1) not keeping us unambiguously clear of catastrophe, and (2) struggling to be met. More ambition, and crucially more achievement, is urgent.”

“I have indicated the orange scenarios as “globally severe”, but for many regions “catastrophic” (though some, for example Xu and Ramanathan[5], would use the term “catastrophic” for any warming over 3°C, and “unknown” for warming above 5°C). The IPCC are much more conservative in the language they use.”

Where will we be in 2100? – Business As Usual Scenario

“The so-called ‘business as usual’ scenario represents on-going use of fossil fuels, continuing to meet the majority of our energy needs, in a world with an increasing population and increasing GDP per capita, and consequently a continuing growth in CO2 emissions.”

Unpacking SYR2.3 - Build 5

“This takes global warming to an exceptionally bad place, with a (globally averaged) temperature rise of between 4 and 6°C, where atmospheric concentrations will have risen to between 2.5 and 3 times the pre-industrial levels.”

“The red indicates that this is globally catastrophic.”

“If we go above 5°C warming we move, according to Xu and Ramanathan, from a “catastrophic” regime to an “unknown” one. I have not tried to indicate this extended vocabulary on the diagram, but what is clear is that the ‘business as usual’ scenario is really not an option, if we are paying attention to what the science is telling us.”

That’s it. My draft attempt to convey the substance and importance of Figure SPM.10, which I have tried to do faithfully; albeit adding the adjectives “optimistic” etc. to characterise the scenarios.

I am sure the IPCC could do a much better job than me at providing a more accessible presentation of Figure SPM.10 and indeed, a number of high ranking Figures from their reports, that deserve and need a broader audience.

© Richard W. Erskine

Footnotes

  1. The linearity of this relationship was originally discussed in Myles Allen et al (2009), and this and other work has been incorporated in the IPCC reports. Also see Technical Note A below.
  2. About half of which remains in the atmosphere, for a very long time.
  3. Eventually, after the planet reaches a new equilibrium, a long time in the future. Also see Technical Note B below.
  4. There are different opinions on what language to use – ‘dangerous’, ‘catastrophic’, etc. – and at what levels of warming to apply this language. The IPCC is conservative in its use of language, as is customary in the scientific literature. Some would argue that in wanting to avoid the charge of being alarmist, it is in danger of obscuring the seriousness of the risks faced. In my graphics I have tried to remain reasonably conservative in the use of language, because I believe things are serious enough, even when a conservative approach is taken.
  5. Now, Elizabeth Kolbert has written in the New Yorker:

In a recent paper in the Proceedings of the National Academy of Sciences, two climate scientists—Yangyang Xu, of Texas A. & M., and Veerabhadran Ramanathan, of the Scripps Institution of Oceanography—proposed that warming greater than three degrees Celsius be designated as “catastrophic” and warming greater than five degrees as “unknown??” The “unknown??” designation, they wrote, comes “with the understanding that changes of this magnitude, not experienced in the last 20+ million years, pose existential threats to a majority of the population.”

References

  • IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
  • IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881pp.
  • Myles Allen et al (2009), “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163-1166.
  • Kirsten Zickfeld et al (2016), “On the proportionality between global temperature change and cumulative CO2 emissions during periods of net negative CO2 emissions”, Environ. Res. Lett. 11 055006

Technical Notes

A. Logarithmic relationship?

For those who know about the logarithmic relationship between added CO2 concentration and the ‘radiative forcing’ (giving rise to warming) – and many well-meaning contrarians seem to take succour from this fact – the linear relationship in this figure may at first sight seem surprising.

The reason for the linearity is nicely explained by Marcin Popkiewicz in his piece “If growth of CO2 concentration causes only logarithmic temperature increase – why worry?”.

The relative warming (between one level of emissions and another) is related to the ratio of this logarithmic function, and that is approximately linear over the concentration range of interest.

In any case, it is worth noting that CO2 concentrations have been increasing exponentially, and a logarithm of an exponential function is a linear function.
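To spell that last point out, using the standard simplified expression for CO2 radiative forcing (Myhre et al., 1998):

```latex
\Delta F = 5.35 \ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}};
\qquad C(t) = C_0\,e^{kt} \;\Rightarrow\; \Delta F(t) = 5.35\,k\,t\ \mathrm{W\,m^{-2}},
```

so an exponentially growing concentration gives a forcing – and hence a warming – that grows linearly in time.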

There is on-going work on wider questions. For example, to what extent ‘negative emissions technology’ can counteract warming that is in the pipeline?

Kirsten Zickfeld et al (2016) is one such paper, which “…[suggests that] positive CO2 emissions are more effective at warming than negative emissions are at subsequently cooling”. So we need to be very careful in assuming we can reverse warming that is in the pipeline.

B. Transient Climate Response and Additional Warming Commitment

The ‘Transient Climate Response’ (TCR) reflects the warming that results when CO2 is added at 1% per year, which for a doubling of the concentration takes 70 years. This is illustrated quite well in a figure from a previous report (Reference: IPCC, 2001):

TAR Figure 9.1
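The 70-year figure is simply compound-growth arithmetic: at 1% per year, the concentration doubles when

```latex
1.01^{\,n} = 2 \;\Longrightarrow\; n = \frac{\ln 2}{\ln 1.01} \approx 70\ \text{years}.
```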

The warming that results from this additional concentration of CO2 occurs over the same time frame. However, this does not include all the warming that will eventually result, because the earth system (principally the oceans and atmosphere) will take a long time to reach a new equilibrium, where all the flows of energy are brought back into a (new) balance. This will take at least 200 years (for lower emission scenarios) or much longer for higher emission levels. This additional warming commitment must be added to the TCR. However, the TCR nevertheless represents perhaps 70% of the overall warming, and remains a useful measure when discussing policy options over the 21st Century.

This discussion excludes more uncertain and much longer term feedbacks involving, for example, changes to the polar ice sheets (and consequentially, the Earth’s albedo), release of methane from northern latitudes or methane clathrates from the oceans. These are not part of the ‘additional warming commitment’, even in the IPCC 2013 report, as they are considered too speculative and uncertain to be quantified.

. . o O o . .


Animating IPCC Climate Data

The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general audience. They are not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes) produced some guidelines a while ago on better use of language and diagrams.

Coming out of the HELIX project, and through a series of workshops, a collaboration with the Tyndall Centre and Climate Outreach has produced a comprehensive guide (Guide With Practical Exercises to Train Researchers In the Science of Climate Change Communication).

The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.

Much of this guidance concerns the ‘soft’ aspects of communication: how communicators view themselves; understanding the audience; building trust; coping with uncertainty; etc.

Some of this reflects ideas that are useful not just for scientific communication but for almost any technical presentation, in any sector; that does not diminish their importance here.

This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.

I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.

In many years of being involved in presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communicative potential, but also the risks, of diagrams. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!

The unwritten rules governing the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:

“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”

He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.

Tufte has been highly critical of the over-reliance on PowerPoint for technical reporting (not just presentations) at NASA, because the form of the content degrades the narrative that should be an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.

There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.

The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.

It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, etc. It might seem difficult to condense this material without loss of important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1500 pages long.

Nevertheless, the IPCC endeavours to help policy-makers with summaries and a synthesis report, providing the essential underlying knowledge they need to inform their discussions on actions in response to the science.

However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.

This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.

So yes, it can be problematic to use artistic flair to improve the comprehensibility of the findings, at the risk of losing the nuance and caution that are a hallmark of science. The countervailing risk is that people do not really ‘get it’, and do not appreciate what they are seeing.

We saw with the Challenger reports that people did not appreciate the issue with the O-rings, especially when key facts were buried five levels deep in indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings were lost in a fog of detail.

That is why any attempt to improve the summaries for policy-makers and the general public must continue to involve those who are responsible for the overall integrity and consistency of the different products, and not simply be hived off to a separate group of ‘creatives’ who would lack knowledge and insight into the nuances that need to be respected. But those complementary skills – data visualizers, graphic artists, and others – need to be included in this effort to improve science communications. There is also a need for people able to critically evaluate the pedagogic value of the output (along the lines of Tufte), to ensure the results really inform, and do not confuse.

Some individuals have taken to social media to present their own examples of how to present information, often employing animation (something that is clearly not possible on the printed page, or its digital analogue, the PDF document). Perhaps the best known example to date is Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:

[Animation: Ed Hawkins’ climate spiral, 2017]

This animation went viral, and was even featured as part of the Rio Olympics Opening Ceremony. This and other spiral animations can be found at the Climate Lab Book site.
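
To give a feel for how such a spiral could be constructed, here is a minimal sketch in Python/matplotlib – emphatically not Hawkins’ own code, and with synthetic data standing in for the real HadCRUT observations he used. Each month advances the angle by a twelfth of a turn, and the temperature anomaly sets the radius.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic monthly global-mean temperature anomalies (deg C); the warming
# trend and the noise here are invented purely for illustration.
rng = np.random.default_rng(1)
n_months = 168 * 12                    # 168 years of monthly values
anomalies = np.linspace(-0.3, 1.0, n_months) + rng.normal(0.0, 0.08, n_months)

# One month = one-twelfth of a turn; the radius encodes the anomaly
# (offset so that all radii are positive).
theta = np.arange(n_months) * 2 * np.pi / 12
radius = anomalies + 1.5

ax = plt.subplot(projection="polar")
ax.plot(theta, radius, lw=0.5, color="tab:red")
ax.set_yticklabels([])                 # hide the radial tick labels
ax.set_title("Climate spiral sketch (synthetic data)")
plt.show()
```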

There are now a number of other excellent producers of animations. A few examples follow.

Here, Kevin Pluck (@kevpluck) illustrates the link between rising carbon dioxide levels and rising mean surface temperature since 1958 (the year in which direct and continuous measurements of carbon dioxide were pioneered by Keeling).

Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.

Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.

A final example along the same lines is from John Kennedy (@micefearboggis); it is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar, the greater the rise in temperature at that location relative to the temperature baseline there (the departure from the baseline is what scientists call the ‘anomaly’). The animation makes clear that the greatest warming is in the Arctic, which is warming faster than the rest of the planet; this is one of its big takeaways.
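
For anyone unfamiliar with the term, an anomaly is simple to compute: subtract a fixed baseline from each observation. Here is a minimal sketch with invented station data and a 1951–1980 baseline (one common choice of baseline period, used for example by NASA GISS):

```python
import numpy as np

# Hypothetical annual mean temperatures (deg C) at a single location,
# 1900-2017; the values are invented purely to illustrate the calculation.
rng = np.random.default_rng(0)
years = np.arange(1900, 2018)
temps = 14.0 + 0.008 * (years - 1900) + rng.normal(0.0, 0.2, years.size)

# The anomaly is the departure from a fixed local baseline, here the
# mean over 1951-1980 at the same location.
baseline = temps[(years >= 1951) & (years <= 1980)].mean()
anomalies = temps - baseline
print(anomalies[-5:])  # recent years sit above the baseline
```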

These examples of animation are clearly not dumbing down the data – far from it. They improve the chances of the general public engaging with the data. This kind of animation provides an entry point for those wanting to learn more; they can then move on to a narrative treatment, placing the animation in context, confident that they have grasped the essential information.

If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above, and to reveal the essential knowledge that needs to be communicated.

(c) Richard W. Erskine, 2018
