
Experiments in Art & Science

My wife and I were on our annual week-end trip to Cambridge to meet up with my old Darwinian friend Chris and his wife, for the usual round of reminiscing, punting and all that. On the Saturday (12th May) we decided to go to Kettle’s Yard to see the house and its exhibition and take in a light lunch.

As we were about to get our (free) tickets for the house visit, we saw people in T-shirts publicising a Gurdon Institute special event in partnership with Kettle’s Yard that we had been unaware of:

Experiments in Art & Science

A new collaboration between three contemporary artists 

and scientists from the Gurdon Institute, 

in partnership with Kettle’s Yard

The three artists in question were Rachel Pimm, David Blandy and Laura Wilson, each responding to work being done at the Institute’s labs.

This immediately grabbed our attention and we changed tack, and went to the presentation and discussion panel, intrigued to learn more about the project.

The Gurdon Institute does research exploring the relationship between human disease and development, through all stages of life. It uses the tools of molecular biology, including model systems that share a lot of their genetic make-up with humans. There were fascinating insights into how the environment can influence creatures, in ways that force us to relax Crick’s famous ‘Central Dogma’. But I am jumping into the science of what I saw; the purpose of this essay is to explore the relationship between art and science.

I was interested to learn if this project was about making the science more accessible – to draw in those who may be overwhelmed by the complexities of scientific methods – and to provide at least some insight into the work of scientists. Or maybe something deeper, that might be more of an equal partnership between art and science, in a two-way exchange of insights.

I was particularly intrigued by Rachel’s exploration of the memory of trauma, and the deep past revealed in the behaviour of worms, and their role as custodians of nature; of Turing’s morphogenesis, fractals and the emergence of self-similarity at many scales. A heady mix of ideas in the early stages of seeking expression.

David’s exploratory animations of moving through neural networks were also captivating.

As the scientists there noted, the purpose of the art may be not so much to articulate new questions precisely, but rather to help them stand back and see their science through fresh eyes, and maybe find unexpected connections.

In our modern world it has almost become an article of faith that science and art occupy two entirely distinct ways of seeing the world, but there was a time, as my friend Chris pointed out, when this distinction would not have been recognised.

Even within a particular department – be it mathematics or molecular biology – the division and sub-division of specialities makes it harder and harder for scientists to comprehend even what is happening in the next room. The funding of science demands a kind of determinism in the production of results, which promotes this specialisation. It is a worrying trend, because it leaves little room for playfulness or inter-disciplinary collaboration.

This makes the Wellcome Trust’s support for the Gurdon Institute and for this Science-Art collaboration all the more refreshing. 

Some mathematicians have noted that, even within the arcane worlds of number theory, group theory and the rest, it will only be through the combining of mathematical disciplines that some of the long-standing unresolved questions of mathematics will be solved.

In areas such as climate change it was recognised in the late 1950s that we needed to bring together a diverse range of disciplines to get to grips with the causes and consequences of man-made global warming: meteorologists, atmospheric chemists, glaciologists, marine biologists, and many more.

Complex questions such as land use and human civilisation show that we must broaden this even further, embracing geography, culture and even history, to really understand how to frame solutions to climate change.

In many ways those disciplines unloved in my day, such as geography, show their true colours as great integrators of knowledge – from human geography to history, from glaciology to food production – and we begin to understand that a little humility is no bad thing when we try to understand complex problems. Inter-disciplinary working is not just a fad; it could be the key that unlocks complex problems no single discipline can resolve.

Leonardo da Vinci was both artist and scientist. OK, not a scientist in the modern sense that David Wootton explores in his book The Invention of Science, ushered in by the Enlightenment, but surely a scientist in his ability to forensically observe the world and try to make sense of it. His art was part of his method for exploring the world: be it the sinews of the human body or birds in flight, art and science were indivisible.

Since my retirement I have started to take up painting seriously. At school I chose science over art, and over the years I dabbled in painting but never quite made progress. Now, under the watchful eye of a great teacher, Alison Vickery, I feel I am beginning to find a voice. What she tells me hasn’t really changed, but I am finally hearing her. ‘Observe the scene, more than look at the paper’; ‘Experiment and don’t be afraid of accidents, because often they are happy ones’; the list of helpful aphorisms never leaves me.

A palette knife loaded with pigment scraped across a surface can give just the right level of variegation if it is not too wet and not too dry; there is a kind of science to it. The effect is to produce a kind of complexity that the human eye seems drawn to: imperfect symmetries of the kind we find alluring in nature, even while in mathematics we seek perfection.

Scientists and artists share many attributes.

At the meeting hosted by Kettle’s Yard, there was a discussion on what was common between artists and scientists. My list adapts what was said on the day: 

  • a curiosity and playfulness in exploring the world around them;
  • an ability to observe the world acutely;
  • a fascination with patterns;
  • no fear of failure;
  • the dedication to keep going;
  • a search for truth;
  • a deep respect for the accumulated knowledge and tools of their ‘art’;
  • an ability to experiment with new methods, or innovative ways of using old ones.

How then are art and science different?  

Well, of course, the key reason is that they are asking different questions and seeking different kinds of answers.

In art, the questions are often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’ and ‘How do I express this in a way that is interesting and compelling?’. If I see a tree, I see the sinews of the trunk and branches, and how the dappled light reveals fragmentary hints as to the form of the tree. I observe the patterns of dark and light in the canopy. A true rendering of colour is of secondary interest (this is not a photograph), except in as much as it helps reveal the complexity of the tree: mixing two yellows and two blues offers an infinity of greens, which is much more interesting than a tube of green paint (I hardly ever buy green).

Artists do not have definite answers to unambiguous questions. It is OK for me to argue that J M W Turner was the greatest painter of all time, even while my friend vehemently disagrees. When I look at a painting (or sculpture, or film) and feel an emotional response, there is no need to explain it; even though we often feel obliged to put words to emotions, we know they are mere approximations.

In science (or experimental science at least), we ask specific questions, which can be articulated as a hypothesis that challenges the boundaries of our knowledge. We can then design experiments to test the hypothesis, and if we are successful (in the 1% of times when maybe we are lucky), we will have advanced the knowledge of our subject. Most of the time this is incremental learning, building on a body of knowledge. At other times, we may need to break something down before building it up again (but unlike the caricature of science often seen on TV, science is rarely about tearing down a whole field of knowledge and starting from scratch).

When I see the tree, I ask, why are the leaves of Copper Beech trees deep purple in colour rather than green? Are the energy levels in the chlorophyll molecule somehow changed to produce a different colour or is a different molecule involved?

In science, the objective is to find definite answers to definite questions. That is not to say that a definite answer is in itself a complete answer to all the questions we have. When Schrödinger asked the question ‘What is Life?’, the role and structure of DNA were not known, but there were questions that he could ask and find answers to. This is the wonder of science: this stepping-stone quality.

I may find the answer as to why the Copper Beech tree’s leaves are not green, but what of the interesting question of why leaves change colour in autumn, and how they change not from one state (green) to another (brown) but through a complex process that reveals variegations of colour as autumn unfolds? And what of a forest? How does a mature forest evolve from an immature one; how do pioneer trees give way to a complex ecology of varyingly aged trees and species over time? A leaf begs a question, and a forest may end up being the answer to a bigger question. Maybe we find that art, literature and science are happy bedfellows after all.

As Feynman said, I can be both fascinated by something in the natural world (such as a rainbow) while at the same time seeking a scientific understanding of the phenomenon.

Nevertheless, it seems that while artists and scientists have so much in common, their framings struggle to align, and that in a way is a good thing. 

There is great work done in the illustration of scientific ideas, in textbooks and increasingly in scientific papers. I saw a recent paper on the impact of changes to the stratospheric polar vortex on climate, which was beautifully illustrated. But this is illustration, intended to help articulate those definite questions and answers. It is not art.

So what is the purpose of bringing artists into laboratories to inspire them; to get their response to the work being done there?

The answer, as they say, is on the tin (of this Gurdon Institute collaborative project): It is an experiment.

The hypothesis is that if you take three talented and curious young artists and show them some leading edge science that touches on diverse subjects, good things happen. Art happens.

Based on the short preview of the work being done which I attended, good things are already happening and I am excited to see how the collaboration evolves.

Here are some questions the discussion inspired in my mind:

  • How do we understand the patterns in form that Turing wrote about, in the light of the latest research? Can we explore the ‘emergence of form’ as a topic that is interesting both artistically and scientifically?
  • In the world of RNA epigenetics, can what was previously dismissed as ‘junk DNA’ play a part in the life of creatures, even humans, as they respond to the environment they live in? Can we explore the deep history of our shared genotype, even given our divergent phenotypes? Will the worm teach us how to live better with our environment?
  • Our identity is formed by memory, and as we get older we begin to lose our ability to make new memories, while older ones often stay fast, though not always. Surely here there is a rich vein for exploring artistic and scientific responses to diseases like Alzheimer’s?

Scientists are dedicated and passionate about their work, like artists. A joint curiosity drives this new collaborative Gurdon Institute project.

The big question for me is this: can art reveal to scientists new questions, or new framings of old questions, that will advance the science in novel ways? Can unexpected connections be revealed or collaborations be inspired?

I certainly hope so.

P.S. The others in my party did get to do the house visit after all, and it was wonderful, I hear. I missed it because I was too busy chatting to the scientists and artists after the panel discussion; and I am so grateful to have spent time with them.

(c) Richard W. Erskine, 2018

 


Filed under Art & Science, Essay, Molecular Biology, Uncategorized

Animating IPCC Climate Data

The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general audience. It is not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes) produced some guidelines a while ago on better use of language and diagrams.

Coming out of the HELIX project, and through a series of workshops, a collaboration with the Tyndall Centre and Climate Outreach has produced a comprehensive guide (Guide With Practical Exercises to Train Researchers In the Science of Climate Change Communication).

The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.

Much of this guidance concerns the ‘soft’ aspects of communication: how communicators view themselves; understanding the audience; building trust; coping with uncertainty; etc.

Some of this reflects ideas that are useful not just to scientific communication, but almost any technical presentation in any sector, but that does not diminish its importance.

This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.

I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.

In many years of presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communication potential of diagrams, but also their risks. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!

The unwritten rules governing the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:

“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”

He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.

Tufte has been highly critical of the over reliance on Powerpoint for technical reporting (not just presentations) in NASA, because the form of the content degrades the narrative that should have been an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.

There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.

The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.

It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, and so on. It might seem difficult to condense this material without losing important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1,500 pages long.

Nevertheless, the IPCC endeavours to help policy-makers by providing them with summaries and also a synthesis report, to provide the essential underlying knowledge that policy-makers need to inform their discussions on actions in response to the science.

However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.

This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.

So yes, using artistic flair to improve the comprehensibility of the findings risks losing the nuance and caution that are a hallmark of science. The countervailing risk is that people do not really ‘get it’, and do not appreciate what they are seeing.

We have seen with the Challenger reports that people did not appreciate the issue with the O-rings, especially when key facts were buried in five levels of indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings were lost in a fog of complexity.

That is why any attempt to improve the summaries for policy-makers and the general public must continue to involve those responsible for the overall integrity and consistency of the different products, and not simply be hived off to a separate group of ‘creatives’ who would lack knowledge and insight into the nuances that need to be respected. But complementary skills – data visualisers, graphic artists, and others – need to be included in this effort to improve science communications. There is also a need for those able to critically evaluate the pedagogic value of the outputs (along the lines of Tufte), to ensure they really inform, and do not confuse.

Some individuals have taken to social media to present their own examples of how to present information, often employing animation (something that is clearly not possible on the printed page, or its digital analogue, the PDF document). Perhaps the best known example to date is Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:


This animation went viral, and was even featured as part of the Rio Olympics Opening Ceremony. This and other spiral animations can be found at the Climate Lab Book site.
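The construction behind such a spiral is simple to sketch. The following is a minimal illustration (not Ed Hawkins’ actual code, and using made-up anomaly values): each year is one turn around the circle, the month sets the angle, and the temperature anomaly sets the radius.

```python
import math

def spiral_coords(monthly_anomalies, r0=1.0):
    """Map a flat list of monthly anomalies to (angle, radius) points.

    Each month advances the angle by 1/12 of a turn; the anomaly is
    added to a base radius r0 so that negative anomalies still plot.
    """
    points = []
    for i, anomaly in enumerate(monthly_anomalies):
        theta = 2 * math.pi * (i % 12) / 12  # month -> angle
        r = r0 + anomaly                     # anomaly -> radius
        points.append((theta, r))
    return points

# Two years of made-up data with a steady warming trend
synthetic = [0.05 * i / 12 for i in range(24)]
points = spiral_coords(synthetic)
```

Feeding these points to a polar plot, one frame per month, yields a curve that spirals outwards as the anomalies grow: the outward drift is the warming signal.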

There are now a number of other great producers of animations. A few examples follow.

Here, Kevin Pluck (@kevpluck) illustrates the link between rising carbon dioxide levels and rising mean surface temperature since 1958 (the year when direct and continuous measurements of carbon dioxide were pioneered by Keeling).

Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.

Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.

A final example along the same lines is from John Kennedy (@micefearboggis), which is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar the higher the increase in temperature at that location, relative to the temperature baseline at that location (which scientists call the ‘anomaly’). This is why we see the greatest warming in the Arctic, as it is warming proportionally faster than the rest of the planet; this is one of the big takeaways from this animation.
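The ‘anomaly’ idea underpinning these animations is worth pinning down. A minimal sketch, using hypothetical numbers: subtract each location’s own baseline mean, so that places with very different absolute temperatures can be compared on one scale.

```python
def anomalies(series, baseline):
    """Departures of a temperature series from its baseline mean."""
    baseline_mean = sum(baseline) / len(baseline)
    return [t - baseline_mean for t in series]

# Hypothetical local record; the baseline is the first three values
temps = [10.0, 10.2, 10.1, 10.5, 10.9]
anoms = anomalies(temps, temps[:3])
```

A 0.4 °C anomaly in the Arctic and a 0.4 °C anomaly in the tropics then mean the same thing: 0.4 °C above that location’s own norm.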

These examples of animation are clearly not dumbing down the data; far from it. They improve the chances of the general public engaging with the data, and provide an entry point for those wanting to learn more. They can then move on to a narrative treatment, placing the animation in context, confident that they have grasped the essential information.

If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above that reveal the essential knowledge that needs to be communicated.

(c) Richard W. Erskine, 2018


Filed under Climate Science, Essay, Science Communications

Matt Ridley shares his ignorance of climate science (again)

Ridley trots out a combination of long-refuted myths that are much loved by contrarians; bad or crank science; or misunderstandings as to the current state of knowledge. In the absence of a Climate Feedback dissection of Ridley’s latest opinion piece, here is my response to some of his nonsense …

Here are five statements he makes that I will refute in turn.

1. He says: Forty-five years ago a run of cold winters caused a “global cooling” scare.

I say:

Stop repeating this myth, Matt! A few articles in popular magazines in the 70s speculated about an impending ice age, and dissemblers like Ridley state or imply that this was the scientific consensus at the time (snarky message: silly scientists can’t make their minds up). This is nonsense, but it is so popular amongst contrarians that it is repeated frequently to this day.

If you want to know what scientists were really thinking and publishing in scientific papers, read “The Myth of the 1970s Global Cooling Scientific Consensus” by Thomas Peterson et al. (2008), American Meteorological Society.

Warming, not cooling was the greater concern. It is astonishing that Ridley and others continue to repeat this myth. Has he really been unable – in the ten years since it was published – to read this oft cited article and so disabuse himself of the myth? Or does he deliberately repeat it because he thinks his readers are too lazy or too dumb to check the facts? How arrogant would that be?

2. He says: Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true.

I say:

Oh dear, he cites the work of Zharkova, saying he is not persuaded, but then talks of ‘slowly slipping into a proper ice age’. A curious non sequitur. While we are on Zharkova, her work suffered from being poorly communicated.

And quantitatively, her work has no relevance to the current global warming we are observing. The solar minimum might create a -0.3C contribution over a limited period, but that would hardly put a dent in the +0.2C per decade rate of warming.

But let’s return to the ice age cycle. What Ridley obdurately refuses to acknowledge is that the current warming is due to less than 200 years of man-made changes to the Earth’s atmosphere, which have raised CO2 to levels not seen for nearly 1 million years (the span of 10 ice age cycles) and are raising the global mean surface temperature at an unprecedented rate.

Therefore, talking about the long slow descent over thousands of years into an ice age that ought to be happening (based on the prior cycles) is frankly bizarre, especially given that man-made warming is now very likely to delay a future ice age. As a paper by Ganopolski et al., Nature (2016), has estimated:

“Additionally, our analysis suggests that even in the absence of human perturbations no substantial build-up of ice sheets would occur within the next several thousand years and that the current interglacial would probably last for another 50,000 years. However, moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years.”

And why stop there, Matt? Our expanding sun will boil away the oceans in a billion years time, so why worry about Brexit; and don’t get me started on the heat death of the universe. It’s hopeless, so we might as well have a great hedonistic time and go to hell in a handcart! Ridiculous, yes, but no less so than Ridley conflating current man-made global warming with a far, far off ice age, that recedes with every year we fail to address man-made emissions of CO2.

3. He says: Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period, over 100,000 years ago, carbon dioxide levels remain high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward.

I say:

The ice ages have indeed been a focus of study since Louis Agassiz coined the term in 1837, and there have been many twists and turns in our understanding of them even up to the present day, but Ridley’s over-simplification shows his ignorance of the evolution of this understanding.

The Milankovitch cycles are key triggers for entering an ice age (and indeed leaving it), but changes in atmospheric concentrations of carbon dioxide drive the cooling (entering) and warming (leaving) of an ice age, something that was finally accepted by the science community following Hays et al.’s seminal 1976 paper (“Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”), over 50 years after Milankovitch first did his work.

But the ice core data that Ridley refers to confirms that carbon dioxide is the driver, or ‘control knob’, as Professor Richard Alley explains it; and if you need a very readable and scientifically literate history of our understanding of the ice cores and what they are telling us, his book “The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and Our Future” is a peerless and unputdownable introduction.

Professor Alley offers an analogy. Suppose you take out a small loan, but then interest is added, and keeps being added, so that after some years you owe a lot of money. Was it the small loan, or the interest rate, that created the large debt? You might say both, but it is certainly ridiculous to say that the interest rate is unimportant because the small loan came first.
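The arithmetic of the analogy makes the point plainly. In this toy calculation (numbers invented purely for illustration), the ‘loan’ stands in for the small orbital trigger and the ‘interest’ for the CO2 feedback:

```python
principal = 100.0   # the small loan: the orbital (Milankovitch) trigger
rate = 0.05         # the interest: the CO2 feedback, compounding yearly
years = 50

# Compound the debt over the full period
debt = principal * (1 + rate) ** years
interest_accrued = debt - principal
# After 50 years the accrued interest dwarfs the original loan,
# even though the loan is what started the process.
```

Blaming the loan alone for the final debt, or dismissing the interest because it came second, both miss how the system actually works.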

But despite its complexity, and despite the fact that the so-called ‘lag’ does not refute the dominant role of CO2, scientists are interested in explaining such details and have indeed studied the ‘lag’. In 2012, Shakun and others published a paper doing just that: “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Jeremy D. Shakun et al., Nature 484, 49–54, 5 April 2012). Since you may struggle to see a copy of this paywalled paper, a plain-English summary is available.

Those who read headlines and not contents – like the US politician Joe Barton – might think this paper challenges the dominant role of CO2, but it does not say that. The paper showed that some warming occurred prior to the increase in CO2, but this is explained as an interaction between the Northern and Southern hemispheres, following the original Milankovitch ‘forcing’.

The role of the oceans is crucial in fully explaining the temperature record, and can add significant delays in reaching a new equilibrium. There are interactions between the oceans in the Northern and Southern hemispheres that are implicated in some abrupt climate change events (e.g. “North Atlantic ocean circulation and abrupt climate change during the last glaciation”, L. G. Henry et al., Science, 29 July 2016, Vol. 353, Issue 6298).

4. He says: Here is an essay by Willis Eschenbach discussing this issue. He comes to five conclusions as to why CO2 cannot be the main driver.

I say:

So Ridley quotes someone with little or no scientific credibility, who has managed to publish in Energy & Environment. Its editor, Dr Sonja Boehmer-Christiansen, admitted that she was quite partisan in seeking to publish ‘sceptical’ articles (which actually means contrarian articles), as discussed here.

Yet Ridley extensively quotes this low-grade material, when he could have chosen from hundreds of credible experts in the field of climate science. If he’d prefer ‘the’ textbook that would take him through all the fundamentals that he seems to struggle to understand, he could try Raymond Pierrehumbert’s seminal textbook “Principles of Planetary Climate”. But no. He chooses Eschenbach, with a BA in Psychology.

Ridley used to put up the appearance of an interest in rational discourse, albeit flying in the face of the science. That mask has now fully and finally dropped: he is channeling crank science. This is risible.

5. He says: The Antarctic ice cores, going back 800,000 years, then revealed that there were some great summers when the Milankovich wobbles should have produced an interglacial warming, but did not. To explain these “missing interglacials”, a recent paper in Geoscience Frontiers by Ralph Ellis and Michael Palmer argues we need carbon dioxide back on the stage, not as a greenhouse gas but as plant food.

I say:

The paper is 19 pages long, which is unusual in today’s literature. The case it makes is intriguing but not convincing, and I leave it to the experts to properly critique it. It takes a complex system – one where we know, for example, that large movements of heat in the ocean have played a key role in variability – and tries to infer that dust is the primary driver explaining the interglacials, while discounting the role of CO2 as a greenhouse gas.

The paper curiously does not cite the seminal paper by Hays et al. (1976), yet cites a paper by Willis Eschenbach published in Energy & Environment (mentioned earlier). All this raised concerns in my mind about the paper.

Extraordinary claims require extraordinary evidence and scientific dialogue, and it is really too early to claim that this paper is something or nothing; even if that doesn’t mean waiting the 50-odd years that Milankovitch’s work had to endure before it was widely accepted. Good science is slow, conservative, and rigorous, and the emergence of a consilience on the science of our climate has taken a very long time, as I explored in a previous essay.

Ralph Ellis on his website (which shows that his primary interest is the history of the life and times of Jesus) states:

“Ralph has made a detour into palaeoclimatology, resulting in a peer-review science paper on the causes of ice ages”, and after summarising the paper says,

“So the alarmists were right about CO2 being a vital forcing agent in ice age modulation – just not in the way they thought”.

So was this paper an attempt to clarify what was happening during the ice ages, or a contrivance, to take a pot shot at carbon dioxide’s influence on our contemporary climate change?

The co-author, Michael Palmer, is a biochemist, with no obvious background in climate science and provided “a little help” on the paper according to his website.

But on a blog post comment he offers a rather dubious extrapolation from the paper:

“The irony is that, if we should succeed in keeping the CO2 levels high through the next glacial maximum, we would remove the mechanism that would trigger the glacial termination, and we might end up (extreme scenario, of course) [in] another Snowball Earth.”

Both seemed unembarrassed to participate in comments on the denialist blog site WUWT. Quite the opposite: they gleefully exchanged messages with a growing band of breathless devotees.

But even if my concerns about the apparent bias and amateurism of this paper were allayed, the conclusion (which Ridley and Ellis clearly hold to) that the current increase in carbon dioxide is nothing to be concerned about does not follow from this paper. It is a non sequitur.

If I had discovered some strange behaviour – say, the Coriolis effect – way back when, the first conclusion would not have been to throw out Newtonian mechanics.

The physics of CO2 is clear. How the greenhouse effect works is clear, including for the conditions that apply on Earth, with all remaining objections resolved since no later than the 1960s.

We have a clear idea of the warming effect of increased CO2 in the atmosphere including short term feedbacks, and we are getting an increasingly clear picture of how the Earth system as a whole will respond, including longer term feedbacks.  There is much still to learn of course, but nothing that is likely to require jettisoning fundamental physics.

The recent excellent timeline published by Carbon Brief showing the history of the climate models, illustrates the long slow process of developing these models, based on all the relevant fundamental science.

This history has shown how different elements have been included in the models as the computing power has increased – general circulation, ocean circulation, clouds, aerosols, carbon cycle, black carbon.

I think it is really because Ridley still doesn’t understand how an increase from 0.03% to 0.04%, over 150 years or so, in the atmospheric concentration of CO2 is something to be concerned about (or, as I state it in talks, a 33% rise in the principal greenhouse gas – which avoids Ridley’s deliberately misleading formulation).
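The difference between the two framings is a one-line sum; a minimal arithmetic sketch (purely illustrative):

```python
# CO2 concentration, as a fraction of the whole atmosphere
old = 0.03 / 100   # ~300 ppm, the earlier figure used above
new = 0.04 / 100   # ~400 ppm today

# Ridley's framing: a tiny change in percentage points of the atmosphere
percentage_point_change = (new - old) * 100   # 0.01 percentage points

# The relevant framing: relative growth of the gas itself
relative_rise = (new - old) / old * 100       # ~33%

print(f"{percentage_point_change:.2f} percentage points")  # 0.01
print(f"{relative_rise:.0f}% rise in CO2 itself")          # 33%
```

The same physical change looks negligible in one framing and substantial in the other, which is exactly the rhetorical trick at issue.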

He denies that he denies the Greenhouse Effect, but every time he writes, he reveals that really, deep down, he still doesn’t get it. To be as generous as I can to him, he may suffer from a perpetual state of incredulity (a common condition I have written about before).

Conclusion

In an interview he gave to Russ Roberts at EconTalk.org in 2015, Matt Ridley reveals his inability to grasp even the most basic science:

“So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”

He seems to be saying that the water vapour is all in the form of clouds – some high altitude, some low – with opposite effects (so far, so good), so the warming from a doubling of CO2 concentrations should be just the 1°C carbon dioxide component (so far, so bad). Clouds represent a condensed (but not yet precipitated) phase of water in the atmosphere, but he seems to have overlooked that water also comes in a gaseous phase (not clouds). It is that gaseous phase that provides the additional warming, bringing the overall warming to 3°C.

The increase in water vapour concentration rests on the fact that “a well-established physical law (the Clausius-Clapeyron relation) determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2).
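To see what that 7% per degree implies, here is a minimal sketch, on the simplifying assumption that the 7% compounds with each degree of warming:

```python
# Water-holding capacity grows ~7% per 1 degree C of warming (IPCC AR4 FAQ 3.2)
RATE = 0.07

def capacity_factor(delta_t_celsius):
    """Multiplier on the atmosphere's water-holding capacity after warming,
    treating the 7%-per-degree figure as compounding (an approximation)."""
    return (1 + RATE) ** delta_t_celsius

# A 3 degree rise compounds to roughly 22-23% more capacity for water vapour
extra = (capacity_factor(3) - 1) * 100
print(f"{extra:.1f}%")
```

More water vapour capacity in a warmer atmosphere is precisely the feedback Chamberlin described in words a century ago.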

T.C. Chamberlin, writing in 1905 to Charles Abbott, explained the feedback role of water vapour with great clarity:

“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”

(Ref. “Historical Perspectives On Climate Change” by James Fleming, 1998)

It is now 113 years since Chamberlin wrote those words, but poor Ridley is still struggling with this basic physics, and instead regales us with dubious science intended to distract and confuse.

When will Matt Ridley stop feeling the need to share his perpetual incredulity and obdurate ignorance with the world?

© Richard W. Erskine, 2018


Filed under Climate Science, Essay

Solving Man-made Global Warming: A Reality Check

Updated 11th November 2017 – Hopeful message following Figure added.

It seems that we are all – or most of us – in denial about the reality of the situation we are in, regarding the need to address global warming now rather than sometime in the future.

We display seesaw emotions: optimistic that emissions have been flattening, but aghast that we had a record jump this year (which was predicted, but was news to the news people). People seem to forget that if we have slowed from 70 to 60 miles per hour while approaching a cliff edge, the result will be the same, albeit deferred a little. We actually need to slam on the brakes and stop. In fact, due to critical erosion of the cliff edge, we will even need to go into reverse.

I was chatting with a scientist at a conference recently:

Me: I think we need to accept that a wide portfolio of solutions will be required to address global warming. Pacala and Socolow’s ‘wedge stabilization’ concept is still pertinent.

Him: People won’t change; we won’t make it. We are at over 400 parts per million and rising, and have to bring this down, so some artificial means of carbon sequestration is the only answer.

This is just an example of many other kinds of conversations of a similar structure that dominate the blogosphere. It’s all about the future. Future impacts, future solutions. In its more extreme manifestations, people engage in displacement behaviour, talking about any and every solution that is unproven in order to avoid focusing on proven solutions we have today.

Yet nature is telling us that the impacts are now, and surely the solutions should be too; at least for implementation plans in the near term.

Professors Kevin Anderson and Alice Larkin of the Tyndall Centre have been trying to shake us out of our denial for a long time now. The essential argument is that some solutions are immediately implementable while others are some way off, and others so far off they are not relevant to the time frame we must consider (I heard a leader in Fusion Energy research on the BBC who sincerely stated his belief that it is the solution to climate change; seriously?).

The immediately implementable solution that no politician dares talk about is degrowth – less buying stuff, less travel, less waste, and so on. All doable tomorrow, and since the top 10% of emitters globally are responsible for 50% of emissions (see Extreme Carbon Inequality, Oxfam), the quickest and easiest solution is for that 10% – or let’s say 20% – to halve their emissions, and to do so within a few years. It is also the most ethical thing to do.
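The arithmetic behind that claim is worth making explicit; a minimal sketch using the Oxfam share quoted above:

```python
top_share = 0.50   # share of global emissions from the top 10% of emitters (Oxfam)
cut = 0.5          # scenario from the text: those emitters halve their own emissions

global_reduction = top_share * cut
print(f"{global_reduction:.0%} cut in global emissions")  # 25%
```

A quarter of global emissions removed by a change of behaviour in one tenth of the population: no new technology required.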

Anderson & Larkin’s credibility is enhanced by the fact that they practise what they advocate; for example, this approach to reducing the air miles associated with scientific conferences:

[Figure: Tyndall Centre proposal for reducing air travel to scientific conferences]

Some people in the high-energy-consuming “West” have proven it can be done. Peter Kalmus, in his book Being the Change: Live Well and Spark a Climate Revolution, describes how he went from being a not untypical US citizen responsible for 19 tonnes of carbon dioxide emissions per year to something like 1 tonne, which is one fifth of the global average. It is all about what we do, how we do it, and how often we do it.

Anderson and Larkin have said that even just reaching half the European average, at least, would be a huge win: “If the top 10% of emitters were to reduce their emissions to the average for EU, that would mean a 33% [reduction] in global emissions” (Kevin Anderson, Paris, Climate & Surrealism: how numbers reveal another reality, Cambridge Climate Lecture Series, March 2017).

This approach – a large reduction in consumption (in all its forms) amongst high emitters in all countries, but principally the ‘west’ – could be implemented in the short term (the shorter the better but let’s say, by 2030). Let’s call these Phase 1 solutions.

The reason we love to debate and argue about renewables and intermittency and so on is that it really helps to distract us from the blinding simplicity of the degrowth solution.

It is not that a zero- or low-carbon infrastructure is not needed, but that the time to fully implement it is too long – even if we managed to do it in 30 years’ time – to address rising atmospheric greenhouse gases on its own. This has already started, albeit from a low base, and will have a large impact in the medium term (by 2050). Let’s call these Phase 2 solutions.

Project Drawdown provides many solutions relevant to both Phase 1 and 2.

And as for the discussion that started this essay: artificial carbon sequestration methods, such as BECCS and several others (explored in Atmosphere of Hope by Tim Flannery), will be needed, but it is again about timing. These solutions will be national, regional and international initiatives, and are mostly unproven at present; they live in the longer term, beyond 2050. Let’s call these Phase 3 solutions.

I am not here wanting to get into geo-engineering solutions, a potential Phase 4. A Phase 4 is predicated on Phases 1 to 3 failing or failing to provide sufficient relief. However, I think we would have to accept that if, and I personally believe only if, there was some very rude shock (an unexpected burp of methane from the Arctic, and signs of a catastrophic feedback), leading to an imminent > 3C rise in global average temperature (as a possible red-line), then some form of geo-engineering would be required as a solution of last resort. But for now, we are not in that place. It is a matter for some feasibility studies but not policy and action. We need to implement Phase 1, 2 and 3 – all of which will be required – with the aim of avoiding a Phase 4.

I have illustrated the three phases in the figure which follows (Adapted from Going beyond dangerous climate change: does Paris lock out 2°C? Professors Kevin Anderson & Alice Bows-Larkin, Tyndall Centre – presentation to School of Mechanical Aerospace & Civil Engineering University of Manchester February 2016, Douglas, Isle of Man).

My adapted figure is obviously a simplification, but we need some easily digestible figures to help grapple with this complex subject; and apologies in advance to Anderson & Larkin if I have taken liberties with my colourful additions and annotations to their graphic (while trying to remain true to its intent).

[Figure: Anderson & Bows-Larkin’s emissions pathway graphic, adapted with annotations marking Phases 1, 2 and 3]

A version of this slide on Twitter (@EssaysConcern) seemed to resonate with some people, as a stark presentation of our situation.

For me, it is actually a rather hopeful image if, like me, you believe in the capacity of people to work together to solve problems – something we so often see in times of crisis; and this is a crisis, make no mistake.

While the climate inactivists promote a fear of big Government controlling our lives, the irony here is that Phase 1 is all about individuals and communities, and we can do this with or without Government support. Phase 2 could certainly do with some help in the form of enabling legislation (such as a price on carbon), but it does not have to be top-down solutions, although some are (industrial-scale energy storage, for example). Only when we get to Phase 3 do we see national solutions dominating, and then only because we have an international consensus to execute these major projects; that won’t be big government, it will be responsible government.

The message of Phases 1 and 2 is: don’t blame the conservatives, don’t blame the loss of feed-in tariffs … just do it! They can’t stop you!

They can’t force you to boil a full kettle when you only need one mug of tea. They can’t force you to drive into the smoke when the train will do. They can’t force you to buy new stuff when the old can be repaired at a repair café.

And if your community wants a renewable energy scheme, then progressives and conservatives can find common cause, despite their other differences. Who doesn’t want greater community control of their energy, to compete with monopolistic utilities?

I think the picture contains a lot of hope, because it puts you, and me, back in charge. And it sends a message to our political leaders, that we want this high on the agenda.

(c) Richard W. Erskine, 2017

 

 


Filed under Essay, Global Warming Solutions

Incredulity, Credulity and the Carbon Cycle

Incredulity, in the face of startling claims, is a natural human reaction and is right and proper.

When I first heard the news about the detection on 14th September 2015 of the gravitational waves from two colliding black holes by the LIGO observatories I was incredulous. Not because I had any reason to disagree with the predictions of Albert Einstein that such waves should exist, rather it was my incredulity that humans had managed to detect such a small change in space-time, much smaller than the size of a proton.

How, I pondered, was the ‘noise’ from random vibrations filtered out? I had to do some studying, and discovered the amazing engineering feats used to isolate this noise.

What is not right and proper is to claim that personal incredulity equates to an error in the claims made. If I perpetuate my incredulity by failing to ask any questions, then it is I who am culpable.

And if I were to ask questions then simply ignore the answers, and keep repeating my incredulity, who is to blame? If the answers have been sufficient to satisfy everyone skilled in the relevant art, how can a non expert claim to dispute this?

Incredulity is a favoured tactic of many who dispute scientific findings in many areas, and global warming is not immune from the clinically incredulous.

The sadly departed Professor David MacKay gives an example in his book Sustainable Energy – Without the Hot Air (available online):

The burning of fossil fuels is the principal reason why CO2 concentrations have gone up. This is a fact, but, hang on: I hear a persistent buzzing noise coming from a bunch of climate-change inactivists. What are they saying? Here’s Dominic Lawson, a columnist from the Independent:  

“The burning of fossil fuels sends about seven gigatons of CO2 per year into the atmosphere, which sounds like a lot. Yet the biosphere and the oceans send about 1900 gigatons and 36000 gigatons of CO2 per year into the atmosphere – … one reason why some of us are sceptical about the emphasis put on the role of human fuel-burning in the greenhouse gas effect. Reducing man-made CO2 emissions is megalomania, exaggerating man’s significance. Politicians can’t change the weather.”

Now I have a lot of time for scepticism, and not everything that sceptics say is a crock of manure – but irresponsible journalism like Dominic Lawson’s deserves a good flushing.

MacKay goes on to explain Lawson’s error:

The first problem with Lawson’s offering is that all three numbers that he mentions (seven, 1900, and 36000) are wrong! The correct numbers are 26, 440, and 330. Leaving these errors to one side, let’s address Lawson’s main point, the relative smallness of man-made emissions. Yes, natural flows of CO2 are larger than the additional flow we switched on 200 years ago when we started burning fossil fuels in earnest. But it is terribly misleading to quantify only the large natural flows into the atmosphere, failing to mention the almost exactly equal flows out of the atmosphere back into the biosphere and the oceans. The point is that these natural flows in and out of the atmosphere have been almost exactly in balance for millenia. So it’s not relevant at all that these natural flows are larger than human emissions. The natural flows cancelled themselves out. So the natural flows, large though they were, left the concentration of CO2 in the atmosphere and ocean constant, over the last few thousand years.

Burning fossil fuels, in contrast, creates a new flow of carbon that, though small, is not cancelled.
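MacKay’s corrected figures can be put into a two-line carbon budget; a minimal sketch using the numbers quoted above:

```python
# Annual CO2 flows into the atmosphere, GtCO2/year (MacKay's corrected figures)
flows_in  = {"fossil fuels": 26, "biosphere": 440, "oceans": 330}

# The natural flows OUT of the atmosphere almost exactly match the natural flows in
flows_out = {"biosphere": 440, "oceans": 330}

net = sum(flows_in.values()) - sum(flows_out.values())
print(net)  # 26 -- only the human flow is left uncancelled
```

The gross natural flows are indeed huge, but they cancel; the small human flow is the only term that accumulates, which is the whole of Lawson’s error.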

I offer this example in some detail as an exemplar of the problem often faced in confronting incredulity.

It is natural that people will often struggle with numbers, especially large abstract sounding numbers. It is easy to get confused when trying to interpret numbers. It does not help that in Dominic Lawson’s case he is ideologically primed to see a ‘gotcha’, where none exists.

Incredulity, such as Lawson’s, is perfectly OK when initially confronting a claim that one is sceptical of; we cannot all be informed on every topic. But why then not pick up the phone, or email a professor with skills in the particular art, to get them to sort out your confusion? Or even read a book, or browse the internet? But of course, Dominic Lawson, like so many others, suffers from a syndrome that many have identified. Charles Darwin noted in The Descent of Man:

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

It is this failure to display any intellectual curiosity which is unforgivable in those in positions of influence, such as journalists or politicians.

However, incredulity has a twin, its mirror image: credulity. And I want to take an example that also involves the carbon cycle.

In a politically charged subject, or one where there is a topic close to one’s heart, it is very easy to uncritically accept a piece of evidence or argument. To be, in the technical sense, a victim of confirmation bias.

I have been a vegetarian since 1977, and I like the idea of organic farming, preferably local and fresh. So I have been reading Graham Harvey’s book Grass Fed Nation. I have had the pleasure of meeting Graham, as he was presenting a play he had written which was performed in Stroud. He is a passionate and sincere advocate for his ideas on regenerative farming, and I am sure that much of what he says makes sense to farmers.

The recently reported research from Germany of a 75% decline in insect numbers is deeply worrying, and many are pointing the finger at modern farming and land-use methods.

However, I found something in amongst Harvey’s interesting book that made me incredulous, on the question of carbon.

Harvey presents the argument that, firstly, we can’t do anything to reduce carbon emissions from industry etc.; but that, secondly, there is no need to worry, because the soils can take up all the annual emissions with ease – and further, that all of the extra carbon from the industrial era could be absorbed into soils over the coming years.

He relies a lot on Savory’s work, famed for his visionary but contentious TED talk. But he also references other work that makes similar claims.

I would be lying if I said there was not a part of me that wanted this to be true. I was willing it on. But I couldn’t stop myself … I just had to track down the evidence. Being an ex-scientist, I always like to go back to the source, and find a paper, or failing that (because of paywalls), a trusted source that summarises the literature.

Talk about party pooper, but I cannot find any such credible evidence for Harvey’s claim.

I think the error in Harvey’s thinking is to confuse the equilibrium capacity of the soils with their ability to take up more, every year, for decades.

I think it is also an inability to deal with numbers. If you multiply A, B and C together, but take the highest possible range for each, you can easily reach a result which is hugely in error. Overestimate the realistic land area that can be addressed; and the carbon dioxide sequestration rate; and the time until saturation/equilibrium is reached … and it is quite easy to overestimate the product of these by a factor of 100 or more.

Savory is suggesting that over a period of 3 or 4 decades you can draw down the whole of the anthropogenic amount that has accumulated (nearly 2000 gigatonnes of carbon dioxide), whereas a realistic assessment (e.g. www.drawdown.org) suggests a figure of 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020–2050 timeframe.
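How three merely optimistic factors compound into a wild claim can be sketched in a few lines; the 5× per factor is a hypothetical illustration, while the drawdown figures are those quoted above:

```python
# If each of three multiplied factors is optimistically overestimated ~5x ...
area_factor, rate_factor, duration_factor = 5, 5, 5
overestimate = area_factor * rate_factor * duration_factor
print(overestimate)  # 125 -- a product error of two orders of magnitude

# ... which is roughly the scale of the gap between Savory's implied drawdown
# and a realistic assessment (GtCO2, figures from the text)
print(round(2000 / 14))  # 143
```

No single factor needs to be absurd for the final product to be off by a factor of a hundred or more.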

There are many complex processes at work in the whole carbon cycle – the biological, chemical and geological processes covering every kind of cycle, with flows of carbon into and out of the carbon sinks. Despite this complexity, and despite the large flows of carbon (as we saw in the Lawson case), atmospheric levels had remained stable for a long time in the pre-industrial era (at 280 parts per million).  The Earth system as a whole was in equilibrium.

The deep oceans have by far the greatest carbon reservoir, so a ‘plausibility argument’ could go along the lines of: the upper ocean will absorb extra CO2 and then pass it to the deep ocean. Problem solved! But this hope was dashed by Revelle and others in the 1950s, when it was shown that the upper-to-lower ocean processes are really quite slow.

I always come back to the Keeling Curve, which reveals an inexorable rise in CO2 concentrations in the atmosphere since 1958 (and we can extend the curve further back using ice core data). And the additional CO2 humans started to put into the atmosphere since the start of the industrial revolution (mid-19th century, let us say) was not, as far as I can see, magically soaked up by soils in the pre-industrial-farming days of the mid-20th century, when presumably traditional farming methods pertained.

FCRN explored Savory’s methods and claims, and found that despite decades of trying, he has not demonstrated that his methods work. Savory’s case is very weak, and he ends up (in his exchanges with FCRN) almost discounting science, saying his methods are not susceptible to scientific investigation. A nice cop-out there.

In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn’t hold up either. Track down Gattinger et al.’s work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4 GtC/year – nowhere near annual anthropogenic emissions of 10 GtC – and if it cannot keep up with annual emissions, forget soaking up the many decades of historical emissions (the 50% of these that persists for a very long time in the atmosphere).
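Put side by side, the two figures from Gattinger et al. speak for themselves; a one-line comparison:

```python
soil_uptake = 0.4   # GtC/year: Gattinger et al.'s average for soil organic carbon
emissions   = 10.0  # GtC/year: annual anthropogenic emissions

# Soils, even on this average estimate, offset only a small slice of each year's emissions
print(f"{soil_uptake / emissions:.0%}")  # 4%
```

An annual offset of a few percent cannot simultaneously absorb each year’s emissions and draw down two centuries of accumulation.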

It is interesting what we see here.

An example of ‘incredulity’ from Lawson, who gets carbon flows mixed up with net carbon flow, and an example of ‘credulity’ from Harvey where he puts too much stock in the equilibrium capacity of carbon in the soil, and assumes this means soils can keep soaking up carbon almost without limit. Both seem to struggle with basic arithmetic.

Incredulity is a good initial response to startling claims, but it should be the starting point for engaging one’s intellectual curiosity, not a perpetual excuse for confirming one’s bias; a kind of obdurate ignorance.

And neither should hopes invested in the future be a reason for credulous acceptance of claims, however plausible on face value.

It’s boring I know – not letting either one’s hopes or prejudices hold sway – but maths, logic and scientific evidence are the true friends here.

Maths is a great leveller.

 

(c) Richard W. Erskine, 2017


Filed under Climate Science, Essay, Uncategorized

The Zeitgeist of the Coder

When I go to see a film with my wife, we always stick around for the credits, and the list has got longer and longer over the years … Director, Producer, Cinematographer, Stuntman, Grips, Special Effects … and we’ve only just started. Five minutes later and we are still watching the credits! There is something admirable about this respect for the different contributions made to the end product. The degree of differentiation of competence in a film’s credits is something that few other projects can match.

Now imagine the film reel for a typical IT project … Project Manager, Business Analyst, Systems Architect, Coder, Tester and we’re almost done, get your coat. Here, there is the opposite extreme; a complete failure to identify, recognise and document the different competencies that surely must exist in something as complex as a software project. Why is this?

For many, the key role on this very short credits list is the ‘coder’. There is this zeitgeist of the coders – a modern day priesthood – that conflates their role with every other conceivable role that could or should exist on the roll of honour.

A good analogy for this is the small-scale general builder. They imagine they can perform any skill: fitting a waterproof membrane on a flat roof; repairing the leadwork around the chimney; mending the lime mortar on that Cotswold stone property. Yet each of these requires deep knowledge and experience of the materials, tools and methods needed to plan and execute it right. A generalist will overestimate their abilities and underestimate the difficulties, and so they will always make mistakes.

The all-purpose ‘coder’ is no different, but has become the touchstone for our digital renaissance. ‘Coding’ is the skill that trumps all others in the minds of the commentariat.

Politicians, always keen to jump on the next bandwagon, have for some years now been falling over themselves to extol the virtues of coding as a skill that should be promoted in schools, in order to advance the economy.  Everyone talks about it, imagining it offers a kind of holy grail for growing the digital economy.  But can it be true? Is coding really the path to wealth and glory, for our children and our economy?

Forgetting for a moment that coding is just one of the skills required on a longer list of credits, why do we all need to become one?

Not everyone is an automotive engineer, even though cars are ubiquitous, so why would driving a car mean we all have to be able to design and build one? Surely only a few of us need that skill. In fact, whilst cars – in the days when we called them old bangers – did require a lot of roadside fixing, they are now so good we are discouraged from tinkering with them at all.  We the consumers have become de-skilled, while the cars have become super-skilled.

But apparently, every kid now needs to be able to code, because we all use Apps. Of course, it’s nonsense, for much the same reasons it is nonsense that all car drivers need to be automotive engineers. And as we decarbonise our economy Electric Vehicles will take over, placing many of the automotive skills in the dustbin. Battery engineers anyone?

So why is this even worth discussing in the context of the knowledge economy? We do need to understand if coding has any role in the management of our information and knowledge, and if not, what are the skills we require. We need to know how many engineers are required, and crucially, what type of engineers.

But let’s stick with ‘coding’ for a little while longer. I would like to take you back to the very birth of computing, to deconstruct the word ‘coding’ and place it in context. The word originates from the time when programming a computer meant knowing the very basic operations expressed as ‘machine code’ – move a byte to this memory location, add these two bytes, shift everything left by 2 bits – which was completely indecipherable to the uninitiated. It also had a serious drawback: a program would have to be re-written to run on another machine, with its own particular machine code. Since computers were evolving fast, and software needed to be migrated from old to new machines, this was clearly problematic.

Grace Hopper came up with the idea of a compiler in 1952, quite early in the development of computers. Programs would then be written in a machine-agnostic ‘high-level language’, designed to be readable, almost like a natural language, but with a simple syntax to allow logic to be expressed … If (A = B) Then [do-this] Else [do-that]. A compiler on a machine would take a program written in a high-level language and ‘compile’ it into the machine code that could run on that machine. The same program could thereby run on all machines.
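To make the distinction concrete, here is the If/Then example above written in a modern high-level language, with a schematic (not machine-specific) sketch of what a compiler lowers it to:

```python
# The essay's high-level conditional, expressed in a readable language:
def choose(a, b):
    if a == b:
        return "do-this"
    else:
        return "do-that"

# A compiler translates this into machine-specific steps, roughly of the form:
#   LOAD a into a register
#   COMPARE the register with b
#   JUMP-IF-NOT-EQUAL to the else-branch
# so the same source text can run on any machine that has a compiler for it.

print(choose(1, 1))  # do-this
print(choose(1, 2))  # do-that
```

The programmer thinks in terms of the readable form on top; the machine only ever sees the lowered steps below, and that separation is Hopper’s legacy.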

In place of ‘coders’ writing programs in machine code, there were now ‘programmers’ doing so in high-level languages such as COBOL and FORTRAN (both invented in the 1950s), and in later languages as they evolved.

So why people still talk about ‘coders’ rather than ‘programmers’ is a mystery to me. Were it just an annoying misnomer, one could perhaps ignore it as an irritant, but it reveals a deeper and more serious misunderstanding.

Coding … I mean Programming … is not enough, in so many ways.  When the politician pictures a youthful ‘coder’ in their bedroom, they imagine the next billionaire creating an App that will revolutionize another area of our lives, like Amazon and Uber have done.

But it is by no means clear that programming, as currently understood, is the right skill for the knowledge economy. As Gottfried Sehringer wrote in an article “Should we really try to teach everyone to code?” in WIRED, even within the narrow context of building apps:

“In order to empower everyone to build apps, we need to focus on bringing greater abstraction and automation to the app development process. We need to remove code — and all its complexity — from the equation.”

In other words, just as Grace Hopper saw the need to move from Coding to Programming, we need to move from Programming to something else. Let’s call it Composing: a visually interactive way to construct Apps with minimal need to write lines of text to express logical operations. Of course, just as Hopper faced resistance from the Coders, who poured scorn on the dumbing down of their art, the same will happen with the Programmers, who will claim it cannot be done.

But the world of digital is much greater than the creation of ‘Apps’. The vast majority of the time spent doing IT in this world is in implementing pre-built commercial packages.  If one is implementing them as intended, then they are configured using quite simple configuration tools that aim to eliminate the need to do any programming at all. Ok, so someone in SAP or Oracle or elsewhere had to program the applications in the software package, but they are a relatively small population of technical staff when compared to the numbers who go out to implement these solutions in the field.

Of course it can all go wrong, and often does. I am thinking of a bank that was in trouble because their creaking old core banking system – written in COBOL decades ago by programmers in the bank – was no longer fit for purpose. Every time changes were made to financial legislation, such as tax, the system needed tweaking. But it was now a mess, and when one bug was fixed, another took its place.

So the company decided to implement an off-the-shelf package, which would do everything they needed, and more. The promise was the ability to become a really ‘agile’ bank. They would be able to introduce new products rapidly in response to market needs or to new legislation. It would take just a few weeks, rather than the 6 months it was currently taking. All they needed to do was some configuration of the package so that it would work just as they needed it to.

The big bosses approved the big budget then left everyone to it. They kept on being told everything was going well, and so much wanted to believe this that they failed to ask the right questions of the team. Well, guess what: it was a complete disaster. After 18 months, with everything running over time and over budget, what emerged? The departmental managers had insisted on keeping all the functionality from their beloved but creaking old system; the big consultancy was being paid for man-hours of programming, so did not seem to mind that the off-shored programmers were having to stretch and bend the new package out of shape to make it look like the old system. And the internal project management was so weak, they were unable to call out the issues, even if they had fully understood them.

Instead of mere configuration, the implementation had large chunks of custom programming bolted onto the package, making it just as unstable and difficult to maintain as the old system. Worse still, the way it had been implemented made it very difficult to upgrade the package to the latest version, and so to benefit from new features. There was now a large support bill just to keep the new behemoth alive.

In a sense, nothing had changed.

Far from ‘coding’ being the great advance for our economy, it is often, as in this sorry tale, a great drag on it, because this is how many large system implementations fail.

Schools, Colleges and Universities train everyone to ‘code’, so what will those students do in the field? Like the man with a hammer, to whom every problem looks like a nail, even when a precision milling machine is the right tool for the job.

Shouldn’t the student be taught how to reframe their thinking to use different skills that are appropriate to the task in hand? Today we have too many Coders and not enough Composers, and it seems everyone is to blame, because we are all seduced by this zeitgeist of the ‘coder’.

When we consider the actual skills needed to implement, say, a large, data-oriented software package – like that banking package – one finds that activities needed are, for example: Requirements Analysis, Data Modelling, Project Management, Testing, Training, and yes of course, Composing.  Programming should be restricted to those areas such as data interfaces to other systems, where it must be quarantined, so as not to undermine the upgradeability of the software package that has been deployed.

So what are the skills required to define and deploy information management solutions, which are document-oriented, aimed at capturing, preserving and reusing the knowledge within an organization?

Let the credits roll: Project Manager; Information Strategist; Business Analyst; Process Architect; Information Architect; Taxonomist; Meta-Data Manager; Records Manager; Archivist; Document Management Expert; Document Designer; Data Visualizer; Package Configurer; Website Composer; … and not a Coder, or even a Programmer, in sight.

The vision of everyone becoming coders is not only the wrong answer to the question; it’s also the wrong question. The diversity of backgrounds needed to build a knowledge economy is very great. It is a world beyond ‘coding’ which is richer and more interesting, open to those with backgrounds in software of course, but also in science and the humanities. We need linguists as much as we need engineers; philosophers as much as we need data analysts; lawyers as much as we need graphics artists.

To build a true ‘knowledge economy’ worthy of the name, we need to differentiate and explore a much richer range of competencies, to address all the issues we will face, than the narrow way in which information professionals are defined today.

(C) Richard W. Erskine, 2017

——

Note:

In this essay I am referring to business and institutional applications of information management. Of course there will be areas such as scientific research or military systems which will always require heavy-duty, specialist software engineering; but this is another world when compared to the vast need in institutions for repeatable solutions to common problems, where other skills are argued to be much more relevant and important to success.


Filed under Essay, Information Management, Software Development

Beyond Average: Why we should worry about a 1 degree C rise in average global temperature

When I go to the Netherlands I feel small next to men from that country, but then I am 3 inches shorter than the average Brit, and the average Dutchman is 2 inches taller than the average Brit. So I am seeing 5 inches of height difference when surrounded by Dutch men. No wonder the effect I feel is much greater than what the average difference in height seems to be telling me on paper.

Averages are important. They help us determine if there is a real effect overall. Yes, men from the Netherlands are taller than men from Britain, and so my impressions are not merely anecdotal. They are real, and backed up by data.

If we want to know whether changes are occurring, averages help too, as they ensure we are not focusing on outliers, but on a statistically significant trend. That’s not to say that it is always easy to handle the data correctly or to separate different factors, but once this hard work is done, the science and statistics together can lead us to knowing important things, with confidence.

For example, we know that smoking causes lung cancer and that adding carbon dioxide into the atmosphere leads to increased global warming.

But, you might say, correlation doesn’t prove causation! Stated boldly like that, no it doesn’t. Work is required to establish the link.

Interestingly, we have known the fundamental physics of why carbon dioxide (CO2) is a causative agent in warming our atmosphere – not merely correlated with it – since as early as Tyndall’s experiments, begun in 1859, and certainly no later than 1967, when Manabe & Wetherald’s seminal paper resolved some residual physics questions concerning possible saturation of infra-red absorption in the atmosphere and the related effect of water vapour. That’s almost 110 years of probing, questioning and checking. Not exactly a tendency on the part of scientists to rush to judgment! And in terms of the correlation being actually observed in our atmosphere, it was Guy Callendar in 1938 who first published a paper showing rising surface temperature linked to rising levels of CO2.

Whereas, in the case of lung cancer and cigarettes correlation came first, not fundamental science. It required innovations in statistical methods to prove that it was not merely correlation but was indeed causation, even while the fundamental biological mechanisms were barely understood.

In any case, the science and statistics are always mutually supportive.

Average Global Warming

In the discussions on global warming, I have been struck, over the few years that I have been engaging with the subject, by how much air time is given to the rise in atmospheric temperature, averaged over the whole of the Earth’s surface, or GMST as the experts call it (Global Mean Surface Temperature). While it is a crucial measure, this can seem a very arcane discussion to the person in the street.

So far, it has risen by about 1 degree Centigrade (1°C) compared to the middle of the 19th Century.

There are regular twitter storms and blogs ‘debating’ a specific year, and last year’s El Niño caused a huge debate as to what it meant. As it turns out, the majority of the recent record warmth is due to man-made global warming, which turbo-charged an already strong El Niño event.

Anyone daring to take a look at the blogosphere or twitter will find climate scientists arguing with opinion formers ill equipped to ‘debate’ the science of climate change, or indeed, the science of anything.

What is the person in the street supposed to make of it? They probably think “this is not helping me – it is not answering the questions puzzling me – I can do without the aggro thanks very much”.

To be fair, many scientists do spend a lot of time on outreach and in other kinds of science communications, and that is to be applauded. A personal favourite of mine is Katharine Hayhoe, who always brings an openness and sense of humility to her frequent science communications and discussions, but you sense also, a determined and focused strategy to back it up.

However, I often feel that the science ‘debate’ generally gets sucked into overly technical details, while basic, or one might say, simple questions remain unexplored, or perhaps assumed to be so obvious they don’t warrant discussion.

The poor person in the street might like to ask (but dare not for fear of being mocked or being overwhelmed with data), simply:

“Why should we worry about an average rise of 1°C in temperature? It doesn’t seem that much, and with all the ups and downs in the temperature curve – the El Niño, the alleged pause, the 93% of extra heat going into the ocean I heard about – well, how can I really be sure that the surface of the Earth is getting warmer?”

There is a lot to unpick here and I think the whole question of ‘averages’ is part of the key to approaching why we should worry.

Unequivocally Warming World

Climate Scientists will often show graphs which include the observed and predicted annual temperature (GMST) over a period of 100 years or more.

Now, I ask, why do they do that?

Surely we have been told that in order to discern a climate change trend, it is crucial to look at the temperature averaged over a period of at least 10 years, and better still over a 30-year average?

In this way we smooth out all the ups and downs that are a result of the energy exchanges that occur between the moving parts of the earth system, and the events such as volcanic eruptions or humans pumping less sulphur into the atmosphere from industry. We are interested in the overall trend, so we can see the climate change signal amongst the ‘noise’.

We also emphasise to people – for example, “the Senator with a snowball” – that climate change is about averages and trends, as distinct from weather (which is about the here and now).

So this is why the curve I use – when asked “What is the evidence that the world is warming?” – is a 30-year smoothed curve (red line) such as the one shown below (which used the GISS tool):

30 yr rolling average of GMST

[also see the Met Office explainer on global surface temperature]

The red line shows inexorable warming from early in the 20th Century, no ifs, no buts.

End of argument.

When I challenged a climate scientist on Twitter, why don’t we just show this graph and not get pulled into silly arguments with a Daily Mail journalist or whoever, I was told that annual changes are interesting and need to be understood.

Well sure, for climate scientists everything is interesting! They should absolutely try to answer the detailed questions, such as the contribution global warming made to the 2016 GMST. But to conflate that with the simpler and broader question does rather obscure the fundamental message for the curious but confused public who have not even reached base camp.

They may well conclude there is a ‘debate’ about global warming when there is none to be had.

There is debate amongst scientists about many things: regional impact and attribution; different feedback mechanisms and when they might kick in; models of the Antarctic ice sheet; etc. But not about rising GMST, because that is settled science, and given Tyndall et al, it would be incredible if it were not so; Nobel Prize winning incredible!

If one needs a double knock-out, then how about a triple or quadruple knock-out?

When we add the graphs showing sea level rise, loss of glaciers, mass loss from Greenland and Antarctica, and upper ocean temperature, we have multiple trend lines all pointing in one direction: A warming world. It ain’t rocket science.

We know the world has warmed – it is unequivocal.

Now if the proverbial drunk, duly floored, still decides to get up and wants to rerun the fight, maybe we should choose not to play his games!?

So why do arguments about annual variability get so frequently aired on the blogosphere and twitter?

I don’t know, but I feel it is a massive own goal for science communication.

Surely the choice of audience needs to be the poor dazed and confused ‘person in the street’, not the obdurately ignorant opinion columnists (opinion being the operative word).

Why worry about a 1°C rise?

I want to address the question “Why worry about a 1°C rise (in global mean surface temperature)?”, and do so with the help of a dialogue. It is not a transcript, but it is along the lines of conversations I have had in the last year. In this dialogue, I am the ClimateCoach, in conversation with a Neighbour who is curious about climate change, but admits to being rather overwhelmed by it; they have got as far as reading the material above and accept that the world is warming.

Neighbour: Ok, so the world is warming, but I still don’t get why we should worry about a measly 1°C warming?

ClimateCoach: That’s an average, over the whole world, and there are big variations hidden in there. Firstly, two thirds of the surface of the planet is ocean, and so over land we are already talking about a mean surface temperature rise in excess of 1°C, about 1.5°C. That’s the first unwelcome news, the first kicker.

Neighbour: So, even if it is 5°C somewhere, I still don’t get it. Living in England I’d quite like a few more Mediterranean summers!

ClimateCoach: Ok, so let’s break this down (and I may just need to use some pictures). Firstly we have an increase in the mean, globally. But due to meteorological patterns there will be variations in temperature and also changes in precipitation patterns around the world, such as droughts in California and increased Monsoon rain in India. This regionality of the warming is the second kicker.

Here is an illustration of how the temperature increase looks regionally across the world.

GISTEMP global regional

Neighbour: Isn’t more rain good for Indian farmers?

ClimateCoach: Well, that depends on timing. The monsoon has started arriving late, and if it doesn’t arrive in time for certain crops, that has serious impacts. So the date or timing of impacts is the third kicker.

Here is an illustration.

[Illustration: how the timing of impacts matters]

Neighbour: I noticed earlier that the Arctic is warming the most. Is that a threat to us?

ClimateCoach: Depends what you mean by ‘us’. There is proportionally much greater warming in the Arctic, due to a long-predicted effect called ‘polar amplification’ – in places as much as 10°C of warming, as shown in this map of the Arctic. But what happens in the Arctic doesn’t stay in the Arctic.

Arctic extremes

Neighbour: I appreciate that a warming Arctic is bad for ecosystems in the Arctic – Polar Bears and so on – but why will that effect us?

ClimateCoach: You’ve heard about the jet stream on the weather reports, I am sure [strictly, the arctic polar jet stream]. Well, as the Arctic is warmed differentially compared to latitudes below the Arctic, this causes the jet stream to become more wiggly than before, which can be very disruptive. This can create, for example, fixed highs over Europe, and very hot summers.

Neighbour: But we’ve had very hot summers before, why would this be different?

ClimateCoach: It’s not about something qualitatively different (yet), but it is quantitatively different. Very hot summers in Europe are now much more likely due to global warming, and that has real impacts: 70,000 people died in Europe during the 2003 heatwave. Let me show you an illustrative graph. Here is a simple distribution curve; the blue arrow indicates a temperature at and above which high impacts are expected, but with a low chance. Suppose this represents the situation in 1850.

Normal distribution

Neighbour: Ok, so I understand the illustration … and?

ClimateCoach: So, look at what happens when we increase the average by just a little bit, say by 1°C, to represent where we are today. The whole curve shifts right. The ‘onset of high impact’ temperature is fixed, but the area under the curve to the right of it has increased (the red area has grown), meaning a greater chance than before. This is the fourth kicker.

In our real-world example, a region like Europe, the chance of high-impact hot summers has increased, within only 10 to 15 years, from being a 1-in-50-year event to being a 1-in-5-year event; a truly remarkable increase in risk.

Shifted Mean and extremes
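For readers who like numbers, the effect the ClimateCoach describes can be checked with a quick calculation on the normal distribution. The figures below – a standard deviation of 1 degree, and a ‘high impact’ threshold 2 degrees above the old mean – are invented purely for illustration, not taken from European data:

```python
from math import erf, sqrt

def tail_prob(threshold, mean, sd):
    """Chance that a normally distributed temperature exceeds `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Illustrative numbers only: summer temperature anomalies with a standard
# deviation of 1 degree, and a 'high impact' threshold 2 degrees above
# the 1850 mean.
before = tail_prob(2.0, mean=0.0, sd=1.0)  # about 0.023 (roughly 1-in-44)
after = tail_prob(2.0, mean=1.0, sd=1.0)   # about 0.159 (roughly 1-in-6)

ratio = after / before  # the 1-degree shift makes exceedance ~7x more likely
```

A small shift in the mean produces a disproportionate jump in the probability of crossing a fixed threshold, which is exactly why averages understate what happens in the tails.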

Neighbour: It’s like loading the dice!

ClimateCoach: Exactly. We (humans) are loading the dice. As we add more CO2 to the atmosphere, we load the dice even more. 

Neighbour: Even so, we have learned to cope with very hot summers, haven’t we? If not, we can adapt, surely?

ClimateCoach: To an extent yes, and we’ll have to get better at it in the future. But consider plants and animals, or people who are vulnerable or have to work outside, like the millions from the Indian sub-continent who work in construction in the Middle East. It doesn’t take much (average) warming to make it impossible (for increasingly long periods) to work outside without heat exhaustion. And take plants: a recent paper in Nature Communications showed that crop yields in the USA would be very vulnerable to excessive heat.

Neighbour: Can’t the farmers adapt by having advanced irrigation systems. And didn’t I read somewhere that extra CO2 acts like a fertiliser for plants?

ClimateCoach: To a point, but what that research paper showed was that the warming effect wins out, especially as the period of excessive heat increases, and by the way the fertilisation effect has been overstated. The extended duration of the warming will overwhelm these and other ameliorating factors. This is the fifth kicker.

This can mean crop failures and hence impacts on prices of basic food commodities, even shortages as impacts increase over time.

Neighbour: And what if we get to 2°C?  (meaning a 2°C GMST rise above pre-industrial)

ClimateCoach: Changes are not linear. Take the analogy of car speed and pedestrian fatalities. Above 20 miles per hour the fatality curve rises sharply, because the car’s kinetic energy grows with the square of its speed, and because of vulnerability thresholds in the human frame. Global warming will likewise cross thresholds in both natural and human systems, which have been in balance for a long time, so extremes get increasingly disruptive. Take an impact on a natural species or habitat: one very bad year, and there may be recovery over the following 5-10 years, which is ok if very bad years come 1 in every 25-50 years. But suppose very bad years come 1 in every 5 years? That would leave no time to recover. Nature is awash with non-linearities and thresholds like this.

Neighbour: Is that what is happening with the Great Barrier Reef – I heard something fleetingly on BBC Newsnight the other night?

ClimateCoach: I think that could be a very good example of what I mean. We should talk again soon. Bring friends. If they want some background, you might ask them to have a read of my piece Demystifying Global Warming & Its Implications, which is along the lines of a talk I give.

Putting it together for the person in the street

I have explored one of many possible conversations I could have had. I am sure it could be improved upon, but I hope it illustrates the approach. We should be engaging those people (the majority of the population) who are curious about climate change but have not involved themselves so far, perhaps because they feel a little intimidated by the subject.

When they do ask for help, the first thing they need to understand is that indeed global warming is real, and is demonstrated by those average measures like GMST, and the other ones mentioned such as sea-level rise, ice sheet mass loss, and ocean temperature; not to mention the literally thousands of indicators from the natural world (as documented in the IPCC 5th Assessment Report).

There are also other long-term, unusual sources of evidence to add to this list, as Dr Ed Hawkins has discussed, such as the date on which the cherry blossom flowers in Kyoto, which is trending earlier and earlier. Actually, examples such as these are in many ways easier for people to relate to.

Gardeners the world over can relate to evidence of cherry blossom, wine growers to impacts on wine growing regions in France, etc. These diverse and rich examples are in many ways the most powerful for a lay audience.

The numerous lines of evidence are overwhelming.

So averages are crucial, because they demonstrate a long-term trend.

When we do present GMST, we should make sure we show the right curve. If the aim is to show unequivocal global warming at the surface, then why not show one that reflects the average over a rolling 30-year period – the ‘smoothed’ curve? This avoids getting into debates with ‘contrarians’ over the minutiae of annual variations, which can come across as both abstract and arcane, and puts people off.

This answers the first question people will be asking, simply: “Is the world warming?”. The short answer is “Unequivocally, yes it is”. And that is what the IPCC 5th Assessment Report concluded.

But averages are not the whole story.

There is the second but equally important question: “Why worry about a 1°C rise (in global mean surface temperature)?”

I suspect many people are too coy to ask such a simple question. I think it deserves an answer and the dialogue above tried to provide one.

Here and now, people and ecosystems experience weather, not climate change, and when it is an extreme event, the impacts are viscerally real in time and place, and are far from being apparently arcane debating points.

So while a GMST rise of 1°C sounds like nothing to the untutored reader, when translated into extreme weather events it can be highly significant. The average has been magnified to yield a significant effect, as evidenced by the increasing chance of extreme events of different kinds, in different localities, which can increasingly be attributed to man-made global warming.

The kickers highlighted in the dialogue were:

  • Firstly, people live on land so experience a higher ‘GMST’ rise (this is not to discount the impacts on oceans);
  • Secondly, geographical and meteorological patterns mean that there are a wide range of regional variations;
  • Thirdly, the timing (or date) at which an impact is felt is critical for ecosystems and agriculture, and bad timing will magnify the effect greatly;
  • Fourthly, as the average increases, so does the chance of extremes. The dice are getting loaded, and as we increase CO2, we load the dice more;
  • Fifthly, the duration of an extreme event will overwhelm defences, and an extended duration can cross dangerous thresholds, moving from increasing harm into fatal impacts, such as crop failure.

I have put together a graphic to try to illustrate this sequence of kickers:

[Graphic: the sequence of kickers]

As noted on this graphic (which I used in some climate literacy workshops I ran recently), the same logic used for GMST can be applied to other seemingly ‘small’ changes in global averages such as rainfall, sea-level rise, ocean temperature and ocean acidification. To highlight just two of these other examples:

  • an average global sea-level rise translates into impacts such as extreme storm surges, damaging low-lying cities such as New York and Miami (as recently reported and discussed).
  • an average ocean temperature rise, translates into damage to coral reefs (two successive years of extreme events have caused serious damage to two thirds of the Great Barrier Reef, as a recent study has confirmed).

Even in the relatively benign context of the UK’s temperate climate, the Royal Horticultural Society (RHS), in a report just released, is advising gardeners on climate change impacts and adaptation. The instinctively conservative ‘middle England’ may yet wake up to the realities of climate change when it comes home to roost, and bodies such as the RHS remind them of the reasons why.

The impacts of man-made global warming are already with us, and they will only get worse.

How much worse depends on all of us.

Not such a stupid question

There was a very interesting event hosted by CSaP (Centre for Science and Policy) in Cambridge recently. It introduced some new work being done to bring together climate science and ‘big data analytics’. Dr Emily Shuckburgh’s talk looked precisely at the challenge of understanding local risks; the report of the talk included the following observation:

“Climate models can predict the impacts of climate change on global systems but they are not suitable for local systems. The data may have systematic biases and different models produce slightly different projections which sometimes differ from observed data. A significant element of uncertainty with these predictions is that they are based on our future reduction of emissions; the extent to which is yet unknown.

To better understand present and future climate risks we need to account for high impact but low probability events. Using more risk-based approaches which look at extremes and changes in certain climate thresholds may tell us how climate change will affect whole systems rather than individual climate variables and therefore, aid in decision making. Example studies using these methods have looked at the need for air conditioning in Cairo to cope with summer heatwaves and the subsequent impact on the Egyptian power network.”

This seems to be breaking new ground.

So maybe the proverbial ‘person in the street’ is right to ask stupid questions, because they turn out not to be so stupid after all.

Changing the Conversation

I assume that the person in the street is curious and has lots of questions; and I certainly don’t judge them based on what newspaper they read. That is my experience. We must try to anticipate and answer those questions, and as far as possible, face to face. We must expect simple questions, which aren’t so stupid after all.

We need to change the focus from the so-called ‘deniers’ or ‘contrarians’ – who soak up so much effort and time from hard pressed scientists – and devote more effort to informing the general public, by going back to the basics. By which I mean, not explaining ‘radiative transfer’ and using technical terms like ‘forcing’, ‘anomaly’, or ‘error’, but using plain English to answer those simple questions.

Those embarrassingly stupid questions that will occur to anyone who first encounters the subject of man-made global warming; the ones that don’t seem to get asked and so never get answered.

Maybe let’s start by going beyond averages.

No one will think you small for doing so, not even a Dutchman.

[updated 15th April]


Filed under Climate Science, Essay, Science Communications