Category Archives: Essay

Longish piece that argues a point to a conclusion

Increasing Engineering Complexity and the Role of Software

Two recent stories from the world of ‘big’ engineering got me thinking: the massive delays to the Crossrail project and the fatal crashes of the Boeing 737 Max, both of which seem to have been blighted by issues related to software.

Crossrail, prior to the announcement of delays and overspend, was being lauded as an exemplar of an on-time, on-budget complex project; a real feather in the cap for British engineering. There were documentaries celebrating the amazing care with which the tunnelling was done to avoid damage at the surface, using precise monitoring and accurately positioned webs of hydraulic grouting to stabilise the ground beneath buildings. Even big data was used to interpret signals received from a 3D array of monitoring stations, helping to actively manage operations during tunnelling and construction. A truly awesome example of advanced engineering, on an epic scale.

The post-mortem has not yet been done on why the delays came so suddenly upon the project, although the finger is being pointed not at the physical construction but at the digital one. To operate the rail service there must be advanced control systems in place, and to ensure these operate safely, a huge number of tests need to be carried out ‘virtually’ in the first instance.

Software is something that the senior management of traditional engineering companies are uncomfortable with; in the old days you could hit a machine with a hammer, but not a virtual machine. They know intuitively when someone is telling them nonsense within their chosen engineering discipline; for example, if a junior engineer planned to pour 1,000 cubic metres of cement into a hole and believed it would have set by the morning. But if told that testing of a software sub-system will take 15 days, they wouldn’t have a clue as to whether this was realistic; they might even ask “can we push to get this done in 10 days?”.

In the world of software, when budgets and timelines press, the most dangerous word used in projects is ‘hope’. “We hope to be finished by the end of the month”; “we hope to have that bug fixed soon”; and so on. Testing is often the first victim of pressurised plans. Junior staff say “we hope to finish”, but by the time the message rises up through the management hierarchy to Board level, there is a confident “we will be finished” inserted into the PowerPoint. Anyone asking tough questions might be seen as slowing the project down when progress needs to be demonstrated.

You can blame the poor (software) engineer, but the real fault lies with incurious senior management, who seem to request the answer they want rather than trying to understand the reality on the ground.

The investigations of the Boeing 737 Max tragedy are also unresolved, but of course, everyone is focusing on the narrow question of the technical design issue related to a critical new feature. There is a much bigger issue at work here.

Arguably, Airbus pursued the ‘fly-by-wire’ approach much earlier than Boeing, whose culture has tended to resist over-automation of piloting. Active controls to overcome adverse events have now become part of the design of many modern aircraft, but the issue with the Boeing 737 Max seems to have been that this came along without much in the way of training; and the interaction between the automated controls and the human controls is at the heart of the problem. Was there also a lack of realistic human-centric testing to assess the safety of the combined automated/human control systems? We will no doubt learn this in due course.

Electronics is of course not new to the aerospace industry, but programmable software has grown in importance, and increasingly it seems that growing complexity, and the consequent growth in testing complexity, has perhaps overtaken the abilities of traditional engineering management systems. This is extending to almost every product or project – small and large – as the internet of everything emerges.

This takes me to a scribbled diagram I found in an old notebook – made on a train back in 2014, travelling to London, while I debated the issue of product complexity with a project director for a major engineering project. I have turned this into the Figure below.

[Figure: product ‘design complexity’ versus ‘production automation complexity’, sketched for products ranging from cars and airliners to military platforms]

There are two aspects of complexity identified for products: 

  • Firstly, the ‘design complexity’, which can be thought of as the number of components making up the product, but also the configurability and connectivity of those components. If printed on paper, you can think of how high the pile of paper would be that identified every component, with a description of its configuration and connections. This would apply to the physical aspects but also to the software; and all the implied test cases (see the sketch after this list). There is a rapid escalation in complexity as we move from car to airliner to military platform.
  • Secondly, the ‘production automation complexity’, which represents the level of automation involved in delivering the required products. Cars, as they have become, are seen as having the highest level of production automation complexity.
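
To make the point about rapid escalation concrete, here is a toy sketch (my own illustration, with invented orders of magnitude for the component counts, not figures from the original notebook diagram): the number of possible pairwise interactions, and hence the implied test cases, grows roughly with the square of the number of configurable components.

    from math import comb

    # Component counts are illustrative orders of magnitude only.
    for name, components in [('car', 10_000), ('airliner', 100_000), ('military platform', 1_000_000)]:
        pairwise = comb(components, 2)   # potential component-to-component interactions
        print(f'{name:18s} {components:>9,d} components -> {pairwise:,d} possible pairings')

Ten times the components means roughly a hundred times the possible interactions to design for, and to test.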

You can order a specific build of car, with your desired ‘extras’ and colour, and then later see it travelling down the assembly line with over 50% of the tasks completely automated; the resulting product potentially has a nearly unique selection of options chosen by you. It is at the pinnacle of production automation complexity, but it also has a significant level of design complexity, albeit well short of others shown in the figure.

An aircraft carrier, by contrast, will in each case be significantly different from any other in existence (even when originally conceived as a copy of an existing model) – with changes being made even during its construction – so it does not score so high on ‘production automation complexity’. But in terms of ‘design complexity’ it is extremely high (there are only about 20 aircraft carriers in operation globally, and half of these are in the US Navy, which perhaps underlines this point).

As we add more software and greater automation, the complexity grows, and arguably, the physical frame of the product is the least complex part of the design or production process. 

I wonder: is there a gap between the actual complexity of the final products and an engineering culture that is still heavily weighted towards the physical elements – the bonnet of a car, the hull of a ship, the turbine of a jet engine – and is this gap widening as the software elements grow in scope and ambition?

Government Ministers, like senior managers, will be happy being photographed next to the wing of a new model of airliner – and talk earnestly about workers riveting steel – but what may be more pivotal to success is some software sub-system buried deep in millions of lines of ‘code’; no photo opportunities here.


As we move from traditional linear ‘deterministic’ programming to non-deterministic algorithms, other questions arise about the increasing role of software.

Given incomplete, ambiguous or contradictory inputs, the software must make a choice about how to act in real time. It may have to take a virtual vote between independently written algorithms. It cannot necessarily rely on supplementary data from external sources (“no, you are definitely nose-diving, not stalling!”), for system security reasons if not external data bandwidth reasons.
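
As a purely illustrative sketch of that ‘virtual vote’ idea (sometimes called N-version programming), here is what a majority vote between independently written checks might look like; the sensor names, thresholds and functions are invented for illustration and bear no relation to any real flight system:

    # Three independently written checks on the same situation, using different signals.
    def aoa_vane_1_high(sensors): return sensors['aoa_vane_1'] > 15.0   # angle-of-attack vane 1
    def aoa_vane_2_high(sensors): return sensors['aoa_vane_2'] > 15.0   # separately coded check on vane 2
    def pitch_dropping(sensors):  return sensors['pitch_rate'] < -3.0   # a different physical signal entirely

    def stall_warning(sensors):
        votes = [aoa_vane_1_high(sensors), aoa_vane_2_high(sensors), pitch_dropping(sensors)]
        return sum(votes) >= 2   # act only when a majority of the independent versions agree

    # One faulty vane reading is outvoted by the other two signals:
    print(stall_warning({'aoa_vane_1': 22.0, 'aoa_vane_2': 4.0, 'pitch_rate': 0.5}))   # False

The design choice being illustrated is that no single algorithm, or single sensor, is trusted on its own; the testing burden then shifts to the combinations.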

And so we continue to add further responsibility onto the shoulders of the non-physical elements of the system.

Are Crossrail and the 737 Max representative of a widening gap, reflected in an inability of existing management structures to manage the complexity and associated risks of the software embedded in complex engineering products and projects? 

© Richard W. Erskine, 2019


Filed under Engineering Complexity, Essay, Uncategorized

Experiments in Art & Science

My wife and I were on our annual week-end trip to Cambridge to meet up with my old Darwinian friend Chris and his wife, for the usual round of reminiscing, punting and all that. On the Saturday (12th May) we decided to go to Kettle’s Yard to see the house and its exhibition and take in a light lunch.

As we were about to get our (free) tickets for the house visit, we saw people in T-shirts publicising a Gurdon Institute special event in partnership with Kettle’s Yard that we had been unaware of:

Experiments in Art & Science

A new collaboration between three contemporary artists 

and scientists from the Gurdon Institute, 

in partnership with Kettle’s Yard

The three artists in question were Rachel Pimm, David Blandy and Laura Wilson, each responding to a different strand of the work being done at the labs.

This immediately grabbed our attention and we changed tack, and went to the presentation and discussion panel, intrigued to learn more about the project.

The Gurdon Institute do research exploring the relationship between human disease and development, through all stages of life.  They use the tools of molecular biology, including model systems that share a lot of their genetic make-up with humans. There were fascinating insights into how the environment can influence creatures, in ways that force us to relax Crick’s famous ‘Central Dogma’. But I am jumping into the science of what I saw, and the purpose of this essay is to explore the relationship between art and science.

I was interested to learn if this project was about making the science more accessible – to draw in those who may be overwhelmed by the complexities of scientific methods – and to provide at least some insight into the work of scientists. Or maybe something deeper, that might be more of an equal partnership between art and science, in a two-way exchange of insights.

I was particularly intrigued by Rachel’s exploration of the memory of trauma, and the deep past revealed in the behaviour of worms, and their role as custodians of nature; of Turing’s morphogenesis, fractals and the emergence of self-similarity at many scales. A heady mix of ideas in the early stages of seeking expression.

David’s exploratory animations of moving through neural networks were also captivating.

As the scientists there noted, the purpose of the art may not be so much as to precisely articulate new questions, but rather to help them to stand back and see their science through fresh eyes, and maybe find unexpected connections.

In our modern world it has almost become an article of faith that science and art occupy two entirely distinct ways of seeing the world, but there was a time, as my friend Chris pointed out, when this distinction would not have been recognised.

Even within a particular department – be it mathematics or molecular biology – the division and sub-division of specialities makes it harder and harder for scientists to comprehend even what is happening in the next room. The funding of science demands a kind of determinism in the production of results, which promotes this specialisation. It is a worrying trend, because it is anathema to playfulness and inter-disciplinary collaboration.

This makes the Wellcome Trust’s support for the Gurdon Institute and for this Science-Art collaboration all the more refreshing. 

Some mathematicians have noted that even within the arcane world of number theory, group theory and the rest, it will only be through the combining of mathematical disciplines that some of the long-standing unresolved questions of mathematics will be solved.

In areas such as climate change it was recognised in the late 1950s that we needed to bring together a diverse range of disciplines to get to grips with the causes and consequences of man-made global warming: meteorologists, atmospheric chemists, glaciologists, marine biologists, and so many more.

Complex questions such as land use and human civilisation show how we must broaden this even further, to embrace geography, culture and even history, if we are really to understand how to frame solutions to climate change.

In many ways those (in my day) unloved disciplines such as geography show their true colours as great integrators of knowledge – from human geography to history, from glaciology to food production – and we begin to understand that a little humility is no bad thing when we come to try to understand complex problems. Inter-disciplinary working is not just a fad; it could be the key to unlock complex problems that no single discipline can resolve.

Leonardo da Vinci was both artist and scientist. OK, so not a scientist in the modern sense that David Wootton explores in his book The Invention of Science, heralded in by the Enlightenment, but surely a scientist in the sense of his ability to forensically observe the world and try to make sense of it. His art was part of his method of exploring the world; be it the sinews of the human body or birds in flight, art and science were indivisible.

Since my retirement I have started to take up painting seriously. At school I chose science over art, and over the years I have dabbled in painting but never quite made progress. Now, under the watchful eye of a great teacher, Alison Vickery, I feel I am beginning to find a voice. What she tells me hasn’t really changed, but I am finally hearing her. ‘Observe the scene, more than look at the paper’; ‘Experiment and don’t be afraid of accidents, because often they are happy ones’; the list of helpful aphorisms never leaves me.

A palette knife loaded with pigment scraped across a surface can give just the right level of variegation if not too wet and not too dry; there is a kind of science to it. The effect is to produce a kind of complexity that the human eye seems to be drawn to: imperfect symmetries of the kind we find alluring in nature, even while in mathematics we seek perfection.

Scientists and artists share many attributes.

At the meeting hosted by Kettle’s Yard, there was a discussion on what was common between artists and scientists. My list adapts what was said on the day: 

  • a curiosity and playfulness in exploring the world around them; 
  • ability to acutely observe the world; 
  • a fascination with patterns;
  • not afraid of failure;
  • dedication to keep going; 
  • searching for truth; 
  • deep respect for the accumulated knowledge and tools of their ‘art’; 
  • ability to experiment with new methods or innovative ways of using old methods.

How then are art and science different?  

Well, of course, the key reason is that they are asking different questions and seeking different kinds of answers.

In art, the question is often simply ‘How do I see, how do I frame what I see, and how do I make sense of it?’, and ‘How do I express this in a way that is interesting and compelling?’. If I see a tree, I see the sinews of the trunk and branches, and how the dappled light reveals fragmentary hints as to the form of the tree. I observe the patterns of dark and light in the canopy. A true rendering of colour is of secondary interest (this is not a photograph), except in as much as it helps reveal the complexity of the tree: making different greens by playing with mixtures of two yellows and two blues offers an infinity of greens, which is much more interesting than having tubes of green paint (I hardly ever buy green).

Artists do not have definite answers to unambiguous questions. It is OK for me to argue that J M W Turner was the greatest painter of all time, even while my friend vehemently disagrees. When I look at a painting (or sculpture, or film) and feel an emotional response, there is no need to explain it; even though we often seem obliged to put words to emotions, we know these are mere approximations.

In science (or experimental science at least), we ask specific questions, which can be articulated as a hypothesis that challenges the boundaries of our knowledge. We can then design experiments to test the hypothesis, and if we are successful (in the 1% of times that maybe we are lucky), we will have advanced the knowledge of our subject. Most times this is an incremental learning, building on a body of knowledge. Other times, we may need to break something down before building it up again (but unlike the caricature of science often seen on TV, science is rarely about tearing down a whole field of knowledge, and starting from scratch). 

When I see the tree, I ask, why are the leaves of Copper Beech trees deep purple in colour rather than green? Are the energy levels in the chlorophyll molecule somehow changed to produce a different colour or is a different molecule involved?

In science, the objective is to find definite answers to definite questions. That is not to say that the definite answer is in itself a complete answer to all the questions we have. When Schrodinger asked the question ‘What is Life?’ the role and structure of DNA were not known, but there were questions that he could ask and find answers to. This is the wonder of science; this stepping stone quality.

I may find the answer as to why the Copper Beech tree’s leaves are not green, but what of the interesting question of why leaves change colour in autumn and how they change, not from one state (green) to another (brown), but through a complex process that reveals variegations of colour as Autumn unfolds? And what of a forest? How does a mature forest evolve from an immature one; how do pioneer trees give way to a complex ecology of varyingly aged trees and species over time? A leaf begs a question, and a forest may end up being the answer to a bigger question. Maybe we find that art, literature and science are in fact happy bedfellows after all.

As Feynman said, I can be both fascinated by something in the natural world (such as a rainbow) while at the same time seeking a scientific understanding of the phenomenon.

Nevertheless, it seems that while artists and scientists have so much in common, their framings struggle to align, and that in a way is a good thing. 

There is great work done in the illustration of scientific ideas, in textbooks and increasingly in scientific papers. I saw a recent paper on the impact of changes to the stratospheric polar vortex on climate, which was beautifully illustrated. But this is illustration, intended to help articulate those definite questions and answers. It is not art.

So what is the purpose of bringing artists into laboratories to inspire them; to get their response to the work being done there?

The answer, as they say, is on the tin (of this Gurdon Institute collaborative project): It is an experiment.

The hypothesis is that if you take three talented and curious young artists and show them some leading edge science that touches on diverse subjects, good things happen. Art happens.

Based on the short preview of the work being done which I attended, good things are already happening and I am excited to see how the collaboration evolves.

Here are some questions inspired in my mind by the discussion:

  • How do we understand the patterns in form that Turing wrote about, in the light of the latest research? Can we explore the ‘emergence of form’ as a topic that is interesting, artistically and scientifically?
  • In the world of RNA epigenetics, can what was previously thought of as ‘junk DNA’ play a part in the life of creatures, even humans, in the environment they live in? Can we explore the deep history of our shared genotype, even given our divergent phenotypes? Will the worm teach us how to live better with our environment?
  • Our identity is formed by memory, and as we get older we begin to lose our ability to make new memories, while older ones often stay fast (though not always). Surely here there is a rich vein for exploring the artistic and scientific responses to diseases like Alzheimer’s?

Scientists are dedicated and passionate about their work, like artists. A joint curiosity drives this new collaborative Gurdon Institute project.

The big question for me is this: can art reveal to scientists new questions, or new framings of old questions, that will advance the science in novel ways? Can unexpected connections be revealed or collaborations be inspired?

I certainly hope so.

P.S. the others in my troop did get to do the house visit after all, and it was wonderful, I hear. I missed it because I was too busy chatting to the scientists and artists after the panel discussion; and I am so grateful to have spent time with them.

(c) Richard W. Erskine, 2018

 


Filed under Art & Science, Essay, Molecular Biology, Uncategorized

Animating IPCC Climate Data

The IPCC (Intergovernmental Panel on Climate Change) is exploring ways to improve the communication of its findings, particularly to a more general  audience. They are not alone in having identified a need to think again about clear ‘science communications’. For example, the EU’s HELIX project (High-End Climate Impacts and Extremes), produced some guidelines a while ago on better use of language and diagrams.

Coming out of the HELIX project, and through a series of workshops, a collaboration with the Tyndall Centre and Climate Outreach has produced a comprehensive guide (Guide With Practical Exercises to Train Researchers In the Science of Climate Change Communication).

The idea is not to say ‘communicate like THIS’ but more to share good practice amongst scientists and to ensure all scientists are aware of the communication issues, and then to address them.

Much of this guidance concerns the ‘soft’ aspects of communication: how the communicator views themself; understanding the audience; building trust; coping with uncertainty; etc.

Some of this reflects ideas that are useful not just to scientific communication, but almost any technical presentation in any sector, but that does not diminish its importance.

This has now been distilled into a Communications Handbook for IPCC Scientists; not an official publication of the IPCC but a contribution to the conversation on how to improve communications.

I want to take a slightly different tack, which is not a response to the handbook per se, but covers a complementary issue.

In many years of being involved in presenting complex material (in my case, in enterprise information management) to audiences unfamiliar with the subject at hand, I have often been aware of the communication potential but also risks of diagrams. They say that a picture is worth a thousand words, but this is not true if you need a thousand words to explain the picture!

The unwritten rules relating to the visual syntax and semantics of diagrams are a fascinating topic, and one which many – most notably Edward Tufte – have explored. In chapter 2 of his insightful and beautiful book Visual Explanations, Tufte argues:

“When we reason about quantitative evidence, certain methods for displaying and analysing data are better than others. Superior methods are more likely to produce truthful, credible, and precise findings. The difference between an excellent analysis and a faulty one can sometimes have momentous consequences.”

He then describes how data can be used and abused. He illustrates this with two examples: the 1854 Cholera epidemic in London and the 1986 Challenger space shuttle disaster.

Tufte has been highly critical of the over reliance on Powerpoint for technical reporting (not just presentations) in NASA, because the form of the content degrades the narrative that should have been an essential part of any report (with or without pictures). Bulletized data can destroy context, clarity and meaning.

There could be no more ‘momentous consequences’ than those that arise from man-made global warming, and therefore, there could hardly be a more important case where a Tuftian eye, if I may call it that, needs to be brought to bear on how the information is described and visualised.

The IPCC, and the underlying science on which it relies, is arguably the greatest scientific collaboration ever undertaken, and rightly recognised with a Nobel Prize. It includes a level of interdisciplinary cooperation that is frankly awe-inspiring; unique in its scope and depth.

It is not surprising therefore that it has led to very large and dense reports, covering the many areas that are unavoidably involved: the cryosphere, sea-level rise, crops, extreme weather, species migration, etc. It might seem difficult to condense this material without loss of important information. For example, Volume 1 of the IPCC Fifth Assessment Report, which covered the Physical Basis of Climate Change, was over 1500 pages long.

Nevertheless, the IPCC endeavours to help policy-makers by providing them with summaries and also a synthesis report, to provide the essential underlying knowledge that policy-makers need to inform their discussions on actions in response to the science.

However, in its summary reports the IPCC will often reuse key diagrams, taken from the full reports. There are good reasons for this, because the IPCC is trying to maintain mutual consistency between different products covering the same findings at different levels of detail.

This exercise is fraught with risks of over-simplification or misrepresentation of the main report’s findings, and this might limit the degree to which the IPCC can become ‘creative’ with compelling visuals that ‘simplify’ the original diagrams. Remember too that these reports need to be agreed by reviewers from national representatives, and the language will often seem to combine the cautiousness of a scientist with the dryness of a lawyer.

So yes, it can be problematic to use artistic flair to improve the comprehensibility of the findings without risking the loss of the nuance and caution that are a hallmark of science. The countervailing risk is that people do not really ‘get it’, and do not appreciate what they are seeing.

We have seen, with the Challenger reports, that people did not appreciate the issue with the O-rings when key facts were buried in five levels of indented bullet points in a tiny font, or hidden in plain sight in a figure so complex that the key findings were lost in a fog of detail.

That is why any attempt to improve the summaries for policy makers and the general public must continue to involve those who are responsible for the overall integrity and consistency of the different products, and not simply be hived off to a separate group of ‘creatives’ who would lack knowledge of, and insight into, the nuance that needs to be respected. But those complementary skills – data visualisers, graphics artists, and others – need to be included in this effort to improve science communications. There is also a need for those able to critically evaluate the pedagogic value of the output (along the lines of Tufte), to ensure the products really inform, and do not confuse.

Some individuals have taken to social media to present their own examples of how to present information, often employing animation (something that is clearly not possible for the printed page, or its digital analogue, a PDF document). Perhaps the most well-known example to date is Professor Ed Hawkins’ spiral picture showing the increase in global mean surface temperature:

[Animation: Ed Hawkins’ ‘climate spiral’ of global mean surface temperature]

This animation went viral, and was even featured as part of the Rio Olympics Opening Ceremony. This and other spiral animations can be found at the Climate Lab Book site.
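
For anyone curious how such a spiral is put together, here is a minimal sketch in Python (my own illustration, using entirely synthetic numbers rather than the real observational datasets, such as HadCRUT, that the published animations are built from): monthly values are plotted in polar coordinates, one ring per year, coloured from early to recent years.

    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(1850, 2018)
    theta = 2 * np.pi * np.arange(12) / 12                    # one angle per month
    # Synthetic anomalies: a slow warming trend plus a small seasonal wobble (stand-in for real data)
    trend = 0.008 * (years - 1850)
    anomalies = trend[:, None] + 0.1 * np.sin(theta)[None, :]

    ax = plt.subplot(projection='polar')
    for i, year in enumerate(years):
        # colour each year's ring from cool (early) to warm (recent)
        ax.plot(theta, anomalies[i] + 1.0, color=plt.cm.plasma(i / years.size), lw=0.6)
    ax.set_xticks(theta)
    ax.set_xticklabels(['J', 'F', 'M', 'A', 'M', 'J', 'J', 'A', 'S', 'O', 'N', 'D'])
    ax.set_yticklabels([])
    ax.set_title('Synthetic "climate spiral" (illustration only)')
    plt.show()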

There are now a number of other great producers of animations. Here follows a few examples.

Here, Kevin Pluck (@kevpluck) illustrates the link between rising carbon dioxide levels and rising mean surface temperature since 1958 (the year when direct and continuous measurements of carbon dioxide were pioneered by Keeling).

Kevin Pluck has many other animations which are informative, particularly in relation to sea ice.

Another example, from Antti Lipponen (@anttilip), visualises the increase in surface warming from 1900 to 2017, by country, grouped according to continent. We see the increasing length/redness of the radial bars, showing an overall warming trend, but at different rates according to region and country.

A final example along the same lines is from John Kennedy (@micefearboggis), which is slightly more elaborate but rich in interesting information. It shows temperature changes over the years, at different latitudes, for both ocean (left side) and land (right side). The longer/redder the bar the higher the increase in temperature at that location, relative to the temperature baseline at that location (which scientists call the ‘anomaly’). This is why we see the greatest warming in the Arctic, as it is warming proportionally faster than the rest of the planet; this is one of the big takeaways from this animation.

These examples of animation are clearly not dumbing down the data; far from it. They improve the chances of the general public engaging with the data. This kind of animation provides an entry point for those wanting to learn more. They can then move on to a narrative treatment, placing the animation in context, confident that they have grasped the essential information.

If the IPCC restricts itself to static media (i.e. PDF files), it will miss many opportunities to enliven the data in the ways illustrated above that reveal the essential knowledge that needs to be communicated.

(c) Richard W. Erskine, 2018


Filed under Climate Science, Essay, Science Communications

Matt Ridley shares his ignorance of climate science (again)

Ridley trots out a combination of long-refuted myths that are much loved by contrarians; bad or crank science; or misunderstandings as to the current state of knowledge. In the absence of a Climate Feedback dissection of Ridley’s latest opinion piece, here is my response to some of his nonsense …

Here are five statements he makes that I will refute in turn.

1. He says: Forty-five years ago a run of cold winters caused a “global cooling” scare.

I say:

Stop repeating this myth, Matt! A few articles in popular magazines in the 70s speculated about an impending ice age, and so dissemblers like Ridley state or imply that this was the scientific consensus at the time (snarky message: silly scientists can’t make their minds up). This is nonsense, but it is so popular amongst contrarians that it is repeated frequently to this day.

If you want to know what scientists were really thinking and publishing in scientific papers, read “The Myth of the 1970s Global Cooling Scientific Consensus”, by Thomas Peterson et al (2008), American Meteorological Society.

Warming, not cooling was the greater concern. It is astonishing that Ridley and others continue to repeat this myth. Has he really been unable – in the ten years since it was published – to read this oft cited article and so disabuse himself of the myth? Or does he deliberately repeat it because he thinks his readers are too lazy or too dumb to check the facts? How arrogant would that be?

2. He says: Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true.

I say:

Oh dear, he cites the work of Zharkova, saying he is not persuaded, but then talks of ‘slowly slipping into a proper ice age’. A curious non sequitur. While we are on Zharkova, her work suffered from being poorly communicated.

And quantitatively, her work has no relevance to the current global warming we are observing. The solar minimum might create a -0.3C contribution over a limited period, but that would hardly put a dent in the +0.2C per decade rate of warming.

But, let’s return to the ice age cycle. What Ridley obdurately refuses to acknowledge is that the current warming is due to less than 200 years of man-made changes to the Earth’s atmosphere, which have raised CO2 to levels not seen for nearly 1 million years (equal to 10 ice age cycles) and are raising the global mean surface temperature at an unprecedented rate.

Therefore, talking about the long slow descent over thousands of years into an ice age that ought to be happening (based on the prior cycles) is frankly bizarre, especially given that the man-made warming is now very likely to delay a future ice age. As a paper by Ganopolski et al in Nature (2016) estimated:

“Additionally, our analysis suggests that even in the absence of human perturbations no substantial build-up of ice sheets would occur within the next several thousand years and that the current interglacial would probably last for another 50,000 years. However, moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years.”

And why stop there, Matt? Our expanding sun will boil away the oceans in a billion years time, so why worry about Brexit; and don’t get me started on the heat death of the universe. It’s hopeless, so we might as well have a great hedonistic time and go to hell in a handcart! Ridiculous, yes, but no less so than Ridley conflating current man-made global warming with a far, far off ice age, that recedes with every year we fail to address man-made emissions of CO2.

3. He says: Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period, over 100,000 years ago, carbon dioxide levels remain high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward.

I say:

The ice ages have indeed been a focus of study since Louis Agassiz coined the term in 1837, and there have been many twists and turns in our understanding of them even up to the present day, but Ridley’s over-simplification shows his ignorance of the evolution of this understanding.

The Milankovitch Cycles are key triggers for entering an ice age (and indeed, leaving it), but the changes in atmospheric concentrations of carbon dioxide drive the cooling (entering) and warming (leaving) of an ice age, something that was finally accepted by the science community following Hays et al’s 1976 seminal paper (Variations in the Earth’s Orbit: Pacemaker of the Ice Ages), over 50 years after Milankovitch first did his work.

But the ice core data that Ridley refers to confirms that carbon dioxide is the driver, or ‘control knob’, as Professor Richard Alley explains it; and if you need a very readable and scientifically literate history of our understanding of the ice cores and what they are telling us, his book “The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and Our Future” is a peerless and unputdownable introduction.

Professor Alley offers an analogy. Suppose you take out a small loan, but then interest is added, and keeps being added, so that after some years you owe a lot of money. Was it the small loan, or the interest rate, that created the large debt? You might say both, but it is certainly ridiculous to say that the interest rate is unimportant because the small loan came first.

But despite its complexity, and despite the fact that the so-called ‘lag’ does not refute the dominant role of CO2, scientists are interested in explaining such details and have indeed studied the ‘lag’. In 2012, Shakun and others published a paper doing just that: “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Jeremy D. Shakun et al, Nature 484, 49–54, 5 April 2012). Since you may struggle to see a copy of this paywalled paper, a plain-English summary is available.

Those who read headlines and not contents – like the US politician Joe Barton – might think this paper is challenging the dominant role of CO2, but the paper does not say that. It showed that some warming occurred prior to increased CO2, but this is explained as an interaction between the Northern and Southern hemispheres, following the original Milankovitch ‘forcing’.

The role of the oceans is crucial in fully explaining the temperature record, and can add significant delays in reaching a new equilibrium. There are interactions between the oceans in Northern and Southern hemispheres that are implicated in some abrupt climate change events (e.g.  “North Atlantic ocean circulation and abrupt climate change during the last glaciation”, L. G. Henry et al, Science,  29 July 2016 • Vol. 353 Issue 6298).

4. He says: Here is an essay by Willis Eschenbach discussing this issue. He comes to five conclusions as to why CO2 cannot be the main driver

I say:

So Ridley quotes someone with little or no scientific credibility who has managed to publish in Energy & Environment. Its editor Dr Sonja Boehmer-Christiansen admitted that she was quite partisan in seeking to publish ‘sceptical’ articles (which actually means, contrarian articles), as discussed here.

Yet, Ridley extensively quotes this low grade material, but could have chosen from hundreds of credible experts in the field of climate science. If he’d prefer ‘the’ textbook that will take him through all the fundamentals that he seems to struggle to understand, he could try Raymond Pierrehumbert’s seminal textbook “Principles of Planetary Climate”. But no. He chooses Eschenbach, with a BA in Psychology.

Ridley used to put on the appearance of interest in rational discourse, albeit flying in the face of the science. That mask has now fully and finally dropped, as he is now channelling crank science. This is risible.

5. He says: The Antarctic ice cores, going back 800,000 years, then revealed that there were some great summers when the Milankovich wobbles should have produced an interglacial warming, but did not. To explain these “missing interglacials”, a recent paper in Geoscience Frontiers by Ralph Ellis and Michael Palmer argues we need carbon dioxide back on the stage, not as a greenhouse gas but as plant food.

I say:

The paper is 19 pages long, which is unusual in today’s literature. The case made is intriguing but not convincing, and I leave it to the experts to properly critique it. It takes a complex system, in which we know (for example) that large movements of heat in the ocean have played a key role in variability, and tries to infer that dust is the primary driver explaining the interglacials, while discounting the role of CO2 as a greenhouse gas.

The paper curiously does not cite the seminal paper by Hays et al (1976), yet cites a paper by Willis Eschenbach published in Energy & Environment (which I mentioned earlier). All this raised concerns in my mind about this paper.

Extraordinary claims require extraordinary evidence and scientific dialogue, and it is really too early to claim that this paper is something or nothing; even if that doesn’t mean waiting the 50 odd years that Milankovitch’s work had to endure, before it was widely accepted. Good science is slow, conservative, and rigorous, and the emergence of a consilience on the science of our climate has taken a very long time, as I explored in a previous essay.

Ralph Ellis on his website (which shows that his primary interest is the history of the life and times of Jesus) states:

“Ralph has made a detour into palaeoclimatology, resulting in a peer-review science paper on the causes of ice ages”, and after summarising the paper says,

“So the alarmists were right about CO2 being a vital forcing agent in ice age modulation – just not in the way they thought”.

So was this paper an attempt to clarify what was happening during the ice ages, or a contrivance, to take a pot shot at carbon dioxide’s influence on our contemporary climate change?

The co-author, Michael Palmer, is a biochemist, with no obvious background in climate science and provided “a little help” on the paper according to his website.

But on a blog post comment he offers a rather dubious extrapolation from the paper:

“The irony is that, if we should succeed in keeping the CO2 levels high through the next glacial maximum, we would remove the mechanism that would trigger the glacial termination, and we might end up (extreme scenario, of course) another Snowball Earth.”,

They both felt unembarrassed participating in comments on the denialist blog site WUWT. Quite the opposite, they gleefully exchanged messages with a growing band of breathless devotees.

But even if my concerns about the apparent bias and amateurism of this paper were allayed, the conclusion (which Ridley and Ellis clearly hold to) that the current increases in carbon dioxide are nothing to be concerned about does not follow from this paper. It is a non sequitur.

If I discovered a strange behaviour like, say, the Coriolis force way back when, the first conclusion would not be to throw out Newtonian mechanics.

The physics of CO2 is clear. How the greenhouse effect works is clear, including for the conditions that apply on Earth, with all remaining objections resolved since no later than the 1960s.

We have a clear idea of the warming effect of increased CO2 in the atmosphere including short term feedbacks, and we are getting an increasingly clear picture of how the Earth system as a whole will respond, including longer term feedbacks.  There is much still to learn of course, but nothing that is likely to require jettisoning fundamental physics.

The recent excellent timeline published by Carbon Brief showing the history of the climate models, illustrates the long slow process of developing these models, based on all the relevant fundamental science.

This history has shown how different elements have been included in the models as the computing power has increased – general circulation, ocean circulation, clouds, aerosols, carbon cycle, black carbon.

I think it is really because Ridley still doesn’t understand how an increase from 0.03% to 0.04% over 150 years or so, in the atmospheric concentration of CO2, is something to be concerned about (or as I state it in talks, a 33% rise in the principal greenhouse gas; which avoids Ridley’s deliberately misleading formulation).

He denies that he denies the Greenhouse Effect, but every time he writes, he reveals that really, deep down, he still doesn’t get it. To be as generous as I can to him, he may suffer from a perpetual state of incredulity (a common condition I have written about before).

Conclusion

In an interview he gave to Russ Roberts at EconTalk.org in 2015, Matt Ridley reveals his inability to grasp even the most basic science:

“So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”

He seems to be saying that the water vapour in the form of clouds – some high altitude, some low – has opposite effects (so far, so good), so the warming should be only 1C, just the carbon dioxide component, from a doubling of CO2 concentrations (so far, so bad). The clouds represent a condensed (but not yet precipitated) phase of water in the atmosphere, but he seems to have overlooked that water also comes in a gaseous phase (not clouds). It is that gaseous phase that is providing the additional warming, bringing the overall warming to 3C.

The increase in water vapour concentration follows because “a well-established physical law (the Clausius-Clapeyron relation) determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2).
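
For the numerically curious, that ‘about 7% per 1°C’ figure can be checked in a few lines of Python using a standard Magnus-type approximation to the Clausius-Clapeyron relation (a sketch of my own, not the IPCC’s calculation):

    from math import exp

    def saturation_vapour_pressure(t_celsius):
        # Magnus-type approximation to the Clausius-Clapeyron relation (result in hPa)
        return 6.112 * exp(17.67 * t_celsius / (t_celsius + 243.5))

    e15, e16 = saturation_vapour_pressure(15.0), saturation_vapour_pressure(16.0)
    print(f'Increase per 1C of warming: {100 * (e16 / e15 - 1):.1f}%')   # close to the quoted ~7%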

T.C. Chamberlin, writing in 1905 to Charles Abbott, explained the feedback role of water vapour in a way that is very clear:

“Water vapour, confessedly the greatest thermal absorbent in the atmosphere, is dependent on temperature for its amount, and if another agent, as CO2 not so dependent, raises the temperature of the surface, it calls into function a certain amount of water vapour, which further absorbs heat, raises the temperature and calls forth more [water] vapour …”

(Ref. “Historical Perspectives On Climate Change” by James Fleming, 1998)

It is now 113 years since Chamberlin wrote those words, but poor Ridley is still struggling to understand basic physics, so instead regales us with dubious science intended to distract and confuse.

When will Matt Ridley stop feeling the need to share his perpetual incredulity and obdurate ignorance with the world?

© Richard W. Erskine, 2018


Filed under Climate Science, Essay

Solving Man-made Global Warming: A Reality Check

Updated 11th November 2017 – Hopeful message following Figure added.

It seems that we are all – or most of us – in denial about the reality of the situation we are in with relation to the need to address global warming now, rather than sometime in the future.

We display seesaw emotions, optimistic that emissions have been flattening, but aghast that we had a record jump this year (which was predicted, but was news to the news people). It seems that people forget that if we have slowed from 70 to 60 miles per hour while approaching a cliff edge, the result will be the same, albeit deferred a little. We actually need to slam on the brakes and stop! Indeed, due to critical erosion of the cliff edge, we will even need to go into reverse.

I was chatting with a scientist at a conference recently:

Me: I think we need to accept that a wide portfolio of solutions will be required to address global warming. Pacala and Socolow’s ‘wedge stabilization’ concept is still pertinent.

Him: People won’t change; we won’t make it. We are at over 400 parts per million and rising, and have to bring this down, so some artificial means of carbon sequestration is the only answer.

This is just one example of the many conversations of a similar structure that dominate the blogosphere. It’s all about the future. Future impacts, future solutions. In its more extreme manifestations, people engage in displacement behaviour, talking about any and every solution that is unproven in order to avoid focusing on the proven solutions we have today.

Yet nature is telling us that the impacts are now, and surely the solutions should be too; at least for implementation plans in the near term.

Professors Kevin Anderson and Alice Larkin of the Tyndall Centre have been trying to shake us out of our denial for a long time now. The essential argument is that some solutions are immediately implementable while others are some way off, and others so far off they are not relevant to the time frame we must consider (I heard a leader in Fusion Energy research on the BBC who sincerely stated his belief that it is the solution to climate change; seriously?).

The immediately implementable solution that no politician dares talk about is degrowth – less buying stuff, less travel, less waste, etc. All doable tomorrow, and since the top 10% of emitters globally are responsible for 50% of emissions (see Extreme Carbon Inequality, Oxfam), the quickest and easiest solution is for that 10%, or let’s say 20%, to halve their emissions, and do so within a few years. It’s also the most ethical thing to do.
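
The arithmetic behind this is worth making explicit; a tiny sketch (illustrative only, using the shares quoted above from the Oxfam report) shows why a change in the behaviour of a small group moves the global total so much:

    top10_share  = 0.50   # the top 10% of emitters account for ~50% of global emissions
    cut_fraction = 0.50   # suppose that group halves its own emissions
    print(f'Global reduction: {top10_share * cut_fraction:.0%}')   # 25% cut from one group's behaviour change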

Anderson & Larkin’s credibility is enhanced by the fact that they practise what they advocate; see, for example, this approach to reducing the air miles associated with scientific conferences:

[Figure: an approach to reducing the air miles associated with scientific conferences]

Some people in the high energy consuming “West” have proven it can be done. Peter Kalmus, in his book Being the Change: Live Well and Spark a Climate Revolution, describes how he went from being a not untypical US citizen responsible for 19 tonnes of carbon dioxide emissions per year, to now something like 1 tonne; which is one fifth of the global average! It is all about what we do, how we do it, and how often we do it.

Anderson and Larkin have said that even just reaching half the European average, at least, would be a huge win: “If the top 10% of emitters were to reduce their emissions to the average for the EU, that would mean a 33% reduction in global emissions” (Kevin Anderson, Paris, Climate & Surrealism: how numbers reveal another reality, Cambridge Climate Lecture Series, March 2017).

This approach – a large reduction in consumption (in all its forms) amongst high emitters in all countries, but principally the ‘west’ – could be implemented in the short term (the shorter the better but let’s say, by 2030). Let’s call these Phase 1 solutions.

The reason we love to debate and argue about renewables and intermittency and so on is that it really helps to distract us from the blinding simplicity of the degrowth solution.

It is not that a zero or low carbon infrastructure is not needed, but that the time to fully implement it is too long – even if we managed to do it in 30 years’ time – to address the issue of rising atmospheric greenhouse gases. This has already started, but from a low base, and will have a large impact in the medium term (by 2050). Let’s call these Phase 2 solutions.

Project Drawdown provides many solutions relevant to both Phase 1 and 2.

And as for the discussion that started this essay: artificial carbon sequestration methods, such as BECCS and several others (explored in Atmosphere of Hope by Tim Flannery), will be needed, but it is again about timing. These solutions will be national, regional and international initiatives, and are mostly unproven at present; they live in the longer term, beyond 2050. Let’s call these Phase 3 solutions.

I am not here wanting to get into geo-engineering solutions, a potential Phase 4. A Phase 4 is predicated on Phases 1 to 3 failing or failing to provide sufficient relief. However, I think we would have to accept that if, and I personally believe only if, there was some very rude shock (an unexpected burp of methane from the Arctic, and signs of a catastrophic feedback), leading to an imminent > 3C rise in global average temperature (as a possible red-line), then some form of geo-engineering would be required as a solution of last resort. But for now, we are not in that place. It is a matter for some feasibility studies but not policy and action. We need to implement Phase 1, 2 and 3 – all of which will be required – with the aim of avoiding a Phase 4.

I have illustrated the three phases in the figure which follows (Adapted from Going beyond dangerous climate change: does Paris lock out 2°C? Professors Kevin Anderson & Alice Bows-Larkin, Tyndall Centre – presentation to School of Mechanical Aerospace & Civil Engineering University of Manchester February 2016, Douglas, Isle of Man).

My adapted figure is obviously a simplification, but we need some easily digestible figures to help grapple with this complex subject; and apologies in advance to Anderson & Larkin if I have taken liberties with my colourful additions and annotations to their graphic (while trying to remain true to its intent).

[Figure: the three phases of solutions, adapted from Anderson & Bows-Larkin’s presentation]

A version of this slide on Twitter (@EssaysConcern) seemed to resonate with some people, as a stark presentation of our situation.

For me, it is actually a rather hopeful image if, like me, you believe in the capacity of people to work together to solve problems, something we so often see in times of crisis; and this is a crisis, make no mistake.

While the climate inactivists promote a fear of big Government controlling our lives, the irony here is that Phase 1 is all about individuals and communities, and we can do this with or without Government support. Phase 2 could certainly do with some help in the form of enabling legislation (such as a price on carbon), but it does not have to rely on top-down solutions, although some are (industrial-scale energy storage). Only when we get to Phase 3 do we see national solutions dominating, and then only because we have an international consensus to execute these major projects; that won’t be big government, it will be responsible government.

The message of Phases 1 and 2 is … don’t blame the conservatives, don’t blame the loss of feed-in tariffs … just do it! They can’t stop you!

They can’t force you to boil a full kettle when you only need one mug of tea. They can’t force you to drive to the smoke when the train will do. They can’t force you to buy new stuff when the old can be repaired at a repair cafe.

And if your community wants a renewable energy scheme, then progressives and conservatives can find common cause, despite their other differences. Who doesn’t want greater community control of their energy, to compete with monopolistic utilities?

I think the picture contains a lot of hope, because it puts you, and me, back in charge. And it sends a message to our political leaders, that we want this high on the agenda.

(c) Richard W. Erskine, 2017

 

 


Filed under Essay, Global Warming Solutions

Incredulity, Credulity and the Carbon Cycle

Incredulity, in the face of startling claims, is a natural human reaction and is right and proper.

When I first heard the news about the detection on 14th September 2015 of the gravitational waves from two colliding black holes by the LIGO observatories I was incredulous. Not because I had any reason to disagree with the predictions of Albert Einstein that such waves should exist, rather it was my incredulity that humans had managed to detect such a small change in space-time, much smaller than the size of a proton.

How, I pondered, was the ‘noise’ from random vibrations filtered out? I had to do some studying, and discovered the amazing engineering feats used to isolate this noise.

What is not right and proper is to claim that personal incredulity equates to an error in the claims made. If I perpetuate my incredulity by failing to ask any questions, then I am the one who is culpable.

And if I were to ask questions then simply ignore the answers, and keep repeating my incredulity, who is to blame? If the answers have been sufficient to satisfy everyone skilled in the relevant art, how can a non expert claim to dispute this?

Incredulity is a favoured tactic of many who dispute scientific findings in many areas, and global warming is not immune from the clinically incredulous.

The sadly departed Professor David MacKay gives an example in his book Sustainable Energy Without the Hot Air (available online):

The burning of fossil fuels is the principal reason why CO2 concentrations have gone up. This is a fact, but, hang on: I hear a persistent buzzing noise coming from a bunch of climate-change inactivists. What are they saying? Here’s Dominic Lawson, a columnist from the Independent:  

“The burning of fossil fuels sends about seven gigatons of CO2 per year into the atmosphere, which sounds like a lot. Yet the biosphere and the oceans send about 1900 gigatons and 36000 gigatons of CO2 per year into the atmosphere – … one reason why some of us are sceptical about the emphasis put on the role of human fuel-burning in the greenhouse gas effect. Reducing man-made CO2 emissions is megalomania, exaggerating man’s significance. Politicians can’t change the weather.”

Now I have a lot of time for scepticism, and not everything that sceptics say is a crock of manure – but irresponsible journalism like Dominic Lawson’s deserves a good flushing.

MacKay goes on to explain Lawson’s error:

The first problem with Lawson’s offering is that all three numbers that he mentions (seven, 1900, and 36000) are wrong! The correct numbers are 26, 440, and 330. Leaving these errors to one side, let’s address Lawson’s main point, the relative smallness of man-made emissions. Yes, natural flows of CO2 are larger than the additional flow we switched on 200 years ago when we started burning fossil fuels in earnest. But it is terribly misleading to quantify only the large natural flows into the atmosphere, failing to mention the almost exactly equal flows out of the atmosphere back into the biosphere and the oceans. The point is that these natural flows in and out of the atmosphere have been almost exactly in balance for millennia. So it’s not relevant at all that these natural flows are larger than human emissions. The natural flows cancelled themselves out. So the natural flows, large though they were, left the concentration of CO2 in the atmosphere and ocean constant, over the last few thousand years.

Burning fossil fuels, in contrast, creates a new flow of carbon that, though small, is not cancelled.
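MacKay's point can be made concrete with a little arithmetic. The sketch below, in Python, uses round numbers of the same order as MacKay's corrected figures purely for illustration (and it idealises the natural flows as exactly cancelling, which they do only approximately); the point is that balanced flows, however large, leave the atmospheric stock unchanged, while a small unbalanced flow accumulates year after year.

    # Illustrative only: round numbers of the same order as MacKay's corrected
    # figures (Gt of CO2 per year), with the natural flows idealised as cancelling.
    natural_in = 440 + 330    # biosphere + oceans -> atmosphere
    natural_out = 440 + 330   # atmosphere -> biosphere + oceans (almost exactly in balance)
    fossil_in = 26            # fossil fuel burning, with no matching flow out

    extra_in_atmosphere = 0   # accumulated CO2 above the pre-industrial stock (Gt)
    for year in range(50):
        extra_in_atmosphere += (natural_in - natural_out) + fossil_in

    print(extra_in_atmosphere)  # 1300 Gt after 50 years, despite the 'huge' natural flows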

I offer this example in some detail as an exemplar of the problem often faced in confronting incredulity.

It is natural that people will often struggle with numbers, especially large abstract sounding numbers. It is easy to get confused when trying to interpret numbers. It does not help that in Dominic Lawson’s case he is ideologically primed to see a ‘gotcha’, where none exists.

Incredulity, such as Lawson's, is perfectly OK when initially confronting a claim that one is sceptical of; we cannot all be informed on every topic. But why then not pick up the phone, or email a professor with skills in the particular art, to get them to sort out your confusion? Or even read a book, or browse the internet? But of course, Dominic Lawson, like so many others, suffers from a syndrome that many have identified. Charles Darwin noted in The Descent of Man:

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

It is this failure to display any intellectual curiosity which is unforgivable in those in positions of influence, such as journalists or politicians.

However, incredulity has a twin brother, its mirror image: credulity. And I want to take an example that also involves the carbon cycle.

In a politically charged subject, or one where there is a topic close to one’s heart, it is very easy to uncritically accept a piece of evidence or argument. To be, in the technical sense, a victim of confirmation bias.

I have been a vegetarian since 1977, and I like the idea of organic farming, preferably local and fresh. So I have been reading Graham Harvey’s book Grass Fed Nation. I have had the pleasure of meeting Graham, as he was presenting a play he had written which was performed in Stroud. He is a passionate and sincere advocate for his ideas on regenerative farming, and I am sure that much of what he says makes sense to farmers.

The recently reported research from Germany of a 75% decline in flying insect biomass is deeply worrying, and many are pointing the finger at modern farming and land-use methods.

However, I found something in Harvey's otherwise interesting book that made me incredulous, on the question of carbon.

Harvey presents the argument that, firstly, we can't do anything to reduce carbon emissions from industry and the like; but, secondly, that there is no need to worry, because soils can take up all the annual emissions with ease; and further, that all of the extra carbon emitted in the industrial era could be absorbed in soils over the coming years.

He relies a lot on Allan Savory's work, famed for his visionary but contentious TED talk. But he also references other work that makes similar claims.

I would be lying if I said there was not a part of me that wanted this to be true. I was willing it on. But I couldn’t stop myself … I just had to track down the evidence. Being an ex-scientist, I always like to go back to the source, and find a paper, or failing that (because of paywalls), a trusted source that summarises the literature.

Talk about being a party pooper, but I cannot find any credible evidence for Harvey's claim.

I think the error in Harvey’s thinking is to confuse the equilibrium capacity of the soils with their ability to take up more, every year, for decades.

I think it is also an inability to deal with numbers. If you need to multiply A, B and C together, but take the highest possible value for each of them, you can easily reach a result which is hugely in error. Overestimate the realistic land area that can be addressed; and the carbon dioxide sequestration rate; and the time until saturation/equilibrium is reached … and it is quite easy to overestimate the product of these by a factor of 100 or more.

Savory is suggesting that over a period of 3 or 4 decades you can draw down the whole of the anthropogenic amount that has accumulated (nearly 2000 gigatonnes of carbon dioxide), whereas a realistic assessment (e.g. www.drawdown.org) suggests that a figure of 14 gigatonnes of carbon dioxide (more than 100 times less) is possible in the 2020-2050 timeframe.
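To see how quickly compounding optimism inflates a result, here is a small sketch. The numbers are hypothetical, chosen by me purely to illustrate the arithmetic; they are not taken from Harvey, Savory or Drawdown.

    # Hypothetical numbers, for illustration of the arithmetic only.
    # Sequestration potential ~ land area x uptake rate per hectare x years before saturation.

    def potential_gt_co2(area_billion_ha, tonnes_co2_per_ha_per_year, years):
        # area in billions of hectares; result in gigatonnes of CO2
        return area_billion_ha * 1e9 * tonnes_co2_per_ha_per_year * years / 1e9

    cautious   = potential_gt_co2(1.0, 1.0, 15)    # ~15 Gt CO2
    optimistic = potential_gt_co2(3.5, 10.0, 50)   # ~1750 Gt CO2

    print(optimistic / cautious)  # ~117: taking the top of every range inflates the answer ~100-fold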

There are many complex processes at work in the whole carbon cycle – the biological, chemical and geological processes covering every kind of cycle, with flows of carbon into and out of the carbon sinks. Despite this complexity, and despite the large flows of carbon (as we saw in the Lawson case), atmospheric levels had remained stable for a long time in the pre-industrial era (at 280 parts per million).  The Earth system as a whole was in equilibrium.

The deep oceans hold by far the greatest carbon reservoir, so a 'plausibility argument' could go along the lines of: the upper ocean will absorb the extra CO2 and then pass it to the deep ocean. Problem solved! But this hope was dashed by Revelle and others in the 1950s, when it was shown that the upper-to-lower ocean exchange is really quite slow.

I always come back to the Keeling Curve, which reveals an inexorable rise in CO2 concentrations in the atmosphere since 1958 (and we can extend the curve further back using ice core data). And the additional CO2 humans started to put into the atmosphere from the start of the industrial revolution (mid-19th century, let us say) was not, as far as I can see, magically soaked up by soils in the days before industrial farming took hold in the mid-20th century, when presumably traditional farming methods still pertained.

FCRN explored Savory's methods and claims, and found that despite decades of trying, he has not demonstrated that his methods work. Savory's case is very weak, and he ends up (in his exchanges with FCRN) almost discounting science, saying his methods are not susceptible to scientific investigation. A nice cop-out there.

In an attempt to find some science to back himself up, Savory referenced Gattinger, but that doesn't hold up either. Track down Gattinger et al.'s work and it reveals that soil organic carbon could (on average, with a large spread) capture 0.4 GtC per year (nowhere near annual anthropogenic emissions of 10 GtC), and if it cannot even keep up with annual emissions, it can hardly soak up the many decades of historical emissions (the roughly 50% of these that persists for a very long time in the atmosphere).

It is interesting what we see here.

An example of 'incredulity' from Lawson, who gets gross carbon flows mixed up with the net carbon flow, and an example of 'credulity' from Harvey, who puts too much stock in the equilibrium capacity of carbon in the soil and assumes this means soils can keep soaking up carbon almost without limit. Both seem to struggle with basic arithmetic.

Incredulity is a good initial response to startling claims, but it should be the starting point for engaging one's intellectual curiosity, not a perpetual excuse for confirming one's bias; a kind of obdurate ignorance.

And neither should hopes invested in the future be a reason for credulous acceptance of claims, however plausible they seem at face value.

It’s boring I know – not letting either one’s hopes or prejudices hold sway – but maths, logic and scientific evidence are the true friends here.

Maths is a great leveller.

 

(c) Richard W. Erskine, 2017

8 Comments

Filed under Climate Science, Essay, Uncategorized

The Zeitgeist of the Coder

When I go to see a film with my wife, we always stick around for the credits, and the list has got longer and longer over the years … Director, Producer, Cinematographer, Stuntman, Grips, Special Effects … and we’ve only just started. Five minutes later and we are still watching the credits! There is something admirable about this respect for the different contributions made to the end product. The degree of differentiation of competence in a film’s credits is something that few other projects can match.

Now imagine the film reel for a typical IT project … Project Manager, Business Analyst, Systems Architect, Coder, Tester and we’re almost done, get your coat. Here, there is the opposite extreme; a complete failure to identify, recognise and document the different competencies that surely must exist in something as complex as a software project. Why is this?

For many, the key role on this very short credits list is the ‘coder’. There is this zeitgeist of the coders – a modern day priesthood – that conflates their role with every other conceivable role that could or should exist on the roll of honour.

A good analogy for this would be the small-scale general builder. They imagine they can perform any skill: they can fit a waterproof membrane on a flat roof; they can repair the leadwork around the chimney; they can mend the lime mortar on that Cotswold stone property. Of course, each of these requires deep knowledge and experience of the materials, tools and methods needed to plan and execute them properly. A generalist will overestimate their abilities and underestimate the difficulties, and so they will always make mistakes.

The all-purpose 'coder' is no different, but has become the touchstone for our digital renaissance. 'Coding' is the skill that trumps all others in the minds of the commentariat.

Politicians, always keen to jump on the next bandwagon, have for some years now been falling over themselves to extol the virtues of coding as a skill that should be promoted in schools, in order to advance the economy.  Everyone talks about it, imagining it offers a kind of holy grail for growing the digital economy.  But can it be true? Is coding really the path to wealth and glory, for our children and our economy?

Forgetting for a moment that coding is just one of the skills required on a longer list of credits, why do we all need to become one?

Not everyone is an automotive engineer, even though cars are ubiquitous, so why would driving a car mean we all have to be able to design and build one? Surely only a few of us need that skill. In fact, whilst cars – in the days when we called them old bangers – did require a lot of roadside fixing, they are now so good we are discouraged from tinkering with them at all.  We the consumers have become de-skilled, while the cars have become super-skilled.

But apparently, every kid now needs to be able to code, because we all use Apps. Of course, it’s nonsense, for much the same reasons it is nonsense that all car drivers need to be automotive engineers. And as we decarbonise our economy Electric Vehicles will take over, placing many of the automotive skills in the dustbin. Battery engineers anyone?

So why is this even worth discussing in the context of the knowledge economy? We do need to understand whether coding has any role in the management of our information and knowledge, and if not, what skills we do require. We need to know how many engineers are required, and crucially, what type of engineers.

But let's stick with 'coding' for a little while longer. I would like to take you back to the very birth of computing, to deconstruct the word 'coding' and place it into context. The word originates from the time when programming a computer meant knowing the very basic operations expressed as 'machine code' – move a byte to this memory location, add these two bytes, shift everything left by 2 bytes – which was completely indecipherable to the uninitiated. It also had a serious drawback: a program would have to be re-written to run on another machine, with its own particular machine code. Since computers were evolving fast, and software needed to be migrated from old to new machines, this was clearly problematic.

Grace Hopper came up with the idea of a compiler in 1952, quite early in the development of computers. Programs would then be written in a machine-agnostic 'high-level language' (designed to be readable, almost like a natural language, but with a simple syntax to allow logic to be expressed … If (A = B) Then [do-this] Else [do-that]). A compiler on a machine would take a program written in a high-level language and 'compile' it into the machine code that could run on that machine. The same program could thereby run on all machines.

In place of 'coders' writing programs in machine code, there were now 'programmers' doing so in high-level languages such as COBOL or FORTRAN (both of which were invented in the 1950s), and later ones as they evolved.
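To make the distinction concrete, here is a rough illustration. The high-level conditional mirrors the If/Then/Else example above (written here in Python simply because it is easy to run); the comments sketch the kind of machine-level steps a compiler might generate, and are schematic only, not any real processor's instruction set.

    def compare_and_act(a, b):
        # Machine-code view (schematic, not a real instruction set):
        #   LOAD  R1, [address_of_a]    ; fetch a into a register
        #   LOAD  R2, [address_of_b]    ; fetch b into a register
        #   CMP   R1, R2                ; compare the two values
        #   JNE   else_branch           ; jump if they are not equal
        if a == b:        # If (A = B) Then [do-this]
            return "do-this"
        else:             # Else [do-that]
            return "do-that"

    print(compare_and_act(2, 2))  # -> do-this
    print(compare_and_act(2, 3))  # -> do-that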

So why people still talk about ‘coders’ rather than ‘programmers’ is a mystery to me. Were it just an annoying misnomer, one could perhaps ignore it as an irritant, but it reveals a deeper and more serious misunderstanding.

Coding … I mean Programming … is not enough, in so many ways.  When the politician pictures a youthful ‘coder’ in their bedroom, they imagine the next billionaire creating an App that will revolutionize another area of our lives, like Amazon and Uber have done.

But it is by no means clear that programming, as currently understood, is the right skill for the knowledge economy. As Gottfried Sehringer wrote in an article "Should we really try to teach everyone to code?" in WIRED, even within the narrow context of building Apps:

“In order to empower everyone to build apps, we need to focus on bringing greater abstraction and automation to the app development process. We need to remove code — and all its complexity — from the equation.”

In other words, just as Grace Hopper saw the need to move from Coding to Programming, we need to move from Programming to something else. Let's call it Composing: a visually interactive way to construct Apps with minimal need to write lines of text to express logical operations. Of course, just as Hopper faced resistance from the Coders, who poured scorn on the dumbing down of their art, the same will happen with the Programmers, who will claim it cannot be done.

But the world of digital is much greater than the creation of ‘Apps’. The vast majority of the time spent doing IT in this world is in implementing pre-built commercial packages.  If one is implementing them as intended, then they are configured using quite simple configuration tools that aim to eliminate the need to do any programming at all. Ok, so someone in SAP or Oracle or elsewhere had to program the applications in the software package, but they are a relatively small population of technical staff when compared to the numbers who go out to implement these solutions in the field.

Of course it can all go wrong, and often does. I am thinking of a bank that was in trouble because their creaking old core banking system – written in COBOL decades ago by programmers in the bank – was no longer fit for purpose. Every time changes were made to financial legislation, such as tax, the system needed tweaking. But it was now a mess, and when one bug was fixed, another took its place.

So the company decided to implement an off-the-shelf package, which would do everything they needed, and more. The promise was the ability to become a really 'agile' bank. They would be able to introduce new products to market rapidly in response to market needs or to respond to new legislation. It would take just a few weeks, rather than the 6 months it was currently taking. All they needed to do was some configuration of the package so that it would work just as they needed it to.

The big bosses approved the big budget then left everyone to it. They kept on being told everything was going well, and they so much wanted to believe this that they failed to ask the right questions of the team. Well, guess what: it was a complete disaster. After 18 months, with everything running over time and over budget, what emerged? The departmental managers had insisted on keeping all the functionality from their beloved but creaking old system; the big consultancy was being paid for man-hours of programming, so did not seem to mind that the off-shored programmers were having to stretch and bend the new package out of shape to make it look like the old system. And the internal project management was so weak that they were unable to call out the issues, even if they had fully understood them.

Instead of mere configuration, the implementation had large chunks of custom programming bolted onto the package, making it just as unstable and difficult to maintain as the old system. Worse still, given the way it had been implemented, it became very difficult to upgrade the package to the latest version (and so derive the benefit of new features). There was now a large support bill just to keep the new behemoth alive.

In a sense, nothing had changed.

Far from ‘coding’ being the great advance for our economy, it is often, as in this sorry tale, a great drag on it, because this is how many large system implementations fail.

Schools, colleges and universities train everyone to 'code', so what will those students do when they are out in the field? Like the man with a hammer, they will treat every problem as a nail, even when a precision milling machine is the right tool for the job.

Shouldn’t the student be taught how to reframe their thinking to use different skills that are appropriate to the task in hand? Today we have too many Coders and not enough Composers, and its seems everyone is to blame, because we are all seduced by this zeitgeist of the ‘coder’.

When we consider the actual skills needed to implement, say, a large, data-oriented software package – like that banking package – one finds that activities needed are, for example: Requirements Analysis, Data Modelling, Project Management, Testing, Training, and yes of course, Composing.  Programming should be restricted to those areas such as data interfaces to other systems, where it must be quarantined, so as not to undermine the upgradeability of the software package that has been deployed.

So what are the skills required to define and deploy information management solutions, which are document-oriented, aimed at capturing, preserving and reusing the knowledge within an organization?

Let the credits roll: Project Manager; Information Strategist; Business Analyst; Process Architect; Information Architect; Taxonomist; Meta-Data Manager; Records Manager; Archivist; Document Management Expert; Document Designer; Data Visualizer; Package Configurer; Website Composer; … and not a Coder, or even a Programmer, in sight.

The vision of everyone becoming coders is not only the wrong answer to the question; it's also the wrong question. The diversity of backgrounds needed to build a knowledge economy is very great. It is a world beyond 'coding' which is richer and more interesting, open to those with backgrounds in software of course, but also in science and the humanities. We need linguists as much as we need engineers; philosophers as much as we need data analysts; lawyers as much as we need graphic artists.

To build a 'knowledge economy' worthy of the name, we need to recognise and explore a much richer range of competencies, able to address all the issues we will face, than the narrow set by which information professionals are defined today.

(C) Richard W. Erskine, 2017

——

Note:

In this essay I am referring to business and institutional applications of information management. Of course there will be areas such as scientific research or military systems which will always require heavy-duty, specialist software engineering; but this is another world when compared to the vast need in institutions for repeatable solutions to common problems, where other skills are, I argue, much more relevant and important to success.

1 Comment

Filed under Essay, Information Management, Software Development

Beyond Average: Why we should worry about a 1 degree C rise in average global temperature

When I go to the Netherlands I feel small next to men from that country; but then I am 3 inches shorter than the average Brit, and the average Dutchman is 2 inches taller than the average Brit. So I am seeing 5 inches of height difference in the crowd around me when surrounded by Dutch men. No wonder I am feeling an effect that is much greater than what the average difference in height seems to be telling me on paper.

Averages are important. They help us determine if there is a real effect overall. Yes, men from the Netherlands are taller than men from Britain, and so my impressions are not merely anecdotal. They are real, and backed up by data.

If we are wanting to know if there are changes occurring, averages help too, as they ensure we are not focusing on outliers, but on a statistically significant trend. That’s not to say that it is always easy to handle the data correctly or to separate different factors, but once this hard work is done, the science and statistics together can lead us to knowing important things, with confidence.

For example, we know that smoking causes lung cancer and that adding carbon dioxide into the atmosphere leads to increased global warming.

But, you might say, correlation doesn’t prove causation! Stated boldly like that, no it doesn’t. Work is required to establish the link.

Interestingly, we have known the fundamental physics of why carbon dioxide (CO2) is a causative agent for warming our atmosphere – not merely correlated – since as early as Tyndall's experiments, which he started in 1859, and certainly no later than 1967, when Manabe & Wetherald's seminal paper resolved some residual physics questions related to possible saturation of the infra-red absorption in the atmosphere and the related effect of water vapour. That's almost 110 years of probing, questioning and checking. Not exactly a tendency on the part of scientists to rush to judgment! And in terms of the correlation being actually observed in our atmosphere, it was Guy Callendar in 1938 who first published a paper showing rising surface temperature linked to rising levels of CO2.

Whereas, in the case of lung cancer and cigarettes correlation came first, not fundamental science. It required innovations in statistical methods to prove that it was not merely correlation but was indeed causation, even while the fundamental biological mechanisms were barely understood.

In any case, the science and statistics are always mutually supportive.

Average Global Warming

In the discussions on global warming, I have been struck, over the few years that I have been engaging with the subject, by how much air time is given to the rise in atmospheric temperature, averaged over the whole of the Earth's surface, or GMST as the experts call it (Global Mean Surface Temperature). While it is a crucial measure, this can seem a very arcane discussion to the person in the street.

So far, it has risen by about 1 degree Centigrade (1°C) compared to the middle of the 19th Century.

There are regular Twitter storms and blogs 'debating' a specific year, and last year's El Niño caused a huge debate as to what this meant. As it turns out, the majority of the recent record warmth is due to man-made global warming, which turbo-charged an already strong El Niño event.

Anyone daring to take a look at the blogosphere or twitter will find climate scientists arguing with opinion formers ill equipped to ‘debate’ the science of climate change, or indeed, the science of anything.

What is the person in the street supposed to make of it? They probably think "this is not helping me – it is not answering the questions puzzling me – I can do without the aggro, thanks very much".

To be fair, many scientists do spend a lot of time on outreach and in other kinds of science communications, and that is to be applauded. A personal favourite of mine is Katharine Hayhoe, who always brings an openness and sense of humility to her frequent science communications and discussions, but you sense also, a determined and focused strategy to back it up.

However, I often feel that the science ‘debate’ generally gets sucked into overly technical details, while basic, or one might say, simple questions remain unexplored, or perhaps assumed to be so obvious they don’t warrant discussion.

The poor person in the street might like to ask (but dare not for fear of being mocked or being overwhelmed with data), simply:

"Why should we worry about an average rise of 1°C temperature, it doesn't seem that much, and with all the ups and downs in the temperature curve; the El Niño; the alleged pause; the 93% of extra heat going into the ocean I heard about … well, how can I really be sure that the surface of the Earth is getting warmer?"

There is a lot to unpick here and I think the whole question of ‘averages’ is part of the key to approaching why we should worry.

Unequivocally Warming World

Climate Scientists will often show graphs which include the observed and predicted annual temperature (GMST) over a period of 100 years or more.

Now, I ask, why do they do that?

Surely we have been told that in order to discern a climate change trend, it is crucial to look at the temperature averaged over a period of at least 10 years, and actually much better to look at a 30-year average?

In this way we smooth out all the ups and downs that are a result of the energy exchanges that occur between the moving parts of the earth system, and the events such as volcanic eruptions or humans pumping less sulphur into the atmosphere from industry. We are interested in the overall trend, so we can see the climate change signal amongst the ‘noise’.

We also emphasise to people – for example, "the Senator with a snowball" – that climate change is about averages and trends, as distinct from weather (which is about the here and now).

So this is why the curve I use – when asked “What is the evidence that the world is warming?” – is a 30-year smoothed curve (red line) such as the one shown below (which used the GISS tool):

30 yr rolling average of GMST

[also see the Met Office explainer on global surface temperature]

The red line shows inexorable warming from early in the 20th Century, no ifs, no buts.

End of argument.

When I challenged a climate scientist on Twitter, asking why we don't just show this graph and avoid getting pulled into silly arguments with a Daily Mail journalist or whoever, I was told that annual changes are interesting and need to be understood.

Well sure, for climate scientists everything is interesting! They should absolutely try to answer the detailed questions, such as the contribution global warming made to the 2016 GMST. But to conflate that with the simpler and broader question does rather obscure the fundamental message for the curious but confused public who have not even reached base camp.

They may well conclude there is a ‘debate’ about global warming when there is none to be had.

There is debate amongst scientists about many things: regional impact and attribution; different feedback mechanisms and when they might kick in; models of the Antarctic ice sheet; etc. But not about rising GMST, because that is settled science, and given Tyndall et al, it would be incredible if it were not so; Nobel Prize winning incredible!

If one needs a double knock-out, then how about a triple or quadruple knock-out?

When we add the graphs showing sea level rise, loss of glaciers, mass loss from Greenland and Antarctica, and upper ocean temperature, we have multiple trend lines all pointing in one direction: A warming world. It ain’t rocket science.

We know the world has warmed – it is unequivocal.

Now if the proverbial drunk, duly floored, still decides to get up and wants to rerun the fight, maybe we should choose not to play his games!?

So why do arguments about annual variability get so frequently aired on the blogosphere and twitter?

I don’t know, but I feel it is a massive own goal for science communication.

Surely the choice of audience needs to be the poor dazed and confused ‘person in the street’, not the obdurately ignorant opinion columnists (opinion being the operative word).

Why worry about a 1°C rise?

I want to address the question "Why worry about a 1°C rise (in global mean surface temperature)?", and do so with the help of a dialogue. It is not a transcript, but along the lines of conversations I have had in the last year. In this dialogue, I am the ClimateCoach and I am in conversation with a Neighbour who is curious about climate change, but admits to being rather overwhelmed by it; they have got as far as reading the material above and accept that the world is warming.

Neighbour: Ok, so the world is warming, but I still don't get why we should worry about a measly 1°C warming?

ClimateCoach: That's an average, over the whole world, and there are big variations hidden in there. Firstly, two thirds of the surface of the planet is ocean, and so over land we are already talking about a mean surface temperature rise in excess of 1°C, about 1.5°C. That's the first unwelcome news, the first kicker.

Neighbour: So, even if it is 5°C somewhere, I still don't get it. Living in England I'd quite like a few more Mediterranean summers!

ClimateCoach: Ok, so let’s break this down (and I may just need to use some pictures).  Firstly we have an increase in the mean, globally. But due to meteorological patterns there will be variations in temperature and also changes in precipitation patterns around the world, such as droughts in California and increased Monsoon rain in India. This  regionality of the warming is the second kicker.

Here is an illustration of how the temperature increase looks regionally across the world.

GISTEMP global regional

Neighbour: Isn’t more rain good for Indian farmers?

ClimateCoach: Well, that depends on timing. The monsoon has started to arrive late, and if it doesn't arrive in time for certain crops, that has serious impacts. So the date or timing of impacts is the third kicker.

Here is an illustration.


Neighbour: I noticed earlier that the Arctic is warming the most. Is that a threat to us?

ClimateCoach: Depends what you mean by 'us'. There is proportionally much greater warming in the Arctic, due to a long-predicted effect called 'polar amplification'; in places as much as 10°C of warming, as shown in this map of the Arctic. But what happens in the Arctic doesn't stay in the Arctic.

Arctic extremes

Neighbour: I appreciate that a warming Arctic is bad for ecosystems in the Arctic – Polar Bears and so on – but why will that affect us?

ClimateCoach: You’ve heard about the jet stream on the weather reports, I am sure [strictly, the arctic polar jet stream]. Well, as the Arctic is warmed differentially compared to latitudes below the Arctic, this causes the jet stream to become more wiggly than before, which can be very disruptive. This can create, for example, fixed highs over Europe, and very hot summers.

Neighbour: But we’ve had very hot summers before, why would this be different?

ClimateCoach: It's not about something qualitatively different (yet), but it is quantitatively different. Very hot summers in Europe are now much more likely due to global warming, and that has real impacts: an estimated 70,000 people died in Europe during the 2003 heatwave. Let me show you an illustrative graph. Here is a simple distribution curve; it indicates a temperature at and above which (blue arrow) high impacts are expected, but have a low chance. Suppose this represents the situation in 1850.

Normal distribution

Neighbour: Ok, so I understand the illustration … and?

ClimateCoach: So, look at what happens when we increase the average by just a little bit, say by 1°C, to represent where we are today. The whole curve shifts right. The 'onset of high impact' temperature is fixed, but the area under the curve to the right of this has increased (the red area has increased), meaning a greater chance than before. This is the fourth kicker. (A short numerical sketch of this shift is given just after the dialogue below.)

In our real-world example, a region like Europe, the chance of high-impact hot summers has increased, within only 10 to 15 years, from being a 1-in-50-year event to being a 1-in-5-year event; a truly remarkable increase in risk.

Shifted Mean and extremes

Neighbour: It’s like loading the dice!

ClimateCoach: Exactly. We (humans) are loading the dice. As we add more CO2 to the atmosphere, we load the dice even more. 

Neighbour: Even so, we have learned to cope with very hot summers, haven’t we? If not, we can adapt, surely?

ClimateCoach: To an extent yes, and we’ll have to get better at it in the future. But consider plants and animals, or people who are vulnerable or have to work outside, like the millions of those from the Indian sub-continent who work in construction in the Middle East.  It doesn’t take much (average) warming to make it impossible (for increasingly long periods) to work outside without heat exhaustion. And take plants. A recent paper in Nature Communications showed that crop yields in the USA would be very vulnerable to excessive heat.

Neighbour: Can't the farmers adapt by having advanced irrigation systems? And didn't I read somewhere that extra CO2 acts like a fertiliser for plants?

ClimateCoach: To a point, but what that research paper showed was that the warming effect wins out, especially as the period of excessive heat increases, and by the way the fertilisation effect has been overstated. The extended duration of the warming will overwhelm these and other ameliorating factors. This is the fifth kicker.

This can mean crop failures and hence impacts on prices of basic food commodities, even shortages as impacts increase over time.

Neighbour: And what if we get to 2°C? (meaning a 2°C GMST rise above pre-industrial)

ClimateCoach: Changes are not linear. Take the analogy of car speed and pedestrian fatalities. Above 20 miles per hour the curve rises sharply, because the car's energy is a function of the square of its speed, but also because of vulnerability thresholds in the human frame. Global warming will cross thresholds for both natural and human systems, which have been in balance for a long time, so extremes get increasingly disruptive. Take an impact on a natural species or habitat: one very bad year, and there may be recovery over the following 5-10 years, which is OK if the frequency of very bad years is 1 in 25-50 years. But suppose very bad years come once every 5 years? That would mean no time to recover. Nature is awash with non-linearities and thresholds like this.

Neighbour: Is that what is happening with the Great Barrier Reef – I heard something fleetingly on BBC Newsnight the other night?

ClimateCoach: I think that could be a very good example of what I mean. We should talk again soon. Bring friends. If they want some background, you might ask them to have a read of my piece Demystifying Global Warming & Its Implications, which is along the lines of a talk I give.
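As promised, here is a minimal numerical sketch of the fourth kicker, the 'loading of the dice'. The threshold and the size of the shift (expressed in units of the spread of summer temperatures) are illustrative values of my own choosing, not figures from any particular study; the point is simply that a modest shift in the average multiplies the chance in the tail many times over.

    import math

    def chance_of_exceeding(threshold, mean, spread):
        # Chance that a normally distributed summer temperature exceeds a fixed threshold
        z = (threshold - mean) / (spread * math.sqrt(2.0))
        return 0.5 * math.erfc(z)

    spread = 1.0                 # spread (standard deviation) of summer temperatures, illustrative
    threshold = 2.05 * spread    # 'onset of high impact' level, which stays fixed

    before = chance_of_exceeding(threshold, mean=0.0, spread=spread)           # ~0.02
    after  = chance_of_exceeding(threshold, mean=1.2 * spread, spread=spread)  # ~0.20

    print(round(1 / before), round(1 / after))  # roughly a 1-in-50-year event becomes 1-in-5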

Putting it together for the person in the street.

I have explored one of many possible conversations I could have had. I am sure it could be improved upon, but I hope it illustrates the approach. We should be engaging those people (the majority of the population) who are curious about climate change but have not involved themselves so far, perhaps because they feel a little intimidated by the subject.

When they do ask for help, the first thing they need to understand is that indeed global warming is real, and is demonstrated by those average measures like GMST, and the other ones mentioned such as sea-level rise, ice sheet mass loss, and ocean temperature; not to mention the literally thousands of indicators from the natural world (as documented in the IPCC 5th Assessment Report).

There are also other long-term and unusual sources of evidence to add to this list, as Dr Ed Hawkins has discussed, such as the date at which cherry blossom flowers in Kyoto, which is trending earlier and earlier. Actually, examples such as these are in many ways easier for people to relate to.

Gardeners the world over can relate to evidence of cherry blossom, wine growers to impacts on wine growing regions in France, etc. These diverse and rich examples are in many ways the most powerful for a lay audience.

The numerous lines of evidence are overwhelming.

So averages are crucial, because they demonstrate a long-term trend.

When we do raise GMST, we should make sure we show the right curve. If it is to show unequivocal global warming at the surface, then why not show one that reflects the average over a rolling 30-year period; the 'smoothed' curve. This avoids getting into debates with 'contrarians' on the minutiae of annual variations, which can come across as both abstract and arcane, and puts people off.

This answers the first question people will be asking, simply: “Is the world warming?”. The short answer is “Unequivocally, yes it is”. And that is what the IPCC 5th Assessment Report concluded.

But averages are not the whole story.

There is the second but equally important question "Why worry about a 1°C rise (in global mean surface temperature)?"

I suspect many people are too coy to ask such a simple question. I think it deserves an answer and the dialogue above tried to provide one.

Here and now, people and ecosystems experience weather, not climate change, and when it is an extreme event, the impacts are viscerally real in time and place, and are far from being apparently arcane debating points.

So while a GMST rise of 1°C sounds like nothing to the untutored reader, when translated into extreme weather events it can be highly significant. The small change in the average is magnified into a significant effect, as evidenced by the increasing chance of extreme events of different kinds, in different localities, which can increasingly be attributed to man-made global warming.

The kickers highlighted in the dialogue were:

  • Firstly, people live on land so experience a higher ‘GMST’ rise (this is not to discount the impacts on oceans);
  • Secondly, geographical and meteorological patterns mean that there are a wide range of regional variations;
  • Thirdly, the timing (or date) at which an impact is felt is critical for ecosystems and agriculture, and bad timing will magnify the effect greatly;
  • Fourthly, as the average increases, so does the chance of extremes. The dice are getting loaded, and as we increase CO2, we load the dice more.
  • Fifthly, the duration of an extreme event will overwhelm defences, and an extended duration can cross dangerous thresholds, moving from increasing harm into fatal impacts, such as crop failure.

I have put together a graphic to try to illustrate this sequence of kickers:

[Graphic: the sequence of kickers]

As noted on this graphic (which I used in some climate literacy workshops I ran recently), the same logic used for GMST can be applied to other seemingly ‘small’ changes in global averages such as rainfall, sea-level rise, ocean temperature and ocean acidification. To highlight just two of these other examples:

  • an average global sea-level rise translates into impacts such as extreme storm surges, damaging low-lying cities such as New York and Miami (as recently reported and discussed).
  • an average ocean temperature rise, translates into damage to coral reefs (two successive years of extreme events have caused serious damage to two thirds of the Great Barrier Reef, as a recent study has confirmed).

Even in the relatively benign context of the UK's temperate climate, the Royal Horticultural Society (RHS), in a report just released, is advising gardeners on climate change impacts and adaptation. The instinctively conservative 'middle England' may yet wake up to the realities of climate change when it comes home to roost, and bodies such as the RHS remind them of the reasons why.

The impacts of man-made global warming are already with us, and they will only get worse.

How much worse depends on all of us.

Not such a stupid question

There was a very interesting event hosted by CSaP (Centre for Science and Policy) in Cambridge recently. It introduced some new work being done to bring together climate science and 'big data analytics'. Dr Emily Shuckburgh's talk looked precisely at the challenge of understanding local risks; the report of the talk included the following observation:

“Climate models can predict the impacts of climate change on global systems but they are not suitable for local systems. The data may have systematic biases and different models produce slightly different projections which sometimes differ from observed data. A significant element of uncertainty with these predictions is that they are based on our future reduction of emissions; the extent to which is yet unknown.

To better understand present and future climate risks we need to account for high impact but low probability events. Using more risk-based approaches which look at extremes and changes in certain climate thresholds may tell us how climate change will affect whole systems rather than individual climate variables and therefore, aid in decision making. Example studies using these methods have looked at the need for air conditioning in Cairo to cope with summer heatwaves and the subsequent impact on the Egyptian power network.”

This seems to be breaking new ground.

So maybe the proverbial 'person in the street' is right to ask stupid questions, because they turn out not to be so stupid after all.

Changing the Conversation

I assume that the person in the street is curious and has lots of questions; and I certainly don’t judge them based on what newspaper they read. That is my experience. We must try to anticipate and answer those questions, and as far as possible, face to face. We must expect simple questions, which aren’t so stupid after all.

We need to change the focus from the so-called ‘deniers’ or ‘contrarians’ – who soak up so much effort and time from hard pressed scientists – and devote more effort to informing the general public, by going back to the basics. By which I mean, not explaining ‘radiative transfer’ and using technical terms like ‘forcing’, ‘anomaly’, or ‘error’, but using plain English to answer those simple questions.

Those embarrassingly stupid questions that will occur to anyone who first encounters the subject of man-made global warming; the ones that don't seem to get asked and so never get answered.

Maybe let’s start by going beyond averages.

No one will think you small for doing so, not even a Dutchman.

[updated 15th April]

1 Comment

Filed under Climate Science, Essay, Science Communications

Falstaff prepares for battle in Paris

Christopher Monckton and his merry band of global warming contrarians were in Paris last week, plotting their next skirmish in their never-ending war against the science of global warming.

Their meeting to discuss their 'messaging' for COP21 has been documented by a journalist from Open Democracy, and gives a remarkable exposé of their rambling thought processes.

I have a vision of Falstaff – a tragic, comic and hopelessly flawed figure – and his crew of weary old soldiers preparing for a new battle. For Shakespeare's audiences, these scenes provide some light relief from the more serious plots afoot in his great plays. The same was true here, except that on this occasion no one was laughing.

In the main play at COP21 there are serious actors at work: mayors of cities planning to decarbonise; managers of huge investment funds now actively forcing businesses to accept fiduciary responsibility; entrepreneurs promoting zero carbon innovations in energy, transport and elsewhere; climate action networks working with citizen groups; and many more. They are not debating whether or not we have a problem – all informed people know we do – they are instead working hard on solutions. Whatever happens with the final text of COP21, the transition is underway. It cannot be stopped.

The contrarians are bound together by a suspicion, and in some cases hatred, of environmentalism, the UN and ‘big’ Government. They have no interest in exploring scientific truth, only in finding ways to create confusion in the climate debate, for the sole purpose of delaying action. So their strategy has been to challenge science in ways that are thoroughly disingenuous.

For example, over many years these people have said that you cannot reliably measure the average global surface temperature of the Earth, or have claimed it is in error because of the urban heat island effect or whatever (all untrue, but they keep repeating it). So guess what happens when it appears that the warming has slowed or 'paused'? They then switch tack and say "look, it's stopped warming", now feigning a belief in the very science of global temperature measurement they were lambasting before.

I call that disingenuous.

This is a game that some people have called 'whack-a-mole', because the contrarians pop up in one place and no sooner have you whacked them there than they pop up in another place. Having no shame, they are happy to pop up in the prior places where they have been thoroughly 'whacked', hoping no one will remember. This is whack-a-mole meets Groundhog Day.

It is not merely a case of getting tangled in knots over the science. Even before we get to the science part, the contrarians deploy a myriad of debating techniques and logical fallacies. One of the favourite fallacies deployed by contrarians is what I call ‘Argument from Incredulity’.

Now, I do not blame anyone for being incredulous about the universe. I would say it is quite normal, on hearing it for the first time, to be incredulous that we are in a galaxy with a few hundred billion stars and in a universe with over 100 billion galaxies. Incredulity is often a good starting point for enquiry and discovery. But it should never be an excuse for persistent ignorance.

As a child, I was surprised when I learned that even a 1°C temperature rise meant a fever, and that a few degrees could be fatal. It is indeed a wonder how a complex system, like the human body, works to create such a fine equilibrium, and that when the system goes even slightly out of equilibrium, it spells trouble.

The Earth’s system has also been in equilibrium. It too, can get a fever with apparently small changes that can knock it out of equilibrium.

No. 7 of the talking points in Monckton's rather long list is his observation that CO2 is less than a tenth of 1% of the Earth's atmosphere (currently it is 0.04%, or 400 parts per million [ppm]). True, but so what?

If I look through clear air along a long tube I see visible light from a torch at the other end undiminished, but if I then add a small amount of smoke there will be significant dimming out of all proportion with the relative concentration of the smoke. Why? Because if you add a small effect to a situation where there is little or no effect, the change is large.

The same is true when considering infra-red (which is invisible to the human eye but is emitted from the ground when it has been warmed by sunlight). Since 99% of the Earth’s atmosphere is transparent to this infra-red, the ‘small’ amount of CO2 (which does absorb infra-red) is very significant in relative terms. Why? Again, because if you add a small effect to a situation where there is little or no effect, the change is large.

Contrarians like to express the rise as 0.03% to 0.04% to suggest that it is small and insignificant.

Actually, a better way to express the change is that it is equivalent to more than a 40% increase in CO2 concentrations above pre-industrial levels (see Note).

The current 400 ppm is rising at a rate of over 2 ppm per year. Essentially all of this increase is due to human activity, chiefly the combustion of fossil fuels. That is not small, it is huge, and it is happening at an unprecedented rate (over a period of 150 years, not the tens of thousands of years of the ice age cycles).
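For anyone who wants to check that percentage, the arithmetic needs only the two concentrations already quoted in this piece (the stable pre-industrial level and today's level); a one-liner in Python:

    pre_industrial_ppm = 280   # the stable level before the industrial revolution (see Note below)
    current_ppm = 400          # the approximate level at the time of writing

    increase = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm
    print(f"{increase:.0%}")   # about a 43% increase above pre-industrial levels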

But here is the most amazing conclusion to the Monckton meeting. In trying to rehearse the arguments they should use when ‘messaging’ on the topic of the greenhouse effect:

“We accept that there is such a thing as the greenhouse effect …
yes, if you add CO2 to the atmosphere, it would cause some warming – there are some on the fringes who would deny that, but it’s tactically efficacious for us to accept that.”

Efficacious to say something you don’t believe! I don’t call that denial, I call it deceitful.

The old soldiers were naturally up in arms. Being sold out at this stage, would be a bitter pill to swallow. As the reporter noted:

Monckton suggested that they should accept that the greenhouse effect is real. There was a fair amount of disagreement in the room. The chair said “I’m trying to appeal to left wing journalists”. For a moment they lost control as a number of people shouted out their various objections. The conclusion?: “The Greenhouse Effect – the debate continues”.

Enough of dissembling contrarians, I say.

At this point the comic interlude must come to a close. Time to get back to some serious debate.

[Falstaff exits, stage Right]

[The action moves back to the main stage]

COP21 continues without interruption, despite noises off.

(c) Richard Erskine, 2015

NOTE

In fact, without the CO2 in the Earth's atmosphere, the Earth's average surface temperature would be roughly the same as the Moon's (being the same distance from the Sun): about 30°C cooler (-15°C rather than +15°C, on average). So adding even a small amount of CO2 to an atmosphere of oxygen, nitrogen and argon has a huge effect. Something on top of nothing is a big change in percentage terms.

Over the last 4 ice ages, CO2 concentrations have varied between 180 and 300 parts per million. So less than a halving or doubling of CO2 concentrations in the atmosphere moved the Earth from ice age to interglacial and back again. We know that even less than a doubling can have dramatic effects.

Today’s level of 400 ppm has not been seen on Earth for almost 1 million years.

For at least the last thousand years, the level has been stable at 280 ppm, up until the industrial revolution.

The question of a ‘pause’ in surface temperature is debated amongst climate scientists. One thing they do not disagree about: the increased CO2 means there is an energy imbalance that is causing the planet to warm, with over 90% of the heat going into the oceans, mountain glaciers receding apace, etc.

Leave a comment

Filed under Climate Science, Contrarianism, COP21, Essay, Uncategorized

Demystifying Global Warming and Its Implications

This essay is published on my blog EssaysConcerning.com, and is the basis for a talk I give by the same title. It provides a guide to global warming in plain English while not trivialising the subject. It avoids technical terms & jargon (like ‘forcing’) and polarising or emotive language (like ‘denier’ or ‘tree hugger’). My goal was to give those who attend the talk or read this essay a basic foundation on which to continue their own personal exploration of this important subject; it provides a kind of ‘golden thread’ through what I believe are the key points that someone new to the subject needs to grasp. References, Further Reading, Notes and Terminology are included at the end of this essay. Slides from the talk, including some bullet points, are included in the essay to provide summaries for the reader. 

I am Richard Erskine and I have a Doctorate from Cambridge University in Theoretical Chemistry.  In the last 27 years I have worked in strategic applications of information management. Quite recently I have become concerned at the often polarised nature of the discourse on global warming, and this essay is my attempt to provide a clear, accurate and accessible account of the subject. I will leave the reader to judge if I have been successful in this endeavour.

Published July 2015 [Revised March 2016].

Contents

1.   The role of Carbon Dioxide (CO2)
2.   Ice Ages and Milankovitch Cycles
3.   How do we know this history of the Earth?
4.   How do we know there is warming occurring and that it is man-made?
5.   What are the projections for the future?
6.   Can mankind stay within the 2°C goal?
7.   Is international agreement possible?
8.   Planning a path to zero carbon that supports all countries
9.   The transformation to a zero carbon future

This essay is about Global Warming, past, present and future, man-made and natural, and about our human response to the risks it poses. It starts with a historical perspective on the wider subject of climate change (See Further Reading – Spencer Weart, The Discovery of Global Warming).

In the early 19th Century people realised that there had been geological changes due to glaciers, such as large rocks deposited in valleys. By 1837 Louis Agassiz (1807-1873) proposed the concept of ice ages. We now know that there were 4 major ice ages over the past 400,000 years. Between each ice age are periods called inter-glacials. In the deep history of our 4.5 billion year old planet there were other periods of cooling and warming extending back millions of years.

1. The role of Carbon Dioxide (CO2 )

John Tyndall (1820-1893) was a highly respected scientist who loved to holiday in the Alps and wondered what had caused the ice ages. In 1861 he published a paper that was key to our modern understanding (Reference 1).

He showed that carbon dioxide (as we now call it) and water vapour, amongst others, were very effective at trapping the radiative heat (what we call infra-red radiation). Infra-red radiation is emitted from the surface of the Earth when it is heated by visible radiation from the Sun.

The Nitrogen, Oxygen and Argon that together make up 99% of the Earth’s atmosphere are completely transparent to this infra-red radiation. So, while carbon dioxide made up only 0.028% of the atmosphere, with water vapour adding variable levels of humidity, they were thereby recognised 150 years ago as being responsible for trapping the heat that makes the Earth habitable for humans. We call these gases ’greenhouse gases’.

Consequently, the Earth is 30°C warmer than would be the case in the absence of greenhouse gases (on average 15°C, as opposed to -15°C) [see Note 1].

Understanding how so-called ‘greenhouse gases’ absorb infra-red radiation and heat the atmosphere is well established textbook physics, but does get a little technical. Nevertheless, there are plenty of very good resources that are very helpful in explaining this [see Note 2].
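For those who would like one quantitative taste of that textbook physics, here is a standard back-of-envelope estimate (not taken from this essay; the solar input and reflectivity are textbook values) of how warm the surface would be with no greenhouse gases at all. It gives roughly -18°C, in the same ballpark as the figure quoted above.

    # Back-of-envelope 'no greenhouse' temperature from the Stefan-Boltzmann law.
    # Textbook inputs; a sketch, not a climate model.
    stefan_boltzmann = 5.67e-8   # W per m^2 per K^4
    solar_constant = 1361.0      # sunlight arriving at the top of the atmosphere, W per m^2
    albedo = 0.3                 # fraction of sunlight reflected straight back to space

    absorbed = solar_constant * (1 - albedo) / 4           # averaged over the whole sphere
    temperature_k = (absorbed / stefan_boltzmann) ** 0.25   # equilibrium temperature in kelvin

    print(round(temperature_k - 273.15))  # about -18 C, some 30 or more degrees colder than today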

Figure 1 - John Tyndall


2. Ice Ages and Milankovitch Cycles

But this still begged the question: what triggered the ice ages? Our modern understanding of the ice ages is informed by two hundred years of scientific research, and the accumulation of knowledge and insight. Milutin Milankovitch (1879-1958) was a Serbian mathematician and astronomer who calculated the cyclical variations (“Milankovitch Cycles”) in the Earth’s orbit and orientation which impact on the intensity of the Sun’s rays reaching polar and other regions of the Earth. His goal was to explain climatic patterns. It was only in the 1970s that Milankovitch Cycles were widely accepted as playing a key role as triggers for entering and leaving an ice age.

The explanation is as follows. Some change starts the process of cooling that takes us into an ice age. The most probable trigger is the start of one of the periodic variations in the orbit and orientation of the Earth. The timing of these cycles correlates well with the ice ages. The greater seasonality of the northern hemisphere (due to its proportionally greater land mass) was a significant factor in promoting growth of the ice sheets.

While these changes were insufficient to explain the full cooling required, they provided the trigger [see Note 3]. After this initial cooling there would have been more snow and ice sheet growth, with the Earth reflecting more light. Overall the resulting cooler Earth system would have been better at capturing carbon dioxide over these timescales [see Note 4]. Since cooler air is less humid, there would also have been less water vapour in the atmosphere.

Overall, the reduction in greenhouse gases in the atmosphere would have led to further cooling. This self-reinforcing feedback process continued, step by step, leading to a new equilibrium in which the temperature had dropped by a few degrees, the ice sheets had grown towards their peak volume, and sea levels had fallen accordingly [see Feedback in Terminology].

The exit from an ice age is the reverse of this process. There would have been a trigger that brought slight warming, during an alternate phase of a Milankovitch Cycle. Reductions in snow cover and retreating ice sheets meant less light was reflected, leading to another increment of warming.

Then some carbon dioxide would have been released from the oceans, leading to further warming. This slight warming led to increased humidity [see Note 5], which is a positive feedback effect, and this led to additional warming, which in turn led to the release of more CO2 from the oceans, which led to further warming.

This positive feedback process would have led to a progressively warmer planet and eventually a new equilibrium being reached [see Note 6] in an interglacial period such as the one we are living in.

Figure 2 - Milutin Milankovitch


3. How do we know this history of the Earth?

Since the 1950s ice cores (see photo below) have been drilled into the great ice sheets of Greenland and Antarctica, which together hold 99% of the Earth’s ice. The techniques used to analyse these ice cores have advanced so that, since the 1980s and 1990s, we have been able to look back over hundreds of thousands of years with increasing precision, across the timescale of 4 major ice ages. The Vostok ice cores in the late 1990s reached back 420,000 years. The EPICA core, drilled through the thickest part of the Antarctic ice sheet, reaches back 800,000 years. In Greenland, the NEEM ice core reaches back 150,000 years.

Figure 3 - Ice Cores

Scientists have literally counted the successive annual layers of compressed snowfall preserved within the ice sheets. By looking at the bubbles of air and other materials trapped in these ice cores, scientists can determine the concentration of carbon dioxide and other gases over this period.

They can also estimate the global temperature over the same period through an ingenious use of isotopic ratios, first suggested in 1947 (Reference 2) by the chemist Harold Urey (1893-1981). The story of these ice cores has been told very well by Professor Richard Alley [Alley, Further Reading].

Oxygen’s most common isotope is Oxygen-16 (16O), whose nucleus is composed of 8 protons (the defining attribute of the element Oxygen) and 8 neutrons. The next most common stable isotope of oxygen is Oxygen-18 (18O), which is far less abundant than 16O. 18O has 2 extra neutrons in its nucleus, but is chemically identical to 16O.

Water is H2O, and when a molecule of it evaporates from the ocean it needs a little kick of energy to break free from its liquid state. Because the heavier 18O-based water needs a slightly bigger kick, the small percentage of 18O in atmospheric moisture varies with the temperature of the atmosphere, in the way that Urey calculated. So when the moisture in the air is mixed and later gathers as clouds and turns to snow that falls in Greenland and Antarctica, its 18O content leaves an indicator of the average temperature of the atmosphere at that time.
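
For readers who like to see the notation, the isotopic signal is normally expressed as a ‘delta’ value: the deviation, in parts per thousand, of the 18O/16O ratio in a sample from that of a reference standard. This is the standard definition used in the ice core literature rather than something spelled out in the essay:

```latex
\delta^{18}\mathrm{O} \;=\;
  \left(
    \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{sample}}}
         {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{standard}}}
    \;-\; 1
  \right) \times 1000 \quad \text{(parts per thousand)}
```

Broadly speaking, more negative values in polar snow correspond to colder conditions at the time the snow fell, which is how temperature curves like the one in Figure 4 are reconstructed.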

Figure 4 - CO2 and Temperature

Ice core evidence is being gathered and checked by many independent teams from many countries at different locations, and there are other independent lines of evidence to back up the main conclusions.

For example, there are loess deposits and lake sediments that can be analysed using analogous techniques, with isotopes of other elements, to provide indicators of temperature over different periods. Some of these methods can look back in time even further than the ice cores, by examining ancient shells in ocean sediments, for example.

By analysing the ice cores up to 2 miles deep, scientists can look back in time and measure the CO2 concentration and the temperature, side by side, over several ice ages. Above is a presentation of the data from the seminal Petit et al. 1999 paper in Nature (Reference 3), derived from ice cores retrieved from Antarctica. These ice core projects were epic undertakings.

What this shows is a remarkable correlation between carbon dioxide concentrations and temperature. The studies from Greenland in the Northern Hemisphere and Antarctica in the Southern Hemisphere reveal a global correlation.

Because the initial trigger for exiting an ice age would have been a Milankovitch Cycle related to the orbit and orientation of the Earth, the subsequent release of CO2 slightly lagged the change in temperature, but only initially [see Note 7]. As previously described, the increased CO2 concentrations and the subsequent positive feedback generated by water vapour provided the principal drivers for the global warming that took the Earth into an interglacial period.

Within the glacial and interglacial periods changes occurred that reflected intermediate fluctuations of warming and cooling. These fluctuations punctuated the overall trends when entering and leaving an ice age. This was due to multiple effects such as major volcanic eruptions.

For example, the Tambora volcanic eruption of 1815 “released several tens of billions of kilograms of sulphur, lowered the temperature in the Northern Hemisphere by 0.7oC” (Page 63, Reference 4). This led to crop failures on a large scale and a year without a summer that inspired Lord Byron to write a melancholy poem. It was a relatively short-lived episode, because sulphur aerosols (i.e. droplets of sulphuric acid) do not stay long in the upper atmosphere, but it does illustrate the kind of variability that can be overlaid on any long-term trend.

Another major actor in long-term internal variability is the world’s great ocean conveyor belt, of which the Gulf Stream is a part. This brings vast amounts of heat up to the northern Atlantic, making it significantly warmer than would otherwise be the case. There are major implications for the climate when the Gulf Stream is weakened or, in extremis, switched off.

On shorter timescales, the warming El Niño and cooling La Niña events, which occur at different phases in the equatorial Pacific every 2 to 7 years, add a significant level of variability that has global impacts on climate.

These internal variabilities of the Earth system occurring over different timescales ensure there is no simple linear relationship between CO2 and global temperature on a year by year basis. The variations ensure that as heat is added to the Earth system and exchanged between its moving parts, the surface atmospheric response rises on a jagged curve.

Nevertheless, overall CO2 can be clearly identified as the global temperature ‘control knob’, to borrow Professor Richard Alley’s terminology. The CO2 concentration in the atmosphere is the primary indicator of medium to long term global temperature trends, in both the lower atmosphere and the upper ocean.

Over the period of the ice ages, the concentration of CO2 in the atmosphere has varied between about 180 parts per million (ppm) and 300 ppm. So, less than a doubling or halving of CO2 concentrations was enough for major changes to occur in the Earth’s climate over hundreds of thousands of years.

As the ice cores have been studied with greater refinement, it has been realised that in some cases the transitions can be relatively abrupt, occurring within a few decades rather than the thousands of years that geologists had traditionally assumed. This suggests that additional positive feedbacks can come into play to accelerate the warming process.


4. How do we know there is warming occurring and that it is man-made?

We know this from the physics of CO2 in the atmosphere and from the way heat is accumulating in the Earth’s system as concentrations rise (with over 90% of the extra heat currently being deposited in the upper oceans, Reference 5). Satellite and ground measurements confirm the energy imbalance.

Rising temperature in the atmosphere, measured over decadal averages, is therefore inevitable, and that is indeed what is found (Reference 6): the Intergovernmental Panel on Climate Change (IPCC) included published data based on the globally averaged temperature from instruments around the globe (illustrated below). The annual average is very spiky, owing to the short-term variabilities discussed above.

Each year is not guaranteed to be hotter than the previous year, but the average of 10 consecutive years is very likely to be hotter than the previous 10-year average, and the average of 30 consecutive years is almost certain to be hotter than the previous 30-year average. The averaging smooths out those internal variabilities that occasionally obscure the underlying trend.
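
To see why the averaging matters, here is a small illustrative Python sketch. The trend and noise values are invented purely for illustration (they are not real temperature data): a steady warming trend buried in year-to-year variability looks spiky annually, while the successive 10-year averages step upwards far more reliably.

```python
import random

random.seed(1)

# Synthetic 'temperature anomaly': a steady trend of +0.02 C per year plus
# year-to-year noise of a few tenths of a degree (illustrative values only).
years = range(1960, 2020)
annual = [0.02 * (y - 1960) + random.gauss(0, 0.15) for y in years]

# Average over consecutive, non-overlapping decades.
decadal = [sum(annual[i:i + 10]) / 10 for i in range(0, len(annual), 10)]

for i, avg in enumerate(decadal):
    print(f"{1960 + 10 * i}s average anomaly: {avg:+.2f} C")

# Individual years bounce around, but the decadal averages climb far more
# steadily, which is the point being made in the text.
```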

Figure 5 - Rising Temperature

Nine of the ten hottest years in the instrumental record since 1884 have been in the 21st century, with 1998 being the one exception because of a large El Niño (Reference 7). Update: 15 of the 16 hottest years in the instrumental record have now occurred since the year 2000 (Reference 8).

Many people have asked whether or not variations in solar output could be causing the warming, or maybe CO2 from volcanoes, but as discussed below these do not explain the warming.

Below we show a figure from the IPCC of the various contributions to the warming of the Earth system during the period 1970 to 2011 (Box 13.1 Figure 1, Reference 6). The cumulative energy flux into the Earth system resulting from each source is shown as a coloured line: well-mixed and short-lived greenhouse gases; solar; aerosols in the lower atmosphere (tropospheric); and volcanic aerosols (relative to 1860–1879). These are added to give the cumulative energy inflow (black) [see also the animation of the data at Reference 9].

What this shows is that the greenhouse gases, principally man-made CO2, have been the predominant contributor to warming, with changes in solar output having a minimal cooling effect. Volcanic and other aerosols have been significant but their effect was to reduce the net warming.

Excellent summaries of the IPCC findings are available [see References 10 and 11].

As we can see, the Sun’s output has been quite stable, and volcanoes in recent decades have only produced between 0.5% and 1% of the additional CO2 to be accounted for. This is to be contrasted with over 99% of the additional CO2 coming from man-made sources. This assessment is also confirmed by analysing the tell-tale mix of isotopes of carbon in the atmospheric CO2, which shows that most of it must have come from the combustion of fossil fuels, rather than volcanoes.

Volcanoes, through their injection of aerosols (namely, droplets of sulphuric acid) into the atmosphere are actually doing the reverse – creating a cooling effect that is slightly reducing the net global warming.

Figure 6 - Whodunnit?

Since 1958 the concentration of CO2 in the atmosphere has been reliably measured at Mauna Loa in Hawaii, thanks to Charles Keeling (1928-2005). The “Keeling Curve” is a great gift to humanity (Reference 12) because it has provided, and continues to provide, a reliable and continuous measure of the CO2 concentration in our atmosphere. The National Oceanic and Atmospheric Administration (NOAA) in practice now uses data from many global sites.

The rate of that CO2 increase is consistent with, and can only be accounted for by, human activities [see Note 8].

Figure 7 - Keeling Curve

For the last one thousand years leading to the 20th Century, the concentration of CO2 was quite stable at 280 ppm, but since the start of the industrial revolution, it has risen to 400 ppm, with 50% of that rise in the last 50 years. An annual cycle is overlaid on the overall trend [see Note 9]. The Earth has not seen a level of 400 ppm for nearly 1 million years.

The carbon in the Earth system cycles through the atmosphere, biosphere, oceans and other carbon ‘sinks’ as they are called. The flow of ‘carbon’ into the atmosphere is illustrated in the following Figure (Reference 6). Man-made burning of fossil fuels causes a net increase in CO2 in the atmosphere above and beyond the large natural flows of carbon.

To understand this, an analogy used by Professor Mackay is useful (see Further Reading). Imagine an airport able to handle a peak of 100,000 in-coming passengers a day (and a balancing 100,000 out-going passengers). Now add 5,000 passengers diverted from a small airport. The queues will progressively grow because of a lack of additional capacity to process passengers.

Similarly, the CO2 in the atmosphere is growing. Humanity has hitherto been adding about 2 ppm of CO2 to the atmosphere each year, and it is accumulating there [see Note 10]. This is the net flow into the atmosphere (with correspondingly raised levels in the upper ocean, which is in equilibrium with the atmosphere). Once raised to whatever level we peak at, the atmosphere’s raised concentration would take many thousands of years to return to today’s level by natural processes.
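
The airport analogy can be written down as a trivially simple stock-and-flow sketch. The round numbers below (a stock of about 400 ppm and a net addition of about 2 ppm a year) are those used in the text; everything else is purely illustrative.

```python
# A minimal stock-and-flow sketch of CO2 accumulating in the atmosphere.
# Round numbers from the text; this is an illustration, not a carbon-cycle model.

concentration_ppm = 400.0          # approximate level quoted for the mid-2010s
net_addition_ppm_per_year = 2.0    # net annual rise quoted in the text

for year in range(2015, 2055, 10):
    print(f"{year}: ~{concentration_ppm:.0f} ppm")
    concentration_ppm += net_addition_ppm_per_year * 10

# Like the overloaded airport, the inflow exceeds what the natural 'sinks' can
# process, so the stock keeps rising until the net inflow is brought to zero.
```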

Figure 8 - Carbon Cycle

It is significant enough that the Earth has not had concentrations as high as 400 ppm for nearly 1 million years. But today’s situation is unique for an additional critical reason: the rate of increase of CO2 is unprecedented.

The IPCC is conservative in assessing the additional incremental increases in atmospheric CO2 and other greenhouse gases that could arise, on top of human emissions, as a result of ocean and biosphere warming. But we are entering uncharted waters, which is why the current situation is so concerning.


5. What are the projections for the future?

In science and engineering computer models are used to understand the motions of billions of stars in galaxies; the turbulence of air flowing around structures; and many other systems. They are also used to help manage our societies. In our complex world models are used for the operational monitoring and management of all kinds of man-made and natural systems: our electricity networks; pathways of disease transmission; and in many other areas. When used to assess future risks, these models allow ‘what if’ questions to be posed (e.g. in relation to electricity supply, what if everyone puts the kettle on at half time?). This enables us to plan, and take mitigating actions, or adapt to impacts [These arguments are developed in more detail in a separate essay “In Praise of Computer Models”].

Given the high risks we face from global warming, it is essential we do the same here also. This is why so much effort has gone into developing models of the climate and, more broadly, the Earth system (including atmosphere, oceans, biosphere, land, areas of snow and bodies of ice).

These models have evolved since the 1950s and have become increasingly sophisticated and successful. While there is no doubt that the Earth is warming and that this is primarily due to man-made emissions of CO2, the models help society to look into the future and answer questions like ‘what if the level peaks at 500 ppm in 2060?’, for example. The models are a vital tool, and are continuing to evolve (Reference 13).

There are many questions that are not black and white, but are answered in terms of their level of risk. For example ‘what is the risk of a 2003-level heat-wave in Europe?’ is something that models can help answer. Increasingly serious flooding in Texas during May 2015 is the kind of regional effect that climate modellers had already identified as a serious risk.

In general, it is much easier for the general public to understand impacts such as flooding in Texas, than some abstract globally averaged rise in temperature.

Providing these assessments to planners and policy-makers is therefore crucial to inform actions either in supporting reductions in greenhouse gases (mitigation) to reduce risks, or in preparing a response to their impacts (adaptation), or both. It is worth stressing that mitigation is much more cost effective than adaptation.

Svante Arrhenius (1859-1927) was a Swedish chemist who in 1896 published a paper on the effect of varying concentrations of CO2 in the atmosphere (Archer, Further Reading). He calculated what would happen if the concentration of CO2 in the atmosphere was halved. He, like Tyndall, was interested in the ice ages.

Almost as an afterthought, he also calculated what would happen if the concentration was doubled (i.e. from 280 ppm to 560 ppm) and concluded that the global average temperature would rise by 6oC. Today we call this an estimate of the Equilibrium Climate Sensitivity (ECS): the temperature rise the Earth will experience, for a doubling of CO2, once it reaches a new equilibrium [see Note 11].

Guy Callendar (1898 – 1964) was the first to publish (in 1938) evidence for a link between man-made increases in CO2 in the atmosphere and increased global temperature. His estimate for ECS was a lower but still significant 2oC (Archer, Further Reading).

Syukuro Manabe (1931-) and Richard Wetherald (1936-2011) produced the first fully sound computer-based estimate of warming from a doubling of CO2 in 1967 (Archer and Pierrehumbert, Further Reading), and since then General Circulation Models (GCMs) of the climate have been progressively refined by them and others.

The modern ‘most likely’ value of ECS is 3oC, different from the estimates of both Arrhenius and Callendar, neither of whom had the benefit of today’s sophisticated computing facilities. 3oC is the expected warming that would result from a doubling of the pre-industrial CO2 concentration of 280 ppm to 560 ppm (Reference 6).
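
A commonly used rule of thumb, which I am adding here as an assumption rather than something stated in the essay, is that the equilibrium warming scales with the logarithm of the concentration ratio, i.e. roughly ECS x log2(C/280). A short sketch:

```python
import math

ECS = 3.0    # 'most likely' equilibrium climate sensitivity quoted in the text, oC per doubling
C0 = 280.0   # pre-industrial CO2 concentration, ppm

def equilibrium_warming(c_ppm, ecs=ECS, c0=C0):
    """Approximate eventual warming (oC) using the standard logarithmic rule of thumb."""
    return ecs * math.log2(c_ppm / c0)

for c in (400, 560, 700):
    print(f"{c} ppm -> ~{equilibrium_warming(c):.1f} oC above pre-industrial, at equilibrium")

# A doubling to 560 ppm gives 3.0 oC by construction; today's ~400 ppm implies
# roughly 1.5 oC of eventual warming once the system settles to equilibrium.
```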

Figure 9 - Arrhenius

The ECS includes short- and medium-term feedbacks (typically applicable over a period of 50-100 years), which takes us to the end of the 21st century, but not the full effects of the longer-term feedbacks associated with potential changes to ice sheets, vegetation and carbon sinks, which would take us well beyond 2100.

“Traditionally, only fast feedbacks have been considered (with the other feedbacks either ignored or treated as forcing), which has led to estimates of the climate sensitivity for doubled CO2 concentrations of about 3°C. The 2×CO2 Earth system sensitivity is higher than this, being ∼4–6°C if the ice sheet/vegetation albedo feedback is included in addition to the fast feedbacks, and higher still if climate–GHG feedbacks are also included. The inclusion of climate–GHG feedbacks due to changes in the natural carbon sinks has the advantage of more directly linking anthropogenic GHG emissions with the ensuing global temperature increase, thus providing a truer indication of the climate sensitivity to human perturbations.” 
(Previdi et al., See Reference 14).

 

The so-called Earth System Sensitivity (ESS) is not widely discussed because of the uncertainties involved, but it could be as much as twice as large as the ECS according to the above quoted paper, and this would then be in the range of warming and cooling that was discussed earlier, in the record of the last 4 ice ages [see Note 12]. This is indicative of what could have occurred over these millennial timescales, and could do so again.

The key question we need to answer in the immediate future is: what pathway will the world follow in the next 50 years, in terms of its emissions and other actions (e.g. on deforestation) that will impact net atmospheric concentrations of greenhouse gases?

The IPCC 5th Assessment Report (AR5) included projections based on a range of different Representative Concentration Pathways (RCPs) leading up to 2100. Each RCP includes assumptions on, for example, how fast and by how much humanity will reduce its dependence on fossil fuels, and on other factors like population growth and economic development (see Reference 15). The actual projections of future warming depend on the decisions we make in limiting and then reducing our emissions of CO2: the lower the cumulative peak concentration the better, and the sooner we reach a peak the better.

The following figure includes four of the IPCC RCPs. The one we shall call ‘business as usual’ would be extremely dangerous (many would use the word ‘catastrophic’) with a rise in the global average temperature of 5oC by 2100. It is not a ‘worst case’ scenario, because it is not difficult to envisage futures that would exceed this ‘business as usual’ scenario (e.g. much faster economic development with fossil fuel use increasing in proportion).

Only rapid and early cuts in emissions would be safe, leading to a peak in CO2 concentration by, say, 2030 (including some efforts to bring down concentrations after this using carbon capture and storage), leading to a 1.5oC rise by 2100.

The two other, intermediate, scenarios would exceed the expected 2oC of warming and would give rise to increasingly serious (and costly) interventions, with both short-term and long-term impacts.

Figure 10 - IPCC Representative Concentration Pathways (RCPs)

The IPCC noted:
“There are multiple mitigation pathways that are likely to limit warming to below 2°C relative to pre-industrial levels. These pathways would require substantial emissions reductions over the next few decades and near zero emissions of CO2 and other long-lived greenhouse gases by the end of the century. Implementing such reductions poses substantial technological, economic, social and institutional challenges, which increase with delays in additional mitigation and if key technologies are not available. Limiting warming to lower or higher levels involves similar challenges but on different timescales.”
IPCC 5th Assessment Report, Summary for Policy Makers, SPM 3.4

Article 2 of the UN Framework Convention on Climate Change (UNFCCC), whose inaugural meeting was in Rio de Janeiro in 1992, stated that the goal was to limit “greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system”, but formal recognition of the much-cited 2oC target did not come until 2010 (Reference 16). There has been some debate about whether the target should be lowered to 1.5oC, recognising the inherent danger in the perception (or hope) that 2oC is a ‘safe’ limit we can overshoot with impunity.

A temperature trend has variabilities, as we have seen, over short to medium timescales because of several factors. These factors will continue to make themselves felt in the future.

Some people may seek comfort in the knowledge that there are areas of uncertainty (e.g. the level of impact at a regional level), but as some wise person once observed, uncertainty is not our friend. The long-term future for our climate would take an extremely long time to unfold – to reach some new Earth system equilibrium – even if we stopped burning fossil fuels today. For example, the melting of the Greenland ice sheet could take many hundreds if not thousands of years.

Some changes are relatively fast and are already being felt as the planet warms, for example:

  • About three quarters of the Earth’s mountain glaciers are receding at accelerating rates (Reference 17), putting fresh water supplies at risk in many places such as Peru and the Indian sub-continent. Some may say that we can fix this problem by desalinating sea water, as they do in the Middle East, and even power this using solar, as the Saudis are planning to do; but this would be a massive extra burden on stressed global water resources, would require significant additional electricity capacity, and brings with it huge risks to natural and human systems.
  • Sea levels are rising faster than expected and are predicted to rise by up to 1 metre by 2100 (Reference 6). We could eventually see a rise of about 2.5 metres per 1oC rise in global surface temperature, so even if the world keeps to the 2oC commitment we could anticipate a sea level rise of about 5 metres eventually (Reference 18), putting at risk the majority of our cities that lie close to sea level today, where a growing share of the world’s population (about 50%) now resides. Note that while the IPCC scenarios focus on the state of the climate reached by 2100, in the longer term changes could be locked in that have impacts for thousands of years (Reference 19).
  • While a warmer climate can extend growing seasons in temperate zones, it can also bring problems for plants, such as heat exhaustion, irrigation problems, and an increased range and resilience of insect pests, whose outbreaks often defoliate, weaken and kill trees. For example, pine beetles have damaged more than 1.5 million acres of forest in Colorado, and this is attributed to global warming. The impact of temperature rise on food crops like wheat is expected to be negative overall, with yields likely to drop by 6% for every 1oC rise in temperature, according to a recent paper in Nature (Reference 20).
  • The acidity of the oceans has increased by 30% since pre-industrial times (Reference 21), and it increases every year, with a further 2 billion tonnes of CO2 being added to the upper layer of the oceans. This is already having an impact on corals, but in the longer term it can affect any creatures that form calcium carbonate to build skeletons or shells. Plankton underpin a large part of the marine food chain and are thereby threatened by increasing CO2.

The IPCC analysed the widespread impacts of global warming that are already being felt (the following graphic is from the IPCC report, but the bullets are the author’s summary of just a selection of impacts).

Figure 11 - Current Global Impacts

Plants and animals evolve over long periods, so sudden changes cannot be compensated for by equally rapid biological evolution.

The planetary system is mind-bogglingly complex, and has huge reservoirs of carbon in fossil fuels and even greater ones in the deep ocean, so it is a marvel how the combination of physical and biological processes has managed to keep the concentration of CO2 in the atmosphere remarkably stable for a long time.

The Earth, as James Lovelock famously observed, is like a living system. Without life, there would be little or no oxygen in the atmosphere. If there were much more than its current 21% share, the atmosphere would be dangerously flammable; if there were much less, we mammals would struggle to breathe.

We see intricate balances in nature wherever we look in the biosphere and physical systems. That is why small changes can have big effects. You may wonder how an averaged global temperature change of 1oC or 2oC can have any significant effects at all.

The first point to realise is that this is an average, and it reflects much larger swings in temperature as well as regional differences. The Arctic, for example, is warming at a faster rate than elsewhere, and the lower atmosphere warms as the upper atmosphere cools: these are two effects long predicted by climate models (as far back as the crude models of the 1950s, long before the predictions were confirmed by satellite measurements).

One result of these changes in the Arctic is that the jet stream running below it is slowing and becoming more wiggly. This wiggly jet stream can accentuate extremes and create phenomena like blocking highs that fix weather patterns in place for longer than normal. This is already leading to increased risks of extreme events after just 0.8oC of average global warming.

To illustrate what this might mean for western Europe and the UK, let’s look at heatwaves. When the average temperature is shifted a little higher, so are the extremes. What was very rare becomes merely rare, and what was infrequent can become quite frequent (Reference 22). While a specific heatwave is not attributable to global warming, the odds mean that some are, and increasingly so as the average temperature increases. This perhaps obvious point is now backed up by research:
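
The shift in the odds can be illustrated with a toy calculation. The numbers below are invented for illustration and are not taken from the research quoted next: assume summer temperatures follow a roughly normal distribution, and ask how often a fixed ‘heatwave’ threshold is exceeded before and after a modest shift in the average.

```python
import math

def prob_exceeding(threshold, mean, sd):
    """P(T > threshold) for a normal distribution, computed via the error function."""
    z = (threshold - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

sd = 1.5         # assumed spread of summer temperatures, oC (illustrative)
threshold = 3.0  # 'heatwave' defined here as 3 oC above the old average (illustrative)

p_before = prob_exceeding(threshold, mean=0.0, sd=sd)
p_after = prob_exceeding(threshold, mean=1.0, sd=sd)   # shift the average up by 1 oC

print(f"Chance per summer before the shift: {p_before:.1%}")
print(f"Chance per summer after a 1 oC shift: {p_after:.1%}")
print(f"Roughly {p_after / p_before:.0f}x more frequent")

# Even a small shift in the average makes what used to be a rare extreme
# several times more likely, which is the point being made in the text.
```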

“The summer of 2003 was the hottest ever recorded for central and western Europe, with average temperatures in many countries as much as five degrees higher than usual. Studies show at least 70,000 people died as a result of the extreme high temperatures. In August alone, France recorded over 15,000 more deaths than expected for that time of year, a 37 per cent rise in the death rate. The same month also saw almost 2,000 extra deaths across England and Wales … While a heatwave used to happen once every 50 years, we’re now likely to see one every five years, the study concludes.” Robert McSweeney (Reference 23)

Similar increases in frequency could occur for other kinds of extremes like the flooding that hit Somerset in the UK during 2013-14. These regional impacts (current and projected) are being researched through ‘attribution studies’ by the UK’s Met Office, for example.


6. Can mankind stay within the 2oC goal?

In just 150 years we humans have emitted over 2,000 billion tonnes of carbon dioxide (abbreviated as 2,000 GtCO2) by burning fossil fuels that were buried for millions of years. On the back of the energy we have unleashed, we have achieved huge advances in nutrition, medicine, transport, industry and elsewhere.

To have good odds of avoiding going beyond the 2oC rise (compared to pre-industrial levels) that the nations of the world have committed to, the world should emit no more than 565 GtCO2 in the 40 years from 2010 to 2050 (References 24, 25, 26). This is a red line (otherwise called the ‘carbon budget’) that we should not cross.

There is an equivalent of 3,000 GtCO2 (emissions potential) in the known reserves of listed companies. At our current rate of over 40 GtCO2 (equivalent) emissions a year [see Note 13] we would reach the red line by 2030. By 2050 we would be well beyond the red line, and we would exhaust the reserves by 2075 [see Notes 13 and 14].
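
The arithmetic behind these dates is easy to check with a few lines. The budget, reserve and emission-rate figures are those quoted in the text; the constant-emissions assumption is a deliberate simplification of my own.

```python
# Back-of-the-envelope check of the carbon budget arithmetic in the text,
# assuming (unrealistically, but simply) a constant emission rate.

budget_gtco2 = 565.0         # remaining 'red line' budget from 2010, GtCO2 (as quoted)
reserves_gtco2 = 3000.0      # emissions potential of known listed reserves, GtCO2 (as quoted)
rate_gtco2_per_year = 40.0   # approximate annual emissions, GtCO2 (as quoted)
start_year = 2010

budget_exhausted = start_year + budget_gtco2 / rate_gtco2_per_year
reserves_exhausted = start_year + reserves_gtco2 / rate_gtco2_per_year

print(f"Budget used up around {budget_exhausted:.0f}")      # ~2024, i.e. well before 2030
print(f"Reserves used up around {reserves_exhausted:.0f}")  # ~2085 at a constant rate

# The text's figures ('by 2030' for the red line, ~2075 for the reserves) come
# from similar rough arithmetic with different assumptions about how emissions
# evolve; either way the orders of magnitude are the same.
```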

Figure 12 - Fossil Fuel Red Line

There are factors that will change the rate of emissions. Increasing consumption per capita in developing countries will increase the annual emissions if fuelled by carbon-based sources of energy. On the other hand, as countries transition to zero carbon sources of energy, there will be a trend to reduce emissions. This means that the ‘carbon budget’ may be spent over a shorter or longer duration. It is clearly a question of which of these two forces wins out over this period of transition.

However, the annual rate of CO2 increase during the four years up to 2015 has consistently exceeded 2 ppm, and in 2015 was about 3 ppm, as NOAA has reported. Clearly there is no sign yet of a levelling off of emissions globally.

In the year 2000, the carbon footprint between the highest and lowest consumers differed by a factor of about 10. The USA was close to 25 tonnes of CO2 net emissions (equivalent) per person each year, compared to India, which was more like 2 tonnes (Mackay, Further Reading).

“Now, all people are created equal, but we don’t all emit 5½ tons of CO2 per year. We can break down the emissions of the year 2000, showing how the 34-billion-ton rectangle is shared between the regions of the world.”

Figure 13 - Carbon Footprint

“This picture … divides the world into eight regions. Each rectangle’s area represents the greenhouse gas emissions of one region. The width of the rectangle is the population of the region, and the height is the average per-capita emissions in that region. In the year 2000, Europe’s per-capita greenhouse gas emissions were twice the world average; and North America’s were four times the world average.” (Professor David Mackay, Further Reading)

The above graphic (based on year 2000 data) is taken from Professor David Mackay’s book “Sustainable Energy without the Hot Air”. This book provides a clear approach to understanding our options and making the maths of energy consumption and supply stand up to scrutiny: five different scenarios for reducing our carbon emissions are discussed to meet our energy needs.

It is also worth noting that research by Oxfam published in 2015 indicates that the top 10% of the world’s population are responsible for 50% of emissions, and that extreme carbon inequality exists around the world (Reference 27).

While much of the debate about ‘alternatives’ focuses on energy production (wind, solar, nuclear, etc.), consumption is an equally important topic. There is a need for radical reductions in consumption in order to have any chance of meeting emissions targets.

Imagine a world in 2050 where the population has risen to, and stabilised at, around 9 billion, in part due to a rising middle class (making up perhaps 50% of the population) with smaller families but higher per capita consumption levels: total energy demand might then have grown nearly 5-fold. The number of people aspiring to an energy-intensive lifestyle is likely to grow in proportion.

If we continue with fossil fuels generating 80% of our energy, we would expect that the global emissions would increase proportionally to say 5 times the current levels. At that rate we would go beyond desired levels well before 2050, setting in train a temperature rise well past the 2oC goal, and placing the planet on a path to unstoppable and calamitous global warming.

We would also have deferred the necessity to prepare for a world without fossil fuels, and through this delay we would have created an even steeper cliff to climb to make the transition to zero carbon.

Despite their different starting points, the per capita carbon emissions of all countries need to be planned to move towards zero carbon by 2100, and drastically reduced well before then. Professor Sir David King in his Walker Institute Lecture illustrated a possible scenario to achieve this (Reference 28):

Figure 14 - Getting countries to converge

The Paris Climate Summit in December 2015, which was the 21st Conference of the Parties to the UNFCCC (UN Framework Convention on Climate Change) or COP21 for short, has been crucial in providing a framework to achieve this. New to this COP has been an emphasis on ‘bottom up’ initiatives at regional and national levels. The so-called Intended Nationally Determined Contributions (INDCs) have set targets and will enable countries to manage their own plans towards a low carbon future.

Some developed countries, like the USA and the UK, have already been cutting emissions per capita from high levels. Economies like China and India, starting at a relatively low level, will rise in per capita emissions, peaking by 2030 if possible.

All countries should be aiming to converge on 2 tonnes of CO2 per capita by say 2050, then meet the zero target by 2100.

The above graph from King’s Walker Institute Lecture (Reference 28) plots an outline path towards a zero carbon 2100. The developed and developing parts of the world will follow different routes but need to converge well before 2100 on a greatly reduced level of emissions per capita.
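
To get a feel for what convergence on 2 tonnes per person would add up to globally, here is a quick sketch. The 2 tonne target and the roughly 9 billion population figure appear in the essay; the rest is simple arithmetic of my own.

```python
# What a 2 tonnes-CO2-per-person world adds up to, using round numbers from the essay.

population_2050 = 9e9          # roughly 9 billion people (figure used earlier in the text)
per_capita_target_t = 2.0      # tonnes of CO2 per person per year by ~2050
current_global_gtco2 = 40.0    # approximate current annual emissions, GtCO2

target_global_gtco2 = population_2050 * per_capita_target_t / 1e9  # tonnes -> Gt

print(f"Global emissions at the target: ~{target_global_gtco2:.0f} GtCO2/year")
print(f"Reduction needed versus today: ~{1 - target_global_gtco2 / current_global_gtco2:.0%}")

# Converging on 2 tonnes per person by 2050 means roughly 18 GtCO2/year, i.e.
# cutting today's emissions by more than half, before heading to zero by 2100.
```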

This journey has already started and has been enabled by building new markets. The price per Watt of photovoltaics (PV) has fallen from $76 in 1977 to $0.30 in 2015 (according to Wikipedia). This was helped enormously by the introduction of feed-in tariffs in Europe, which helped create a growing market for PV; competition and innovation then combined to drive down the unit price. This is how markets develop, and it means that the rest of the world can benefit from the seed this created. However, there is a huge mountain to climb to transition from the current energy model to a transformed one. It is not a question of if, but of when, this must happen.

While Sir David King shows it is possible to stay below 2oC if we act with urgency, it is becoming increasingly difficult to do so, and some would argue that, given the procrastination to date, it is no longer realistic. However, that does not negate the need to push for the most aggressive reductions in emissions that are achievable.


7. Is international agreement possible?

There are many examples of where regional and international agreements have successfully regulated environmental pollution, such as acid rain and lead in petrol.

A good example is to recall what was done to address the hole in the Ozone Layer, which was being caused by certain chemicals such as CFCs. This led to the Montreal Protocol (1987), and most importantly the subsequent agreements in London (1990), Copenhagen (1992) and Beijing (1999). The targets for harmful emissions were progressively reduced, including mechanisms to enable the market to transition away from CFCs. The world came together effectively to regulate and progressively reduce the threat.

The following picture demonstrates that agreements on global environmental challenges, like reducing damaging pollutants in the atmosphere, can be effective, but require sustained effort over a number of years.

Figure 15 - CFCs Ozone Hole

For global issues like the ozone hole, internationally agreed targets are essential, as Margaret Thatcher observed in her speech to the UN in 1989 (Reference 29). But this leaves industry free to compete. They can make fridges, innovating and competing on a level playing field, albeit one without CFCs.

Global warming is a much more challenging problem to solve. The history of the genesis of the IPCC formed in 1988 is discussed in Weart (Further Reading), and shows how long it took for the foresight of the pioneers in the field to be followed up, and for this to lead to internationally coordinated efforts.

On 1st June 2015, the CEOs of Shell and some other major European-based oil & gas companies wrote to the Financial Times (Reference 30), with their letter entitled “Widespread carbon pricing is vital to tackling climate change”, which was also the basis for a submission they made to the Paris Conference (COP21). This demonstrates that the oil & gas industry is showing some indications of wanting to engage meaningfully, at least in Europe (albeit alongside its contentious desire to promote gas as a bridge to a zero carbon future).

The following Figure (Reference 28), taken from Professor Sir David King’s recent talk, illustrates some of the international and national initiatives.

Figure 16 - Timeline for Climate Action

In short, it is not a choice between environmentalism and regulation on the one hand, or free enterprise on the other, but in fact a combination of all three. There is not only room for innovation and entrepreneurialism in a greener world, but a necessity for it.


8. Planning a path to zero carbon that supports all countries

The path to low carbon will of course require addressing fossil fuels, in electricity generation, transport and industry. However, it is worth noting that improved machine efficiency, reduced travel, better buildings, etc., can also make significant contributions (it is not just about changing the source of energy). The concept of ‘stabilization’ of the climate through multiple parallel initiatives has been around for some time (see for example Reference 31).

Nevertheless, the role of fossil fuels remains a dominant feature of our energy landscape, and the question arises as to how we ensure ‘equity’ in a world where the developing world has neither been responsible for, nor had the benefits of, most of the fossil fuel burned to date.

However, those that claim we would hold back developing countries by denying them the benefits of cheap fossil fuels are ignoring 3 things:

  • When carbon pricing, or equivalent mechanisms, properly reflect the damage that is being done, and will be done, then fossil fuels will no longer be cheap.
  • The sooner we commit to a future without fossil fuels, the sooner we can develop the new infrastructure and systems needed to enable the transition, including new sources of energy, smart networks, information systems and conservation.
  • Some countries are already moving in this direction. Denmark has a goal of producing 100% of its energy from renewables by 2050, and Ethiopia is committing to reduce their CO2 emissions by two thirds by 2050. Despite all the rhetoric, China and the USA are adding large amounts of wind and solar power, and have made recent bilateral commitments. Even in the UK, with huge resistance to renewables in the media at least (which overstates the public’s views), renewables are significant: “Renewable energy provided 13.4 GW, or 43%, of British electricity at 2pm on Saturday 6th June 2015. I believe this is a new record” (Reference 32). This was an exceptional day, but nevertheless it may surprise many people, and is indicative of what could be possible. Also, in the second quarter of 2015, renewables generated more electricity than either nuclear or coal.

So a start has already been made. Globally we need to greatly increase the level of commitment and delivery, as there is no reason why renewables could not power humanity’s needs:

“Meeting 100 per cent of global energy demands through renewable energy is technically and economically feasible. The main problems are political and social.” Professor Mark Jacobson (Reference 33)

To achieve transformational change one needs a vision and a plan, which will have multiple streams of activity. The Solutions Project have a state-by-state plan to get the USA to zero emissions by 2100 (Reference 34).

For reasons of geography, a similar vision is more challenging for the UK, but the Centre for Alternative Technology (CAT) has developed a strategy that shows what could be achieved if we choose that path (see CAT’s Zero Carbon Britain report, Reference 35).

Internationally, we need to have a similar vision and plan to push each stream forward in the overall transformation. In so doing the target needs to include a significant cut in carbon emissions by 2050 in order to keep within the 2oC goal.

The earlier we reach a global peak in annual emissions of CO2, and the lower the peak in total concentration in the atmosphere, the greater the chance of achieving the goal. So every year of delay amounts to additional risk. There is a cost to procrastination, as Michael Mann wisely observed.

The World Bank has produced a report showing how decarbonization of development can be achieved, with early action on transportation being a key priority (Reference 36).

The following figure is a simplified extract from the referenced World Bank report, giving a flavour of the steps required to get to zero carbon (please read the full report to get a proper appreciation of the strategy).

Figure 17 - Steps to Zero-Carbon


9. The transformation to a zero carbon future

As Elon Musk said, “I think the solution is fairly obvious … we have this handy fusion reactor in the sky” (Reference 37). Man-made fusion reactor technology has no prospect of digging us out of our current carbon hole, which requires action now, not in 50 years’ time (commercially scalable fusion energy is famously always 50 years away), though no doubt in the distant future it could play a role [see Note 15].

There are many other forms of zero carbon energy to consider – including renewables like wind and wave power – and each country will have its own choices to make based on a wide range of factors. In our windy UK, wind and tidal power have particular potential. However, there are reasons for believing that solar power will play a major role in the future on a global scale.

Today, and every day, the Sun radiates huge amounts of energy onto the Earth:

“The planet’s global intercept of solar radiation amounts to roughly 170,000 TeraWatt [TW] (1 TW = 1000 GW). … [man’s] energy flow is about 14 TW, of which fossil fuels constitute approximately 80 percent. Future project[ion]s indicate a possible tripling of the total energy demand by 2050, [which] would correspond to an anthropogenic energy flow of around 40 TW. Of course, based on Earth’s solar energy budget such a figure hardly catches the eye …”

Frank Niele (Reference 38).

Humans currently require about 15 TW of power (15,000 GW), and while this would grow as the Earth’s population and standards of living rise (and probably stabilise), it is clear that by harnessing a fraction of the energy provided by the Sun we could accommodate humanity’s energy needs.

If, in 2050, humanity’s power demand peaks at 40 TW, then a modest 10,000 solar arrays, each of 100 square kilometres (10 km x 10 km), distributed around the world would deliver at least 100% of our needs [see Note 16].
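
The rough arithmetic behind this claim is easy to reproduce. The number and size of the arrays, and the 40 TW demand, are the figures in the text; the average surface insolation and overall efficiency below are assumptions of my own, set at commonly quoted round values.

```python
# Rough check of the solar array claim. Array count/size and the 40 TW demand
# are from the text; the insolation and efficiency figures are assumed values.

arrays = 10_000
area_per_array_m2 = 100 * 1e6               # 100 km^2 = 1e8 m^2 each
total_area_m2 = arrays * area_per_array_m2  # 1e12 m^2, i.e. a million km^2

avg_insolation_w_m2 = 200.0  # assumed day-and-night average sunlight at the surface
overall_efficiency = 0.20    # assumed overall conversion efficiency

delivered_tw = total_area_m2 * avg_insolation_w_m2 * overall_efficiency / 1e12

print(f"Total array area: {total_area_m2 / 1e6:,.0f} km^2")
print(f"Delivered power: ~{delivered_tw:.0f} TW, versus a projected demand of ~40 TW")

# With these assumptions the arrays deliver roughly 40 TW, the same order as
# the projected peak demand quoted in the text.
```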

Figure 18 - Solar Key to Transformation

Achieving this solar energy potential in its full sense will require a sustained programme to create a flexible transmission and storage infrastructure, able to handle a distributed renewables network. It would require grid-scale solutions, able to store GW hours of energy. All of this is achievable. The solutions are receiving a lot of focus (Reference 39).

In addition to the domestic and utility scale batteries that Tesla Energy and others are developing, there are other ingenious ideas such as the Hydraulic Rock Storage System invented by Professor Dr. Eduard Heindl (Reference 40). This is analogous to the existing pumped-storage reservoirs in places like Scotland, but uses a more compact system.
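
To give a sense of the scale implied by ‘GW hours’ of gravitational storage, here is a rough calculation of the mass that has to be raised. The 1 GWh target and the 500 metre lift are illustrative round figures of my own, not specifications of the Heindl design.

```python
# How much mass must be lifted to store 1 GWh gravitationally?  E = m * g * h,
# rearranged for m. Illustrative round numbers only, not the Heindl design.

energy_j = 1e9 * 3600    # 1 GWh expressed in joules
g = 9.81                 # gravitational acceleration, m/s^2
lift_height_m = 500.0    # assumed height difference (illustrative)

mass_tonnes = energy_j / (g * lift_height_m) / 1000.0  # kg -> tonnes

print(f"Mass lifted through {lift_height_m:.0f} m to store 1 GWh: ~{mass_tonnes:,.0f} tonnes")
# Roughly 0.7 million tonnes of rock or water, which is why grid-scale
# gravitational storage needs either very large masses or very large heights.
```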

Figure 19 - Energy Storage

So while we all feel daunted by the transition that needs to be made from our carbon-centric world to a zero-carbon one, it is reassuring to know that some brilliant minds are on the case. They are not waiting for the politicians to all agree.

It is worth recalling that the abolition of the slave trade and then slavery itself met with huge resistance in Britain, embedded as it was in the economy. The point is that sometimes things seem impossible at the start of a change, but appear to be obvious and inevitable with the benefit of hindsight.

The consultancy McKinsey has written of the disruptive impact of solar power on the energy market (Reference 41), in part because it supplies electricity when demand is at its peak, thereby undermining the profits of traditional sources of energy that rely on high prices at these times.

There are huge challenges to society to become less wasteful of its material and energy resources, to ensure sustainability for everyone on Earth. However, this is achievable without going back to a pre-industrial past.

It will mean a greater democratisation of resources, and an acceptance that the process of achieving the goals of improved health, nutrition and other measures of well-being cannot be fuelled by fossil fuels. The carbon route is a dead end that will bring more pain than gain.

The impact of global warming on its current trajectory would be disastrous for humanity. And while four fifths of currently known reserves of hydrocarbons are deemed to be un-burnable ‘stranded assets’ if we want a good chance of staying under 2oC (as illustrated earlier), do not expect the carbon industries to be content with current reserves.

They are continuing as we speak to uncover more reserves of carbon in the Arctic, in the Canadian tar sands, through ubiquitous fracking, and so it goes on. Peak oil? Forget it! With advanced seismic techniques the geologists will continue to find reserves. The world has become drunk on carbon!

There is another way. We see the pressure building to ensure those dangerous carbon assets, both present and future, become stranded.

Diverse voices (Reference 42) are raising concerns: the Governor of the Bank of England is urging the financial community to consider the risk of stranded assets; the Pentagon has talked about global warming as a ‘threat multiplier’; and Pope Francis has now added his voice, concerned at the ethical dimensions of global warming.

Figure 20 - Diverse Voices

More radical voices are also coming to the fore including the author Naomi Klein, who sees global warming not so much as an issue of sustainable energy per se, but of justice for those who are and will be most impacted by global warming. While global warming has not been a central issue in recent general elections in the UK, it is rising up the political agenda. It is hardly ever out of the news, and campaigns like the ‘divestment’ movement are getting a lot of people thinking. Many organisations are divesting from fossil fuels.

Those commentators who see reductions in CO2 emissions as a low priority goal in a world crying out for cheap energy to drive developmental goals in emerging economies are falsely framing carbon reduction and economic development as mutually exclusive goals. Far from being just another global problem to add to a long list, global warming has become the defining issue that now frames the others.

“So is the climate threat solved? Well, it should be. The science is solid; the technology is there; the economics look far more favorable than anyone expected. All that stands in the way of saving the planet is a combination of ignorance, prejudice and vested interests. What could go wrong? Oh, wait.”   Paul Krugman (Reference 43).

Our response should be positive and aspirational, heralding huge possibilities for innovation and positive changes for a cleaner and sustainable environment. Some countries are already deciding to take this route.

This is a future that remains energy rich, but fuelled by zero carbon sources, with greater energy efficiency and less waste than in our current throw-away culture. In this new world we will address the global challenges of the developing and developed world, because they are linked not separate. No country will be stranded.

We will also be aware of each other’s different backgrounds, cultures and values, which may determine which alternative energy resources we favour or fear. Inclusive public debate is a must.

In reality, the developmental goals that are being pursued in the developing world are crying out for a new model. Zero carbon development, including a major role for solar and other renewables which can be scaled up fast at both small and industrial scales, will help create this new model. Even new Saudi desalination plants are to be powered by solar power. The writing is on the wall for fossil fuels.

Such developments offer hope that a transition to a zero carbon world is not merely feasible within the right timescales, but is actually already underway, and offering a much more credible and sustainable future than a high-risk one based on fossil fuels.

Figure 21 - Ending on a lighter note

This is a weighty topic, albeit such an important one. In order to end on not just a positive but also a lighter note, I invite you to enjoy the graphic I have included above! My own comment in response is, of course:

We can create a better world, so it won’t be for nothing!

(c) Richard W. Erskine, 2015 (Revised March 2016).

—————————————————————–


****************

References

****************

For completeness references are included, if only to highlight the longevity, depth and diversity of work that has gone into building our current understanding of global warming and its implications. However, for the general reader, I recommend Further Reading, which includes some free to access books and other resources.

  1. Tyndall, J. (1861), ’On the absorption and radiation of heat by gases and vapours, and on the physical connexion of radiation, absorption, and conduction’, Philosophical Magazine Series 4, Vol. 22: 169-94, 273-85.
  2. Urey, H. C. (1947), ‘The thermodynamic properties of isotopic substances’, J. Chem. Soc., 562-581.
  3. J.R. Petit, J. Jouzel, et al., ‘Climate and atmospheric history of the past 420,000 years from the Vostok ice core in Antarctica’, Nature 399 (3 June), pp. 429-436, 1999.
  4. Courtillot, V., “Evolutionary Catastrophe: The Science of Mass Extinction”, Cambridge University Press, 1999.
  5. Painting, R., ‘Ocean Warming has been Greatly Underestimated’, Skeptical Science, 14 October 2014
  6. Fifth Assessment Report (AR5), Intergovernmental Panel on Climate Change (IPCC), is available in full
  7. ‘Global Climate Change: Vital Signs of the Planet’, NASA
  8. “2015 was the hottest year on record”, Tom Randall & Blacki Magliozzi, Bloomberg, 20th January 2016
  9. Animation of the data is provided by the following link [use arrow at base of picture to step through] “What’s Really Warming the World?”
  10. A graphical and highly accessible summary of the IPCC AR5 in about 200 pages can be found in: “Dire Predictions: Understanding Climate Change: The Visual Guide to the Findings of the IPCC”, by Michael Mann and Lee R. Kump, DK Publishing & Pearson, 2015. [also now available as an eBook]
  11. A useful summary of the IPCC findings can be found on-line at Serendipity
  12. ‘“Keeling curve” of carbon dioxide levels becomes chemical landmark’, NOAA, 27 April 2015
  13. Regarding climate models (state of art, emergent patterns & uncertainties):
  14. “Climate sensitivity in the Anthropocene”, M. Previdi et al, Quarterly Journal of the Royal Meteorological Society, Volume 139,  Issue 674,  July 2013 Part A
  15. “The Beginner’s Guide to Representative Concentration Pathways”, G.P. Wayne, Sceptical Science, v1.0, August 2013
  16. “Two degrees: The history of climate change’s ‘speed limit’”, Mat Hope & Rosamund Pearce, 8th December 2014, Carbon Brief
  17. “Melting glaciers are caused by man-made global warming, study shows”, Steve Connor, The Independent, 14th August 2014
  18. “Latest numbers show at least 5 metres sea-level rise locked in”, New Scientist, Michael Le Page, 10th June 2015
  19. “Consequences of twenty-first-century policy for multi-millennial climate and sea-level change”, Peter U. Clark et al, Nature Climate Change (2016)
  20. “Global warming will cut wheat yields, research shows”, Fiona Harvey, The Guardian, 23 December 2014
  21. “What is ocean acidification?”, NOAA
  22. “Climate Change and Heat Waves”, Kaitlin Alexander, 3rd April 2012
  23. “European summer heatwaves ten times more likely with climate change”, Robert McSweeney, The Carbon Brief, 8 Dec 2014
  24. Olivier JGJ, Janssens-Maenhout G, Muntean M and Peters JAHW (2014), ‘Trends in global CO2 emissions; 2014 Report’, The Hague: PBL Netherlands Environmental Assessment Agency; Ispra: European Commission, Joint Research Centre
  25. “How much of the world’s fossil fuel can we burn?”, Duncan Clark, The Guardian, 23 March 2015
  26. ‘Unburnable Carbon – Are the world’s financial markets carrying a carbon bubble?’, Carbon Tracker Initiative
  27. “Extreme Carbon Inequality”, Oxfam, December 2015.
  28. King, D., ’The Paris UN Climate Summit – Hopes and Expectations’, Walker Institute Annual Lecture,10th June 2015
  29. “Speech to United Nations General Assembly (Global Environment)”, Margaret Thatcher, 8 November 1989. 
  30. “Widespread carbon pricing is vital to tackling climate change”, Financial Times, 1st June 2015, Signed by: Helge Lund, BG Group plc; Bob Dudley, BP plc; Claudio Descalzi, Eni S.p.A.; Ben van Beurden, Royal Dutch Shell plc; Eldar Sætre, Statoil ASA; Patrick Pouyanné, Total S.A.
  31. Pacala, S and Socolow, R, ‘Stabilization Wedges: Solving the Climate Problem for the Next 50 years with Current Technologies’, Science, Vol. 305, 13th August 2004.
  32. “New record for UK renewables output”, Carbon Commentary, 7th June 2015
  33. Professor Mark Jacobson, Director of Atmosphere and Energy, Stanford University and co-author, Powering a Green Planet
  34. “100% Renewable Energy Vision”, The Solutions Project (this is a state-by-state plan for the USA)
  35. A plan to make UK energy use 100% renewable has been developed by CAT: “Zero Carbon Britain: Rethinking the Future”, Centre for Alternative Technology, 2013.
  36. Fay, Marianne; Hallegatte, Stephane; Vogt-Schilb, Adrien; Rozenberg, Julie; Narloch, Ulf; Kerr, Tom. 2015. Decarbonizing Development : Three Steps to a Zero-Carbon Future. Washington, DC: World Bank. © World Bank
  37. “The Missing Piece”, 2015 Tesla Powerwall Keynote by Elon Musk, 1st May 2015
    • Also go to Tesla Energy to see the Powerwall
  38. Energy: Engine of Evolution, Frank Niele, Shell Global Solutions, 2005.
  39. Energy Research in North Rhine-Westphalia: The Key to the Energy Transition
  40. “Hydraulic Rock Storage: A new concept in storing electricity”, Heindl Energy
  41. “The disruptive potential of solar power: As costs fall, the importance of solar power to senior executives is rising”, David Frankel, Kenneth Ostrowski, and Dickon Pinner, McKinsey Quarterly, April 2014
  42. Diverse voices:
  43. “Salvation Gets Cheap”, Paul Krugman, New York Times, 17th April 2014



****************

Further Reading

****************

This is by no means an exhaustive list but includes some favourites of mine.

Items 1 and 4 are freely available on-line and offer an accessible combination of the history of global warming science and practical ideas on meeting our energy needs in the future – so good places to start one’s exploration of this broad subject.

For those wanting historical primary sources, Item 2 includes reprints of the paper by Tyndall (1861) and other seminal papers from 1827 to 1987, from a range of key scientific contributors (not all cited in the essay, but no less important for that), covering diverse topics. A history of the research into ice cores is well covered in item 3 in a popular form, by a leading geologist specialising in climate change (and if you visit Youtube, one of the most entertaining speakers you will find on any subject), Professor Richard Alley.

The IPCC report (Reference 6) is an impressive but challenging document. You can probably find time to read the ‘Summary for Policy Makers’, but for a compelling and pictorial guide, Item 5 is highly accessible. 

If you would like to explore the science more then Item 6 includes scientific treatments for those with some appetite for more technical explanations of the fundamental science, and won’t be scared off by a few equations: (a) Is a relatively accessible and short book from a leader in the field of the global carbon cycle and its relationship to climate change, Professor David Archer; (b) Is a scientifically literate and well structured blog (rather like a book in web form), that politely deals with blog comments, so useful for those wanting to explore deeper scientific questions, but having difficulty accessing the books; and (c) Is a complete, undergraduate level, textbook for those wanting a structured and coherent synthesis of the science, in all its details, from a leader in planetary climate science, Professor Raymond Pierrehumbert, who was a lead author of  the IPCC AR4 Report.

If you want to explore some of the debating points that are often raised about the science, then Item 7 provides a good guide: Skeptical Science does a good job at responding to the many myths that have been spread in relation to the science underpinning our understanding of global warming; Climate Feedback provides annotations of articles which abuse or misuse the science, so you can see comments and corrections in context.

With the exception of Professor David Mackay’s book, I have avoided books or sources covering policy questions (sustainability, energy, economics, etc.), which are crucial to engage on but outside the main thread of this essay.

  1. The Discovery of Global Warming, Spencer R. Weart, Harvard University Press, 2008 (Revised and Expanded Edition).
  2. The Warming Papers – The Scientific Foundation for The Climate Change Forecast, David Archer and Raymond Pierrehumbert, Wiley-Blackwell, 2011
  3. The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change and Our Future, Richard B. Alley, Princeton University Press, 2000
  4. Sustainable Energy – Without The Hot Air, David JC Mackay, UIT Cambridge Ltd, 2009
  5. Dire Predictions: Understanding Climate Change: The Visual Guide to the Findings of the IPCC, Michael Mann and Lee R. Kump, DK Publishing & Pearson, 2015.
  6. More technical, scientific treatments:
  7. Countering myths



****************

Notes

****************

  1. If there were no heat-trapping (infra-red absorbing) gases in the atmosphere, the temperature can be calculated using Stefan’s Law, and the answer is about -15°C. This is roughly the average temperature on the Moon, which receives about the same amount of visible radiation from the Sun per square metre as the Earth does, but has no atmosphere. So why is the Earth much warmer than this? When visible light from the Sun heats the surface of the Earth it warms up, but at the same time it emits energy in the form of longer wavelength infra-red radiation, which is partly absorbed by CO2, although some infra-red still escapes into space. How does this change the temperature of the Earth? It can be thought of as a bucket of water with a hole in it. The visible light is like water being poured into the bucket, whereas the infra-red is like water leaking from the bucket. At some point these balance each other, as the water rises to a point whereby the pressure is sufficient to ensure that the outward flow of water equals the inward flow. The level of the water reached represents, by analogy, the equilibrium energy retained by the Earth, which translates to a warming of the Earth’s surface. Because of the heat-trapping gases, the temperature on Earth is 30°C higher (so about 15°C on average). A short numerical sketch of this and several of the other calculations in these Notes is included after Note 16.
    • Note that we could have started the narrative with Fourier, who in 1827 had worked out the broad principles of what would be needed to explain the warming of the Earth’s atmosphere. However, I chose to focus the narrative on the ice ages. This is not to diminish Fourier’s contribution and I recommend Weart (Further Reading) to get a fuller account of all the scientists who have made seminal contributions.
  2. Understanding the atmospheric ‘greenhouse’ effect:
  3. While there is little doubt that Milankovitch cycles play a key role in the ice ages, the details are subtle. For example, while a change in the eccentricity of the orbit will change the amount of sunlight reaching a pole during its summer, averaged over a year the change in total energy reaching the Earth is small. The key insight is that the northern hemisphere has more land and overall more ‘seasonality’, so that the change in energy absorbed in the northern hemisphere when the snow/ice cover drops becomes highly significant. There are subtle details to this process, involving the Milankovitch cycles, the cryosphere and carbon reservoirs, that are still the subject of on-going research. A useful discussion of these subtleties can be found at SkepticalScience, including references to primary research.
  4. The carbon cycle is complex and works using different mechanisms over different cycle times. Over the period of the ice ages, there was an overall reduction in CO2 in the atmosphere during the colder periods, but this is not as simple as saying that colder sea water absorbed more CO2. This is clarified in a very useful article: “Does temperature control atmospheric carbon dioxide concentrations?”, Bob Anderson, 7th July 2010, Earth Institute Columbia University
  5. The increase in water vapour concentrations is based on the fact that “a well-established physical law (the Clausius-Clapeyron relation) determines that the water-holding capacity of the atmosphere increases by about 7% for every 1°C rise in temperature” (IPCC AR4 FAQ 3.2). For a doubling of CO2 in the atmosphere, the well-established radiative physics (definitively laid down in “Radiative Transfer”, S. Chandrasekhar (1950), and a cornerstone of climate models) tells us that this would lead to about a 1°C warming. However, the effect of water vapour is to add an additional 2°C of warming (and, as with CO2, it’s the energy budget at the top of the atmosphere that is key in determining the warming of the troposphere). This is a fast feedback. This adds up (1+2) to the 3°C of warming overall. This estimate excludes the effects of clouds in the lower troposphere (which tend to lower temperatures by reflecting sunlight) and the upper troposphere (which tend to help trap heat), but these overall appear to cancel each other out, and so have a net neutral impact on the temperature change overall [this is however an area of active research, with a number of questions to be resolved]. There is often confusion about the role of water. For example, a common misconception is that increases in water vapour will lead to more clouds that will then offset the warming; this is false because the relative humidity (which is what largely governs the propensity for cloud formation) stays almost the same (as discussed by Chris Colose in “How not to discuss the Water Vapour feedback”, Climate Change, 2008).
    • Another example of the misunderstandings surrounding the role of water vapour is provided by Matt Ridley in an interview he gave to Russ Roberts at EconTalk.org in 2015. There are three factors alluded to here: (1) CO2; (2) water vapour (invisible vapour acting as a GHG); (3) water in condensed form, as clouds. But in this part of the discussion Ridley succeeds in completely losing sight of factor (2), and while recognising that (3) equates to something small (if not zero), he concludes that the overall warming should be 1°C. Well, no! The models use fundamental physics, not “amplifying factors” added as parameters. The effects emerge from this basic physics. Ignoring (2) does not make it go away. It is worrying when someone with as much influence as Matt Ridley (and whose biography of Francis Crick is testament to his qualities as a science writer in another field where he commands respect) seems unable to grasp something so basic and well established as this. Here is what he said, which so clearly reveals his misunderstanding of the subject:
      • “So, why do they say that their estimate of climate sensitivity, which is the amount of warming from a doubling, is 3 degrees? Not 1 degree? And the answer is because the models have an amplifying factor in there. They are saying that that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting. Now, that’s the bit that lukewarmers like me challenge. Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”
  6. Why a new equilibrium? Why does the Earth not simply go on warming? One of the reasons is that Stefan’s Law means that the total energy radiated from the Earth is proportional to the temperature (in Kelvin) to the power 4 (so two times the temperature would mean 16 times the radiated energy from the surface). Extending the analogy from Note 1, this is a bit like the following: the increased CO2 is equivalent to a restriction in the ability to emit infra-red into space, or in the case of the bucket, a smaller hole in the bucket. To re-establish the balance (because the flux ‘out’ must balance the flux ‘in’), the level of water in the bucket rises, increasing the pressure of the water at the base of the bucket, and thereby re-establishing the rate of water exiting from the bottom. In the case of the radiative effects of CO2, the equivalent effect is that the height in the atmosphere at which the flux balance occurs is raised, and this implies a higher temperature on the ground when one descends to the surface (using what is called the lapse rate). These effects therefore combine to ensure that, at a given concentration of CO2 in the atmosphere, the Earth finds a new equilibrium where the ‘energy in’ equals the ‘energy out’, and the surface temperature increases as the CO2 concentration increases.
  7. Regarding the ice age ‘lag’ question, the body of this essay provided an explanation. In Serendipity a financial analogy originating from Professor Alley is cited: If I take out a small loan at high interest, and get into a deeper and deeper hole, is it the interest rate or the initial loan that was the problem? Well, it was the interest rate. In the same way, the initial warming of a Milankovitch Cycle may be small, but the CO2 adds a lot of “interest” as does the consequent feedback from increased water vapour.
  8. From Mackay (see Further Reading), Note 8 to Section 1: “… the observed rise in CO2 concentration is nicely in line with what you’d expect, assuming most of the human emissions of carbon remained in the atmosphere.” A useful way to calculate things is to remember that 127 parts per million (ppm) of CO2 in the atmosphere equates to 1000 GtCO2. Now, since roughly 2000 GtCO2 are estimated to have been emitted from the start of the industrial revolution to now, and assuming for simplicity that roughly 50% of this figure has stayed in the atmosphere (see link below), then 1000 GtCO2 equates to 127 ppm added to the atmosphere on top of the pre-industrial 280 ppm, giving 407 ppm (roughly) in total, so in the right ballpark (we are at 400 ppm in 2015). It is also worth looking up the specific chapter within the IPCC AR5 dealing with “Carbon and other Biogeochemical Cycles”.
  9. The sawtooth reflects the seasonal cycles of the predominantly northern hemisphere deciduous trees and plants. Dead leaves decompose and release CO2, whereas growing leaves draw it down. So the overall trend is overlaid with this seasonal variation. The data is taken from the National Oceanic and Atmospheric Administration (NOAA) who administer the measurements that are presented here
  10. Here is a simple calculation. Currently we are responsible for nearly 40 billion tonnes (Gt) of CO2 per annum. Assuming 50% (Ref. 6) stays in the atmosphere in the short term, and given that each GtCO2 adds 0.127 parts per million (ppm) to the CO2 atmospheric concentration by volume, we get 0.127 * 50% * 40 = 2.5 ppm. This is about right: in Reference 6, 2001-2011 showed an average of 2 ppm per annum increase.
    • However, it appears the rate of increase in atmospheric CO2 is, if anything, itself increasing: in 2015 the NOAA reported a 3 ppm increase in CO2, while at the same time the International Energy Agency reported that global emissions were flat over the 2014-2015 period, even while the economy grew. This suggests that the balance between CO2 being absorbed in the oceans or other carbon sinks, and that remaining in the atmosphere, is changing, leaving more in the atmosphere. These are early days and more work is needed to establish if this is a trend.
    • We also know that once raised, CO2 levels in the atmosphere remain elevated for thousands of years – see “Carbon Forever”, Mason Inman, Nature Reports Climate Change, 20 November 2008 – and this has been further reinforced by a paper showing this in relation to the IPCC AR5 scenarios (see Reference 19).
    • Carbon Tracker provides important calculations done by the Potsdam Institute, derived from the IPCC AR5 data on ‘carbon budgets’ … “to reduce the chance of exceeding 2°C warming to 20%, the global carbon budget for 2000-2050 is 886 GtCO2. Minus emissions from the first decade of this century, this leaves a budget of 565 GtCO2 for the remaining 40 years to 2050”. The graphics in Reference 19 are eye-catching, but in my experience can confuse some people. Hence the inclusion of the figure shown in this document (Fossil Fuel ‘Red Line’), where I try to simplify the key points (you can be the judge as to whether I succeed). The first thing to realise is that the figures in Ref. 19 are emissions figures, not atmospheric increases – in other words, only roughly 50% of these figures remains in the atmosphere [a more accurate figure is 60%, but the purpose here is to provide an easy-to-remember, simple calculation – please refer to Mackay’s book (Further Reading) and the Carbon Tracker website for all the details of source data and calculations].
    • During the Paris COP meeting (COP21) in December 2015, 1.5°C was introduced as an aspirational target, while 2°C remains the principal goal.  This has been discussed in “Scientists discuss the 1.5C limit to global temperature rise”, CarbonBrief.org, 10th December 2015.
  11. The Equilibrium Climate Sensitivity (ECS) represents the increase in surface temperatures after a doubling of CO2 (and other GHG) concentrations, once an equilibrium is reached between the heat content of the atmosphere and the oceans, which lags the peak in atmospheric concentrations. The temperature reached is largely determined by the peak CO2 concentration and the fast feedback arising from increased water vapour in the atmosphere.
  12. The Earth System Sensitivity (ESS) tries to accommodate longer-term changes that could give rise to additional ‘forcings’, such as changes to the ice/snow coverage, release of CO2 and methane from warming of the land and ocean, etc. This involves more imponderables, and is over timescales beyond the IPCC timeframe for its scenarios, which run to the end of the 21st century. Some long-term consequences, such as increased sea levels beyond 2100, are likely to be locked in even if atmospheric warming stabilises (see Reference 19).
  13. In rounded numbers, what the Figure shows is approx. 2000 GtCO2 of emissions from pre-industrial time to around 2011, and at that point, nearly 3,000 GtCO2 of potential emissions if all the listed fossil fuel reserves were burned. The red line is crossed if more than 565 GtCO2 is burned in the 40 years from 2011 to ~2050. Any fossil fuels in addition to this are deemed “un-burnable” or “stranded assets”. If all the reserves were burned at a continuing rate of 40 GtCO2 per year, they would be exhausted by 2075, and we would have crossed the red line well before 2050. The 40 GtCO2 per year is clearly not a fixed number – the rate of burn will tend to increase if consumption rises in developing countries on the back of fossil fuels, but it will tend to decrease as zero-carbon sources of energy replace carbon-based ones.
  14. In 2013 the world emitted 35.3 GtCO2 equivalent (see Mackay, Further Reading), including man-made greenhouse gases in addition to CO2. In this essay, we have rounded the number to a convenient 36 GtCO2. Sometimes you see emissions expressed in terms of carbon, because reserves of unburned fossil fuels make more sense in terms of carbon, and this often creates confusion. When carbon is burned, it produces CO2. The molecular mass of CO2 is 12 + (2 * 16) = 44, compared to carbon (C) which is 12, so to convert an amount expressed as a mass of CO2 to one expressed as carbon you need to multiply by 12 and divide by 44 (and vice versa). So, 36 GtCO2 equates to 9.8 GtC (12*36/44 = 9.8). In the text we rounded 9.8 to 10, giving the 10 GtC per annum figure at 2013 rates.
  15. If we can make a Deuterium-Deuterium fusion reactor on Earth, rather than the Deuterium-Tritium one that is the current model for tokamak reactors such as ITER, then effectively infinite energy (in human society terms) is available, because of the huge reservoirs of energy possible from the Deuterium that could be harvested from the world’s oceans. The issue is that commercial realisation of the dream, even for the easier Deuterium-Tritium reaction, is still decades away, maybe 50, and so not relevant to the current debate on options for zero-carbon pathways, which require heavy cuts in carbon emissions by 2050. We do have a rapidly scalable ‘alternative fusion’ (solar energy).
  16. Back-of-envelope calculation on the feasibility of solar energy powering humanity (see the sketch after this list):
    • At the distance the Earth is from the Sun, it receives over 1,300 Watts per square metre (W/sq.m) averaged over the year, but we can approximate this as 1000 W/sq.m reaching the surface of the Earth, allowing for reflected light that does not warm the surface.
    • The Earth receives this power from the Sun over an area equivalent to its apparent disc, whereas the Earth’s surface area is 4 times this (4 pi R^2). Therefore the average power received is (1000/4 =) 250 W/sq.m over the Earth’s surface.
    • As photovoltaics and other solar technologies may be only 20% efficient, we can capture perhaps 50 W/sq.m (50 = 20% of 250), which equates to a usable power of 50 million Watts per square kilometre (W/sq.km).
    • Now we are assuming that by 2050 the human power requirement grows to 40 TW = 40,000 GW = 40 million million W, so we need an area of 40 million million W / 50 million W/sq.km = (4/5) million sq.km, which is approx. 1 million sq.km, i.e. a square with sides of just under 1000 km.
    • Or, more realistically, 10,000 squares distributed around the planet, each of 100 sq.km (i.e. 10 km sided squares), and each with some energy storage system able to smooth the energy between night and day, connected to a smart grid. Each would produce (40,000 GW / 10,000 =) 4 GW, so a 100 sq.km solar array is equivalent to, say, four medium 1 GW nuclear reactors or 12 typical 330 MW coal-fired power stations.
    • Note: In the text a quote was included from Frank Niele’s book (Reference 30) that mentions a solar intercept of 170,000 TeraWatt (TW = 1000 GW). This is not the practical maximum for solar power we could harness (and Niele is not saying that, but some people might misread it that way). Due to a number of factors (we would only want to use a small area of land for solar, the efficiency of PVs, etc.) the practical limit is very much less. BUT, even allowing for this, the amount of energy is so massive that we are still left with an enormous potential, that far exceeds the 40 TW requirement. We need (in the 2050 projection) ’only’ about 1 million square km (or 0.67% of the Earth’s land area). So, in practical terms, there is no ‘functional limit’ in respect of the energy that humanity needs.
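
The back-of-envelope arithmetic in Notes 1, 8, 10, 13, 14 and 16 can be checked in a few lines of code. The following is a minimal sketch in Python, using the same rounded figures quoted in the Notes above; it simply reproduces the sums, and is not, of course, any kind of climate model.

```python
# Back-of-envelope checks for several of the calculations in these Notes
# (a sketch using the rounded figures quoted above, not a climate model).

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W per sq.m per K^4
SOLAR_IN = 1000.0      # approx. solar power reaching the surface, W/sq.m (Note 16)

# Note 1 / Note 6: Stefan's Law estimate of the temperature with no heat-trapping
# gases. Averaged over the sphere, the input is 1/4 of the incident value
# (disc area versus surface area).
absorbed = SOLAR_IN / 4.0                    # ~250 W/sq.m
T_no_ghg = (absorbed / SIGMA) ** 0.25        # equilibrium temperature, Kelvin
print("No-greenhouse temperature: %.0f K = %.0f C" % (T_no_ghg, T_no_ghg - 273.15))

# Note 8: cumulative emissions -> atmospheric concentration.
PPM_PER_GTCO2 = 127.0 / 1000.0    # 127 ppm corresponds to 1000 GtCO2
cumulative_emissions = 2000.0     # GtCO2 since the industrial revolution (rounded)
airborne_fraction = 0.5           # rough fraction staying in the atmosphere
ppm_added = cumulative_emissions * airborne_fraction * PPM_PER_GTCO2
print("Concentration: %.0f ppm" % (280 + ppm_added))    # ~407 ppm vs ~400 observed

# Note 10: annual increment at ~40 GtCO2 per year.
print("Annual rise: %.1f ppm" % (40 * airborne_fraction * PPM_PER_GTCO2))  # ~2.5 ppm

# Note 13: the 'red line' budget of 565 GtCO2, burned at 40 GtCO2 per year from 2011.
print("Budget exhausted around: %d" % (2011 + round(565 / 40)))  # well before 2050

# Note 14: converting between GtCO2 and GtC (molecular masses 44 and 12).
print("36 GtCO2 = %.1f GtC" % (36 * 12 / 44))    # ~9.8 GtC

# Note 16: land area needed for ~40 TW from solar at 20% efficiency.
usable_w_per_sqkm = 0.2 * absorbed * 1e6      # 50 million W per sq.km
area_sqkm = 40e12 / usable_w_per_sqkm
print("Solar area needed: %.1f million sq.km" % (area_sqkm / 1e6))  # ~0.8 million
```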


Terminology

Most spheres of enquiry create their own language and jargon, and the science & policy surrounding global warming has its fair share. In the essay I tried as far as possible to avoid using terminology that is not in common usage. As an illustration, I include a few such terms below, with my common-usage alternatives:

  • Albedo – is the technical term for the fraction of solar energy reflected into space. More snow and ice means a higher ‘albedo’. In the text I simply refer to the ‘Earth reflecting more light’ to convey this.
  • Anthropogenic – often used in context of ‘anthropogenic global warming’. I have used the more prosaic ‘man-made global warming’ instead.
  • Climate – this word is unavoidable! It is crucial to understand the difference between Climate and Weather (NCAR provide a short and useful description of the distinction).
    • Because ‘climate’ deals with averaged conditions over extended periods, rather than the precise ‘weather’ at a specific place and time, it is possible to make long-term projections of the climate in a way that is impossible for weather. The climate is then characterised by ‘emergent properties’ of the model ‘runs’, such as averaged values for temperature, precipitation, etc., on a global (and also regional) level over a specified time period (e.g. up to 2100).
  • CO2e or CO2 equivalent is used in a few places in the essay. It is used by the IPCC and others as a means of stating a single figure, say, for ‘man-made greenhouse gas emissions’. It aims to include contributions from all greenhouse gases: CO2, methane, etc. However, it can cause confusion, because of the different ways we can calculate the impact of different gases over different periods. Each gas has a different residency time in the atmosphere, and a different inherent strength of infra-red absorption. This issue has been discussed. The basic point to note is that “CO2 equivalent” aims to include the contributions not only of CO2 from burning fossil fuels, but also of changes in land-use, and all human activities. Also remember that CO2 remains the principal actor, and reducing our emissions is what we can control.
  • Feedback – is a technical term, which many people will have experienced when rock musicians distort their music by holding a microphone in front of a speaker. The term ‘feedback’ is now used for any system where the output of the system can ‘feed back’ and influence the subsequent state of the system. There are two types of feedback in general: a positive feedback happens when a signal is reinforced and grows in strength; a negative feedback happens when a signal is dampened and reduces in strength. “Positive” therefore has nothing to do with “good” or “desirable”, but is merely a mathematical adjective. In the essay, we discussed examples of both these types of feedback in relation to climate change [see Section 2].
  • Forcing – is a technical term used to denote some effect that adds additional energy to the atmospheric / planetary system, and is measured in Watts per square metre. Extra CO2, solar, aerosols, soot, etc. are all types of ‘forcings’ (which can be positive and negative), but the essay uses colloquial language like ‘influence’ on warming, or ‘contribution’ to energy.
  • KiloWatt Hour – The Watt measures the power of an electricity source, or the rate of its consumption. It is quite small in the context of domestic devices, so we tend to think in terms of one thousand Watts, which is a KiloWatt. A 1 KiloWatt electric fire is using electricity at the rate of 1 KiloWatt. But this is problematic when trying to articulate our usage of electricity, and it is better to think in terms of the total consumption over a chosen period, like 1 hour. So after one hour that electric fire has consumed 1 KiloWatt Hour. Because we are switching lights on and off, using a toaster for a few minutes, and so on, we can then think about how many KiloWatt Hours (or KWh in brief) we consume in total in one day, or one year. We can even express other forms of energy (e.g. the energy used by driving our cars) using the same units. Prof. Mackay (see his book in Further Reading) uses KWh liberally because it is easy to work with in this way. In 2008, the average UK citizen was consuming 125 KWh per day. [Note: one MWh = 1,000 KWh, and one GWh = 1,000 MWh (shorthands: K = 1,000, M = 1,000,000, G = 1,000,000,000).] A short worked example of these units follows after this list.
  • Parts Per Million (ppm) – is a useful way to state the atmospheric concentration of CO2. The current concentration is 400 ppm. Expressed as a percentage this is (400/1,000,000) * 100% = 0.04%. There are 6 x 10^23 molecules of a gas in 22.4 litres at standard temperature and pressure, or 30,000 million billion in a cubic centimetre. So at 0.04%, that is still 12 million billion molecules (of CO2) per cubic centimetre, with an average separation between two nearest-neighbour CO2 molecules of less than 5 micrometres at this density. Stated like that, CO2 does not seem quite so sparse as the 0.04% figure might suggest.
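
To make the arithmetic in the last two bullets concrete, here is a minimal sketch in Python. The household appliance figures are invented purely for illustration; the ppm calculation uses the standard ideal-gas figures quoted above.

```python
# A short sketch of the unit arithmetic in the last two bullets
# (illustrative figures only).

# KiloWatt Hours: a 1 kW electric fire for 3 hours, ten 10 W LED bulbs for
# 5 hours, and a 2 kW kettle for 10 minutes (hypothetical household figures).
usage_kwh = 1.0 * 3 + (10 * 0.010) * 5 + 2.0 * (10 / 60)
print("Daily electricity use: %.2f KWh" % usage_kwh)   # ~3.8 KWh
print("1 GWh = %d KWh" % (1000 * 1000))                # G is a million times K

# Parts per million: how crowded is CO2 at 400 ppm?
AVOGADRO = 6.022e23
molecules_per_cc = AVOGADRO / 22400.0     # ideal gas: 22.4 litres per mole at STP
co2_per_cc = molecules_per_cc * 400e-6
spacing_cm = (1.0 / co2_per_cc) ** (1.0 / 3.0)   # typical nearest-neighbour distance
print("CO2 molecules per cubic cm: %.1e" % co2_per_cc)               # ~1e16
print("Typical separation: %.2f micrometres" % (spacing_cm * 1e4))   # well under 5
```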

 

The End

(c) Richard W. Erskine, 2015, 2016 – EssaysConcerning.com (Published July 2015, Revised March 2016)

12 Comments

Filed under Climate Science, Essay

Global Group-Think?

Darwin’s discovery of evolution by natural selection (a discovery made independently by Wallace) was the result of many years of meticulous observations of the natural world. In some ways it was even more brilliant because the discovery was made in the absence of any known basis for the variations in species (which are ultimately required by the theory).

It took a century to pass, with the discovery of DNA’s structure and processes, for scientists to understand how the shuffling and transposition of genes provided the underlying mechanism for the variations on which natural selection depends (along with the differentiation of environments that provides the selective pressures at the level of species).

While there continues to be a rich vein of discoveries in the exploration of this interplay between the genotype and phenotype, the underlying truth of Darwinian natural selection remains inviolate.

Similarly, through the statistical analysis of lung cancer rates, the link between smoking and lung cancer was clearly demonstrated in 1962 (the Royal College of Physicians’ report ‘Smoking and Health’), long before the underlying causative processes were understood (and these underlying processes could be argued to be very much still ‘work in progress’). No one seriously doubts the link, even though the tobacco industry tried for many years to claim that correlation does not prove causation.

In the case of man-made global warming (or ‘anthropogenic global warming’, AGW), the history of its discovery is in complete contrast to the above examples. With AGW we knew the essential underlying mechanism before, not after, the macro-scale phenomenon was even recognised as an issue!

By 1861, Tyndall’s experiments had demonstrated unequivocally that carbon dioxide was able to trap heat, and in 1896 Arrhenius had made the first calculations (laboriously by hand) of how variations in the concentration of carbon dioxide in the atmosphere would impact average global temperature.

Yet despite this, it took till 1938 before Callendar first published data to show that far from being a theoretical possibility, man’s emissions of carbon dioxide were indeed having a measurable influence on global average temperature.

Few scientists took this up as an issue at this time, or even as a research priority. Maybe in 1938 the world had some higher priorities to address, with the world already deep into the ‘dark valley’ and on the eve of World War II, but it is certainly true that there was not much interest in the topic even in academic circles.

Of course, over the years some did explore different aspects related to climate and related fields of enquiry, such as the study of glaciers, diverse isotopic methods, modelling weather and climate, and many more, but these were distinct fields which did not really converse with each other. It was really only in the 1970s that various seminal conferences took place that tried to piece together these disparate strands of evidence. This history is explored in meticulous detail in Weart’s ‘The Discovery of Global Warming’.

Perhaps the most striking was the use of isotopes of oxygen measured in ice cores, acting as a proxy for temperature (because of the differential evaporation rates of water containing different isotopes), which correlated remarkably well with CO2 concentrations. As Weart notes:

“In the 1960s, painstaking studies had shown that subtle shifts in our planet’s orbit around the Sun (called “Milankovitch cycles”) matched the timing of ice ages with startling precision. The amount of sunlight that fell in a given latitude and season varied predictably over millennia. …
 
The new ice cores suggested that a powerful feedback amplified the changes in sunlight.

The crucial fact was that a slight warming would cause the level of greenhouse gases to rise slightly. For one thing, warmer oceans would evaporate out more gas. For another, as the vast Arctic tundras warmed up, the bogs would emit more CO2 (and another greenhouse gas, methane, also measured in the ice with a lag behind temperature). The greenhouse effect of these gases would raise the temperature a little more, which would cause more emission of gases, which would … and so forth, hauling the planet step by step into a warm period.

Many thousands of years later, the process would reverse when the sunlight falling in key latitudes weakened. Bogs and oceans would absorb greenhouse gases, ice would build up, and the planet would slide back into an ice age. This finally explained how tiny shifts in the Earth’s orbit could set the timing of the enormous swings of glacial cycles.”

These ice cores and associated methods were improved over several decades, with the Vostok cores, reaching back 400,000 years, finally convincing many in the scientific community.

Only in the 1980s did AGW finally gain recognition as a serious issue, and this led eventually to the formation of the IPCC in 1988, which is the internationally sponsored vehicle for assembling, reviewing and reporting on the primary published research, including interlocking streams of evidence and analysis.

There are some who argue against the much vaunted consensus on AGW (the 97% of climate scientists who agree that AGW is demonstrated).

I was at a meeting recently on science communication where someone from the audience objected to this 97% consensus saying “can we trust a science where so many are in agreement?” … he was pointing out that often in science there is a hotbed of debate and disagreement. Surely this 97% is evidence of some kind of group-think?

Well, of course, as Weart documents, at almost every step in the 200-odd years of science that has tried to explain the ice ages, and latterly global warming, there has been intense scientific dialogue that has been a million miles from group-think. The role of Milankovitch cycles, mentioned in the above quote, is just one example. The dialogue continues, for example, in relation to the so-called ‘hiatus’ and many other topics.

But these same combative scientists do not dispute the reality of AGW, only the details, particularly those relating to regional impacts. These will of course be the subject of intense research that continues as we humans seek to mitigate where we can, and adapt where we must.

Let’s consider some possible examples of ‘group think’ in science:

  • Ask 1000 biologists if they think Darwinian natural selection is true and I suggest over 97% would concur.
  • Ask 1000 clinicians if smoking will greatly increase the risk of lung cancer and I suggest over 97% would concur.
  • Ask 1000 physicists if they think the 2nd Law of Thermodynamics is both true, and will survive any revolution in science (even the changes that dark matter and dark energy no doubt presage), and I suggest that over 97% would concur.

Are these examples of ‘group-think’?

I would say, absolutely not! They represent a consensus informed by many decades of cumulative scientific endeavour that has stood the test of time and battled through many challenges and tests.

As we see from Weart’s history, the acceptance of AGW is not something the scientific community has jumped to in some rash rush to agree; that’s not how science works. Rather, it has been a methodical, multi-disciplinary emergence of an understanding over many decades, which only quite recently (the 1980s) can be said to have reached a consensus.

The reality of AGW has survived many challenges and tests (mostly from within the scientific community, best able to frame challenging tests).

I think it is therefore a rather lazy and ill-informed viewpoint to characterise the consensus on AGW among scientists (and specifically climate scientists) as evidence of ‘group-think’.

Perhaps those determined to disagree with AGW should ask themselves whether in fact they are the real victims of ‘group-think’: a curmudgeonly kind of contrarian group-think from an increasingly marginalised section of the media.

Leave a comment

Filed under Climate Science, Debate, Essay, Science

In Praise of Computer Models

As you walk or commute to work, does it ever occur to you how much thought and effort goes into keeping the lights on?

I remember many years ago doing some consulting for a major utilities company, and on one visit being taken to a room full of PhD-level mathematicians. “What are they doing?” I asked. “Refining models for calculating the price of electricity!” The models had to calculate the price on a half-hourly basis for the market. The modellers had to worry about supply, including how electricity is distributed and how fast incremental supply can be brought on stream; and, on the demand side, the cycles of demand as well as unusual events like 10 million electric kettles being switched on at half time during a major football game.

It should be pretty obvious why modelling of the electricity supply and demand during a 24 hour cycle is crucial to the National Grid, generators, distributors and consumers. If we misjudge the response of the system, then that could mean ‘brown outs’ or even cuts.

In December 2012: “… the US Department of Homeland Security and Science held a two-day workshop to explore whether current electric power grid modelling and simulation capabilities are sufficient to meet known and emerging challenges.” 

As explained in the same article:

“New modelling approaches could span diverse applications (operations, planning, training, and policymaking) and concerns (security, reliability, economics, resilience, and environmental impact) on a wider set of spatial and temporal scales than are now available.

A national power grid simulation capability would aim to support ongoing industry initiatives and support policy and planning decisions, national security issues and exercises, and international issues related to, for instance, supply chains, interconnectivity, and trade.”

So we see that we move rapidly from something fairly complex (calculating the price of electricity across a grid), to an integrated tool to deal with a multitude of operational and strategic demands and threats. The stakeholders’ needs have expanded, and hence so have the demands on the modellers. “What if this, what if that?”.

Behind the scenes, unseen to the vast majority of people, are expert modellers, backed up by multidisciplinary expertise, using a range of mathematical and computing techniques to support the operational and strategic management of our electricity supply.

But this is just one of a large number of human and natural systems that call out for modelling. Naturally, this started with the physical sciences but has moved into a wide range of disciplines and applications.

The mathematics applied to the world derives from the calculus of the 17th Century but was restricted to those problems that were solvable analytically, using pencil and paper. It required brilliant minds like Lagrange and Euler to develop this mathematics into a powerful armoury used for both fundamental science and applied engineering. Differential equations were the lingua franca of applied mathematics.

However, it is not an exaggeration to say that a vast range of problems were totally intractable using solely analytical methods or even hand-calculated numerical methods.

And even for some relatively ‘simple’ problems, like the motions of the planets, the ‘three-body problem’ meant that closed mathematical expressions to calculate the positions of the planets at any point in time were not possible. We have to calculate the positions numerically, using an iterative method to find a solution. The discovery of Neptune was an example of how to do this, but it required laborious numerical calculations.

Move from Neptune to landing a man on the moon, or to Rosetta’s Philae lander on the surface of the comet 67P/Churyumov–Gerasimenko, and pencil and paper are no longer practical; we need a computer. Move from this to modelling a whole galaxy of stars, a collision of galaxies, or even the evolution of the early universe, and we need a large computer (for example).

Of course some people had dreamed of doing the necessary numerical calculations long before the digital computer was available. In 1922 Lewis Richardson imagined 64,000 people each with a mechanical calculator in a stadium executing numerical calculations to predict the weather.

Only with the advent of the modern digital computer was this dream to be realised. And of course, the exponential growth in computing power has meant that each 18-month doubling has created new opportunities to broaden or deepen model capabilities.

John von Neumann, a key figure in the development of the digital computer, was interested in two applications – modelling the processes involved in the explosion of a thermonuclear device and modelling the weather.

The innovation in the early computers was driven by military funding, and much of the pioneering work on computational physics came out of places like the Lawrence Livermore Laboratory.

The Monte Carlo method, a ubiquitous tool in many different models and applications, was invented by Stanislaw Ulam (a mathematician who was also co-originator of the Teller-Ulam configuration for the H-bomb). This is one of many innovations used in computer models.
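
To give a flavour of what the Monte Carlo method involves, here is a toy sketch in Python: estimating the value of pi by sampling random points. It is only an illustration of the basic idea of replacing an exact calculation with many random trials; real applications use the same principle in far more sophisticated forms.

```python
import random

# Toy Monte Carlo: estimate pi by throwing random points at a unit square
# and counting how many land inside the quarter-circle of radius 1.
def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

for n in (1_000, 100_000, 1_000_000):
    print(n, estimate_pi(n))   # the estimate tightens as the number of trials grows
```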

The same mathematics and physics used for classical analysis has been reformulated in a form amenable to computation, so that the differential calculus is rendered as the difference calculus. The innovations and discoveries made then and since are as much a part of the science and engineering as the fundamental laws on which they depend. The accumulated knowledge and methods have served each generation.
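
As a minimal illustration of what rendering the differential calculus as the difference calculus means in practice, here is a sketch in Python: Newton’s law of cooling, dT/dt = -k(T - T_env), stepped forward in small time increments instead of being solved analytically. The constants are invented for illustration only.

```python
import math

# Newton's law of cooling: dT/dt = -k * (T - T_env).
# Analytically, T(t) = T_env + (T0 - T_env) * exp(-k t); numerically we replace
# the derivative with a finite difference and step forward in time.
k, T_env, T0 = 0.1, 20.0, 90.0      # illustrative constants (per minute, deg C)
dt, t_end = 0.01, 30.0

T, t = T0, 0.0
while t < t_end:
    T += dt * (-k * (T - T_env))    # Euler step: T(t+dt) ~ T(t) + dt * dT/dt
    t += dt

exact = T_env + (T0 - T_env) * math.exp(-k * t_end)
print("numerical: %.2f C, analytical: %.2f C" % (T, exact))
```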

Some would argue that, far from merely making complex problems tractable in some passive role, computer models provide a qualitatively different approach to that possible prior to digital computing. Because computers act like experimental devices from which insights can be gleaned, they may actually inspire new approaches to the fundamental science, in a proactive manner, helping to reveal emergent patterns and behaviours in systems that are not obvious from the basic physics. This is not a new idea …

“Given this new mathematical medium wherein we may solve mathematical propositions which we could not resolve before, more complete physical theories may possibly be developed. The imagination of the physicist can work in a considerably broader framework to provide new and perhaps more valuable physical formulations.”  David Potter, “Computational Physics”, Wiley, 1973, page 3.

For the most part, setting aside colliding galaxies and other ‘pure science’ problems, the types of models I am concerned with here are ones that can ultimately impact human society. These are not confined to von Neumann’s preferred physical models.

An example from the world of genomics may help to illustrate just how broad the application of models is in today’s digital world. In looking at the relationship between adaptations in the genotype (e.g. mutations) and the phenotype (e.g. metabolic processes), the complexities are enormous, but once again computer models provide a way of exploring the possibilities and patterns that teach us something and help in directing new applications and research. A phrase used by one of the pioneers in this field, Andreas Wagner, is revealing …

“Computers are the microscopes of the 21st Century” 

BBC Radio 4, ‘Start The Week’, 1st December 2014.

For many of the complex real-world problems it is simply not practical, ethical or even possible to do controlled experiments, whether it is our electricity grid, the spread of disease, or the climate. We need to be able to conduct multiple ‘runs’ of a model to explore a range of things: its sensitivity to initial conditions; how good the model is at predicting macroscopic emergent properties (e.g. the Earth’s averaged global temperature); the response of the system to changing external parameters (e.g. the cumulative level of CO2 in the atmosphere over time); etc.
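
As a toy illustration of sensitivity to initial conditions, and of why ensembles of runs rather than single forecasts are used to characterise such systems, here is a sketch in Python using the logistic map, a classic chaotic system. It is not a climate model; the starting values and perturbation are purely illustrative.

```python
# Two runs of the logistic map x -> r*x*(1-x), a standard toy example of
# sensitivity to initial conditions (chaos). Not a climate model: just an
# illustration of why ensembles of runs are needed.
def run(x0: float, r: float = 4.0, steps: int = 50) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = run(0.2)
b = run(0.2 + 1e-10)   # a perturbation in the 10th decimal place
for step in (0, 10, 20, 30, 40, 50):
    print(step, round(a[step], 6), round(b[step], 6))   # the two runs soon diverge
```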

Models are thereby not merely a ‘nice to have’, but an absolute necessity if we are to get a handle on these questions, to be able to understand these complex systems better and to explore a range of scenarios. This in turn is needed if we as a society are to be able to manage risks and explore options.

Of course, no model is ever a perfect representation of reality. I could repeat George Box’s famous aphorism that “All models are wrong but some are useful”, although coming as this did from the perspective of a statistician, and the application of simple models, this may not be so useful when thinking about modern computer models of complex systems. May I suggest a different (but much less catchy) phrase:

“Models are often useful, sometimes indispensable and always work in progress”

One of the earliest people to use computers for modelling was the American mathematician Cecil Leith, who during the war worked on models of thermonuclear devices and later worked on models for the weather and climate. In a wide-ranging 1997 interview covering his early work, he responded to a question about those ‘skeptics’ who were critical of the climate models:

“… my concern with these people is that they have not been helpful to us by saying what part of the model physics they think may be in error and why it should be changed. They just say, “We don’t believe it.” But that’s not very constructive. And so one has the feeling they don’t believe it for other reasons of more political nature rather than scientific.” 

When the early modellers started to confront difficult issues such as turbulence, did they throw their hands up and say “oh, it’s too hard, let’s give up”? No; with the help of new ideas and methods, such as those originating from the Russian mathematicians Kolmogorov and Obukhov, progress was made.

The cyclical nature of these improvements comes from a combination of improvements in methods, new insights, improved observational data (including filling in gaps) and raw computing power.

A Model of Models might look something like this (taken from my whiteboard):

[Image: ‘A Model of Models’ whiteboard sketch]

In this modern complex world we inhabit, models are not a nice to have, but an absolute necessity if we are to embrace complexity, gain insights into these systems, and anticipate and respond to scenarios for the future.

We are not able to control many of the variables (and sometimes only a very few), but we can see what the response is to changes in the variables we do have control over (e.g. use of storage arrays to facilitate transition to greater use of renewable energy). This in turn is needed if we as a society are to be able to manage risks and explore options, for both mitigation and adaptation in the case of global warming. The options we take need to be through an inclusive dialogue, and for that we need the best information available to inform the conversation.

Some, like US Presidential candidate Ted Cruz, would prefer to shoot the messenger and shut down the conversation when they do not like what the science (including the models) is telling them (e.g. by closing down the hugely valuable climate research based at NASA).

While many will rightly argue that modelling is not the whole story, or even the main story, because the impacts of increased man-made CO2 are already evident in a wide range of observed changes (e.g. the large number of retreating glaciers), one is bound to ask “what is the alternative to doing these models?” in all the diverse fields mentioned. Are we …

  • To wait for a new disease outbreak without any tools to understand strategies and options for disease control and to know in advance the best deployment of resources, and the likely change in the spread of disease when a virus mutates to an air-borne mode of infection?
  • To wait for a brown-out or worse because we do not understand the dynamical response of our complex grid of supply to large and fast changes in demand, or the complexities of an increasingly fragmented supply-side?
  • To wait for the impacts of climate change and make no assessment of when and how much to invest in new defences such as a new Thames Barrier for London, or do nothing to advise policy makers on the options for mitigation to reduce the impact of climate change?

Surely not.

Given the tools we have to hand, the knowledge and methods we have, accumulated over decades, it would be grossly irresponsible for us as a society not to undertake modelling of these kinds; and not be put off by the technical challenges faced in doing so; and certainly not be put off by those naysayers who don’t ‘believe’ but never contribute positively to the endeavours.

We would live in a more uncertain world, prone to many more surprises, if we failed to model the human and natural systems on which we rely and our future depends. We would also fail to effectively exploit new possibilities if we were unable to explore these in advance (e.g. the positive outcomes possible from a transition to a decarbonised world).

Let’s be in praise of computer models, and be thankful that some at least – largely unseen and mostly unthanked – are providing the tools to help make sense of the future.

Richard Erskine, 24th May 2015

4 Comments

Filed under Computer Science, Essay, Models, Science

Will you act on climate change, Prime Minister Cameron?

Margaret Thatcher was no tree hugger, but her respect for science heralded a genuine quest to tackle global warming in 1988, the same year that the IPCC was founded. Can you face down your antediluvian friends to show similar foresight, in the face of procrastination and delay?

Maggie cared about the environment

In a speech to the Royal Society in September 1988 Margaret Thatcher said:

“For generations, we have assumed that the efforts of mankind would leave the fundamental equilibrium of the world’s systems and atmosphere stable. But it is possible that with all these enormous changes (population, agricultural, use of fossil fuels) concentrated into such a short period of time, we have unwittingly begun a massive experiment with the system of this planet itself.

Recently three changes in atmospheric chemistry have become familiar subjects of concern. The first is the increase in the greenhouse gases—carbon dioxide, methane, and chlorofluorocarbons—which has led some to fear that we are creating a global heat trap which could lead to climatic instability. …”

These words were spoken by a Prime Minister famous for being a champion of free markets, limited regulation and liberal economics. So was she going green and abandoning her principles? Not at all. But she understood the science and the policy implications, and in the same speech talked about the discovery by the British Antarctic Survey of the hole in the ozone layer, and said it was “common sense to support a worldwide agreement in Montreal last year” (1987).

There is no contradiction between regulation on issues that impact the environment, across international boundaries, and support for free trade and open markets. It is obvious that global companies, working as they are to the beat of quarterly results, and national governments, with election cycles of a few years, are ill-equipped to address climate change over many decades or to deliver changes that require a global consensus. The market is also demonstrably incapable of doing this. It may be a bitter pill for believers in the ultimate wisdom of markets, but they have fundamental limitations.

If everyone is competing to make fridges that contain CFCs, they can equally well compete fairly on a different level playing field where a benign alternative is used, so long as this is backed up by international agreements such as the Montreal Protocol (and it goes without saying that the transition must be managed well, with verifiable targets along the way).

Today, too often, there is an assumption that there is an unbridgeable divide between environmentalists and free marketeers, between Conservatives and Greens. The debate has become tribal, and at times poisonous. Even those that try to, in US terms, ‘reach across the aisle’, are likely to have their hand bitten off (if not by the ‘other’ side, then by their own!).

Polarisation and tribalism may seem to be a truism in the context of the election we have just had in the UK. But voters are multi-dimensional, so while they are forced to tick one box in our ‘first past the post’ system, it does not mean they do not share values with others who vote differently. Far from it.

Few environmentally minded people fit the stereotype of a tree-hugging, anti-capitalist that some in the media like to conjure up; and few in the business world believe we can trash the environment without serious repercussions.

In the large middle ground there are many shared values that pertain to a wide range of issues we must all confront when we think about global warming. There are so many questions that anyone, from left, right or centre might ask themselves …

  • Would you support greater investment in flood defences in Somerset or along the Thames, in the face of increasingly frequent extreme weather events (which were predicted and have now manifested themselves)?
  • What about the billions needed for a new Thames Barrier that will probably be needed, sooner or later … would you support such a project?
  • If the exodus of migrants from north Africa today is seen as a crisis, then how should we respond practically and morally, in the face of a 2°C warmer world that will decimate agriculture in Africa, and could turn the current trickle of immigrants into a flood?
  • If we care about our personal carbon footprint, will we stop criticising China, when we find that, for example, our iPhone and M&S jacket are manufactured in China (it is our carbon footprint, not theirs!)? Will we favour manufacturers who radically reduce their reliance on fossil-fuel generated electricity, thereby reducing our footprint?

Isn’t it time, Prime Minister Cameron, after promising to deliver the greenest government ever at the previous election, to demonstrate unequivocally that you respect the science, as Thatcher did?

Given that the world has procrastinated for so long, action is now an imperative, and every year we fail to act substantively means that the pain of transition will increase exponentially. So will this government demonstrate real commitment to COP21 (the UNFCCC 2015 Paris Climate Conference), and back this up with meaningful action and support for an internationally binding carbon price? Such an agreement is now essential to our ability to mitigate global warming.

Prime Minister, you should be wary about hiding behind those right wing media attack dogs who would prefer that you ridicule, marginalise or ignore the issue of global warming, so that you can simply go through the motions with that furrowed brow and those soothing words of concern.

You should be wary because there are universal values held by many of the people in this country – Conservative, Liberal, Labour and Green – and the electorate do not fit neatly within the narrow tribal stereotypes when it comes to the environment. These values will increasingly be challenged and tested by the impacts and responses – both home and abroad – of global warming.

When the ‘Tory Party at Prayer’ (the Church of England) starts to divest itself of at least some fossil fuel investments and the Governor of The Bank of England is warning of a potential for “stranded assets” (due to the fact that the majority of reserves of coal, oil and gas are “unburnable carbon”, if we are to avoid dangerous global warming), you should think strategically about the threat to those shared values. Or even, to make this personal, how would you prepare to answer the question “Grandad, what did you do to address global warming?” (“sit on my hands” is not going to cut it).

Even within the relatively narrow parameters of protecting pensions, a major Tory theme that resonates with our ageing population, the Conservatives would live to bitterly regret not taking these questions seriously if they succeed in trashing those voters’ futures on the altar of vested interests in the fossil fuel industry.

Margaret Thatcher respected the Royal Society

I remember talking with a Professor from London University in the 1980s about how to approach difficult technical topics (e.g. the effects of nuclear weapons) in talks to lay people, concerned at that time about the medium range nuclear missile stand-off between ‘the West’ and Russia. “Should I avoid the basic science altogether?” I asked, and he said “No. Assume you have an intelligent audience, but make it accessible. I find that people feel empowered if they understand enough of the basics to be able to navigate complex subjects.”

In the ‘debate’ about global warming this is very challenging, because there is a lot of science to grapple with across many disciplines in understanding climate change and how we came to know that humans are causing the planet to warm dangerously (one could do worse than read Weart’s “The Discovery of Global Warming”, which is available in book form but also free on the web, and which unravels the 200-year-old detective story that has been the scientific journey towards today’s clear consensus).

It is often only by understanding a little about the science, that one can then engage in a discussion regarding values and then in turn, try to translate the conclusions into effective policies and action.

Take for example the recent case that was reported dramatically as “Three parent babies”. This created an image of some Frankenstein creation, and of course made great headlines. Despite the hyperbole, the reality was more prosaic. The third person would be a woman providing only a tiny amount of mitochondrial DNA (for the part of a cell that is analogous to its battery pack), to address malfunctioning mitochondria and the consequent fatal diseases. It was reported well in some cases:

“The third-party DNA contained in the donated mitochondria comprises much less than 1% of the total genetic contribution and does not transmit any of the traits that confer the usual family resemblances and distinctive personal features in which both parents and children are interested.”

With knowledge like this, it becomes possible to engage in a constructive debate on the values we have and how to move to policy. We can discuss the benefits, risks, implications for future generations and ethical dimensions on a shared understanding of the science.

While a scientist will have values that may determine what kind of science they study – such as with genetic diseases for example – the methods & approach they use to do their science must be robust and independent of those values.

In science, researchers publish peer-reviewed papers, and even then the results are checked by others who aim to reproduce them, on an international basis, which helps to ensure that cultural bias does not somehow distort the objectivity of the science.

Once we know the results of the science (peer reviewed and published), we can then overlay our values to determine what we think are the implications of the science. Only then is there a basis on which to advocate for specific policies and actions.

People will argue that scientists should never advocate in favour of policies and should stick to the science. But what if a scientist discovers a cure for a disease and publishes their work in a reputable scientific journal: what do they do next? Wait for a politician to read the paper and act? They might wait forever! And what if a scientist has worked on the atomic bomb, to stop Hitler and perhaps Japan, but then finds that after the war the military are keen to build a vast arsenal? They may feel they have been duped and may wish to stand up and be counted.

Surely, so long as the scientist is clear that they have changed hats by moving from the lab to the arena of public debate, and they are also clear about the values they hold that might influence their advocacy, then why can’t they be involved in the debate?

Scientists have frequently done this, whether over the threat of nuclear proliferation and accidental war, or the need for vaccination to prevent diseases. When science has policy implications (which is surprisingly often), it is often the scientists who are needed, at the very least, to ensure that the science is not lost in translation when it enters the public domain of the media and politics.

After all, we have seen in the whole MMR debacle how badly the media (journalists and commentators) often mis-translates science into stories, with serious impacts on vaccination rates, herd immunity and consequent increasing rates of measles. The consequences are still being felt as far afield as California. Misreporting of science can be a life and death issue.

Whilst many have laid the blame solely at the door of the now discredited Andrew Wakefield, and he has already taken his punishment, Ben Goldacre takes a different view:

“It is madness to imagine that one single man can create a 10-year scare story. It is also dangerous to imply – even in passing – that academics should be policed not to speak their minds, no matter how poorly evidenced their claims. Individuals like Wakefield must be free to have bad ideas. The media created the MMR hoax, and they maintained it diligently for 10 years. Their failure to recognise that fact demonstrates that they have learned nothing, and until they do, journalists and editors will continue to perpetrate the very same crimes, repeatedly, with increasingly grave consequences.”

So, where are your trusted sources? It surely helps in any field to have an interpreter, who can help you navigate the science, and the disagreements, but how do you choose your interpreter? Do you ‘trust’ Ben Goldacre because he appears to have knowledge and a flair for communicating it? Do you also like the fact he is combative and regularly beats up the Daily Mail on their often bizarre medical reporting? That is not a good enough reason.

It certainly helps that an interpreter like Goldacre is also very good at referencing his sources, so you can check the interpreter. But no individual is infallible. That is why we have bodies that specialize in areas of research, whether fundamental or applied, that provide checks and balances, often with a duty to inform the public. That is why President Abraham Lincoln set up the National Academy of Sciences: to provide the advice that he recognised was needed in an increasingly complex, technological world. We attack or set aside these bodies at our peril.

If we want information about the new genetic treatment of mitochondrial diseases, it is therefore obvious where to start: with the Human Fertilisation and Embryology Authority (HFEA), set up to license and monitor UK fertility clinics and all research involving human embryos, and to provide impartial and authoritative information to the public.

In a world where opinions are plentiful and advertising revenue depends on ‘hits’, is it any wonder that ‘being controversial’, like a shock jock in print or on the web, is a valued commodity in the modern world of media? For such people, scientific consensus is an opportunity to target an audience. An opportunity for hits.

James Delingpole is an example of someone who spends a lot of time writing angry attacks against individuals and groups with whom he disagrees, and is rewarded with many hits.

That may be a great business model for The Telegraph and other media outlets, but it is hardly edifying and certainly not a new model for scientific enlightenment.

He says he is an “interpreter of interpretations”. This is effectively claiming some kind of absolute privilege to select the right interpreters and their interpretations. He says he won’t read the original papers (an oddly extreme condition for someone wanting to get informed), yet he fails even to respect those bodies covering oceanography, climatology and so on that could help him gain an education in something he claims to be interested in. Instead, he issues blanket dismissals of individuals and organisations, essentially dissing thousands of scientists who have devoted their lives to training and research in their specialised subjects.

In so doing, he rejects a scientific process that has served us pretty well for over 350 years. Would we have made the advances in medicine, technology and our understanding of the universe by his method? Of course not. When we make ourselves the ultimate authority, without any knowledge or skills in the topics considered, we end up with quack science: pub science.

If you need authoritative information on climate change / global warming, you do not need to rely on journalists or commentators, whether they come from The Guardian, The Telegraph or Daily Mail, because there is an obvious way forward. After sating yourselves with the tribal rhetoric on offer, the real education can begin.

Why not use informed and conservative bodies such as the Met Office in the UK, where balanced, well written and well referenced analyses are available (such as the Met Office’s analysis of the much talked of ‘pause’). There are many bodies to consider, including several in the USA, such as NOAA (which, as one example, provides key source data on the rise in CO2 in the atmosphere, the “Keeling Curve”). There is no shortage of accessible and authoritative data and interpretation. The much attacked IPCC provides a consolidation of thousands of strands of scientific research, which is transparently and freely available.
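As an aside, and purely as my own illustration (not something the Met Office or NOAA pages themselves offer), the Keeling Curve data really is only a few lines of code away. The sketch below pulls NOAA’s monthly Mauna Loa CO2 record using nothing but the Python standard library; the URL and the column layout are assumptions based on NOAA’s published text file and may have changed, so check the file’s header comments before relying on it.

    # Minimal sketch: fetch NOAA's monthly Mauna Loa CO2 record and print
    # the most recent twelve monthly means. URL and column positions are
    # assumptions -- verify against the current file at gml.noaa.gov.
    import urllib.request

    URL = "https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_mlo.txt"

    with urllib.request.urlopen(URL) as response:
        text = response.read().decode("utf-8")

    records = []
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comment/header lines
        fields = line.split()
        year, month = int(fields[0]), int(fields[1])
        average_ppm = float(fields[3])  # assumed: 4th column is the monthly mean
        records.append((year, month, average_ppm))

    # Print the last twelve monthly means
    for year, month, ppm in records[-12:]:
        print(f"{year}-{month:02d}: {ppm:.2f} ppm")

Anyone can run something like this and watch the relentless upward march of the numbers for themselves; no journalist required.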

The concentration of carbon dioxide in our atmosphere has risen from about 280 parts per million (ppm) in pre-industrial times to 400 ppm today (and rising). Since the world would be an icy ball without CO2 in the atmosphere, we rather like the 280 ppm; indeed life has been used to this level for a long while, it having been remarkably stable for the last 1000 years, until we started injecting CO2 into the atmosphere. But systems in equilibrium can easily be knocked out of it by even small perturbations. We see extreme drought and extreme precipitation in different regions – all as expected.

When you add more energy into a system than is getting out, the energy in the system increases, and this is manifest in the form of rising temperatures of the oceans and atmosphere. Of course it is complex, this unfolding of the energy increase as it interacts with the moving parts of the planetary system and its internal cycles. But the basic science is clear and simple. There are thousands of telltale signs of warming, and we have warmed just 0.7°C so far. We are on course for a dangerously warming world.

The current level of CO2 in the atmosphere is unprecedented in 800,000 years, and is due to the man-made burning of fossil fuels. This increase in CO2 is not a small perturbation. It is a great shove, and the system is already in search of a new equilibrium.

The IPCC provides the current best estimate of how much the earth will warm (once it has reached a new equilibrium) as a result of a doubling of CO2 in the atmosphere:

“Equilibrium climate sensitivity is likely to be in the range 2° to 4.5°C with a most likely value of about 3°C, based upon multiple observational and modelling constraints. It is very unlikely to be less than 1.5°C”.
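To make the quoted numbers concrete, here is a rough back-of-the-envelope illustration of my own (it is not taken from the IPCC text): assuming the central sensitivity of 3°C per doubling, the eventual equilibrium warming implied by today’s 400 ppm, relative to the pre-industrial 280 ppm and from CO2 alone, is approximately

    \Delta T_{eq} \approx S \times \log_2(C / C_0)
                  = 3\,^{\circ}\mathrm{C} \times \log_2(400 / 280)
                  \approx 3\,^{\circ}\mathrm{C} \times 0.51
                  \approx 1.5\,^{\circ}\mathrm{C}

This ignores other greenhouse gases, aerosols and the long lag before equilibrium is reached, so it is arithmetic rather than a prediction; but it shows why the roughly 0.7°C of warming observed so far is not the end of the story.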

Note that while there is a range of possible outcomes, it is wishful thinking to hope that the earth will respond according to the least bad of the outcomes on offer.

Hope is not a strategy.

In other words, you do not need to read original scientific papers; or use that as an excuse to instead rely on your favourite newspaper; or alternatively, to set yourself up as yet another ultimate authority.

Your taxes (and those in other countries) are paying for highly skilled, conscientious and well informed people, following well established scientific processes, to provide access to source data and accessible interpretations of the state of things.

Next Steps

So, Prime Minister, how will you proceed?

Will you use the power you now have to face down those in your party who claim that the case is not clear yet?

Will you demonstrate a real commitment to address the now well established human-induced global warming?

Who will be your Minister in charge of the climate change portfolio?

Would Margaret Thatcher have been proud of your decisions?

Will your grandchildren in years to come be proud of your vision and courage?

It is your choice. 

2 Comments

Filed under Climate Science, Essay

Ignoring Denial

[Diagram: AGW opinion matrix]

Professor Richard Betts made a guest post on the blog “And Then There’s Physics” (ATTP) on how we should “label the behaviour, not the person”, in relation to global warming denialism or contrarianism, and in particular labels applied to people, such as “denier”.

He proposes that we should de-polarize the ‘debate’ around Anthropogenic Global Warming (AGW), and specifically, avoid using the term ‘denier’. Of course, it takes all sides to ‘de-polarize’ but in any conflict, it is good to take the initiative.

He further points out that this goes beyond the negative language traded between those with opposed positions, because often those of moderate temperament but with a lot of insight and knowledge (e.g. the climate scientists who are colleagues of Richard Betts) are, it seems, put off from engaging in the blogosphere by the atmosphere that has been created in this increasingly polarized medium.

I was thinking of contributing to the over 300 comments on this blog post, but decided that my best response was a blog post of my own, because I support Professor Betts’ basic premise and wanted to go further: to ask whether polarization was leading to something worse – missing the target!

I envisage a simple matrix to characterize the spectrum of opinions on AGW.

  • In one dimension (vertical), we have the point of view, from the “Pro” (AGW) and “Anti” positions to the much greater population of those who are undecided (no, I cannot say this is backed up by a specific opinion survey, but it is broadly reflected in samples of opinion and polls – the areas are not accurate, merely indicative).
  • The other dimension (horizontal) represents the level of engagement with AGW, from ‘passive’ (and often confused), through ‘engaged’ (with an exploratory, learning posture), to those who are ‘active’ participants in the ‘debate’, with an established Point Of View (POV), which may be backed up with some expertise (though that in itself is sometimes a contentious point).

This is depicted in the diagram at the head of this post. Within this spectrum of views I have overlaid different populations:

A. The mass of the population, who, it seems, are minded to believe in AGW, but are certainly not equipped to argue the case. To a large extent they are passive and rather confused by the arguments.

B. The influencers, opinion formers and the engaged populace are much smaller in number but have significant impact on policy. They are engaged to the extent of exploring and learning, and have formed but malleable opinions. In the ‘Pro’ camp there are a number of activist groups as well as outreach organizations (e.g. COIN). In the ‘Anti’ group are a number of vocal contrarians, such as those who feature in the WSJ. [There are quite a few on both sides – this is not intended to be an exhaustive list.]

C. These have an established point of view (POV): the expert bodies, including scientific societies and of course the Intergovernmental Panel on Climate Change (IPCC), which represents the overwhelming accumulation of scientific knowledge, and the consensus.

D. These have an established POV, and are those groups dedicated to countering the consensus, such as the Global Warming Policy Foundation (GWPF), often within a liberal economic posture.

X. These are the blogs in the blogosphere aligned to the ‘Pro’ position; they include experts in various disciplines, including climate science, as well as amateurs.

Y. These are the blogs in the blogosphere aligned to the ‘Anti’ position; they also include experts in various disciplines as well as amateurs.

To hold “undecided” as an established POV is not an impossible position, and some scientists in this category, after doing the analysis, have moved to the “Pro” position. See, for example, Professor Muller’s change of viewpoint.

The problem with the blogosphere is that it has today a characteristic (partly born of weakly moderated ‘discussion threads’) that seems to encourage an escalation in language. ‘Deniers’ versus ‘warmists’ rapidly degenerates into personal abuse and expletives.

Professor Betts feels this is not helping: it inhibits the engagement of a wider audience, and the recruitment of valuable people who can help in communicating the science and the ensuing issues.

What the diagram above tries to convey is that two groups – A, the mass population, and B, the influencers, opinion formers and engaged populace – are where those of us with a strong POV should be expending our energy. Those in the D category, like the GWPF, have certainly got this message. Those in the C category have in recent years begun to do much better (despite a funding disadvantage), but need to do much more.

We need those in the C and X categories to spend more time talking to each other, to develop the materials needed to engage effectively with categories A and B, rather than engage in attrition with categories D and Y.

Should X category bloggers refuse to talk to Y category bloggers?  Mostly, I believe yes, given the current atmosphere. But when there is a shared interest in a specific topic, there is scope for a constructive discussion (e.g.  to debate the potential role of nuclear in mitigation).

The Anti-Pro polarization is consuming excessive energy while, guess what, the Anti camp is working vigorously to influence the mass population and opinion formers, to try to undermine the Pro position (although increasingly ineffectively, judging from the polls).

Focusing on the conflict between the small number of Pro and Anti bloggers (X vs Y), may provide some kind of gratification, but it fails to ensure we build a wider ‘Pro’ platform.

We need a bigger community of active Pro communicators … that can better engage with both the passive and engaged populace, and use limited time and energy in smart ways.

Maybe the time has come to ignore denial.

43 Comments

Filed under Climate Science, Contrarianism, Debate, Essay

Becoming Digital

It is received wisdom that the world has become digital.

Leaving aside that I now qualify for concessions at some venues, is this true? Is it really an age thing, and only the young will truly ‘be’ digital? Why do we still in many homes live in some mix of analogue and digital existence? Have we really become digital or are we only part way through a long transition? What, if anything, is holding us back? (I will leave for another essay the issue of what is happening in the workplace or enterprise: in this essay I am only concerned with what impinges on home life.)

It is certainly the case that Nicholas Negroponte’s vision of the future, “Being Digital”, published in 1995 when he was Director of the MIT Media Lab (where he remains as Chairman, no doubt with colleagues predicting new futures), provided an excellent checklist for inventions and innovations in the digital arena, and for what he characterized as the transition from atoms (e.g. books) to bits (e.g. e-books) as the irreducible medium for the transmission of information and entertainment. [In the following I will insert the occasional quote from the book.]

Smart TVs

When walking through Heathrow Airport recently I saw a toddler in arms, and as they passed a display screen a little hand reached out pointing at the screen, and tried to swipe it! It amazed me.

“… personal computers almost never have your finger meet the display, which is quite startling when you consider that the human finger is a pointing device you do not have to pick up, and we have ten of them.” (p. 132)

Clearly the touch-screen generation is emerging (although the child was disappointed to discover it failed to respond … it was just a TV monitor!). [The quotation above is similar to the Steve Jobs one included in Walter Isaacson’s biography of him (p. 309), “God gave us ten styluses”, which he uttered in relation to the stylus-bearing Apple Newton on his return to the firm in 1997. But of course Jobs had been dreaming of touch-screen products for many years, and it is incredible that the first iPad was released only 5 years ago, and the iPhone just three years earlier than that.]

Negroponte predicted the convergence of the PC and the TV, but why has it taken me until the closing days of 2014 to acquire a “Smart TV”? It is a complex matter.

One thing is that I like to get full value from the stuff I buy, and the 7-year-old Sony workstation and Bravia monitor (with its inbuilt tuner) meant we could view terrestrial TV and internet catch-up services like BBC iPlayer from the same set of kit, while also using it as a media station for photos and music, with some nice Bose speakers attached. But this is a setup that comes at a price, in ways that are more than simply financial.

The cost of setting up a fully fledged PC (which is mostly intended for entertainment) is high, whereas the Smart TV encapsulates the required processing power for you at a fraction of the cost. Why do we need a geek to watch a film? No reason at all. It really should be plug and view. And this also avoids all those irritating invitations to upgrade this or that software, to rerun virus checks, and to do battle with bloat-ware like Internet Explorer. Not to mention that when we picked it up I could literally lift the Smart TV box with my little finger. This is therefore not only about the TCO (Total Cost of Ownership) but also the TIO (Total Irritation of Ownership). [The old Sony PC setup lives on in my new study, where I will now use its power to greater effect, spending more time curating my vast photo collection and writing blogs like this.]

Sometimes the market is not quite ready for an idea, and it takes time to educate people about the options. The convergence of so many elements – internet services, Full HD, large LED screens, and much more – coupled with people’s poor experiences of high TCO and TIO, means that they, like me, are ready to make the move when thinking about a new “TV”. In my case it was triggered by the thought of moving the media station to my new office, and thinking “Do I REALLY want another PC to drive a TV monitor?”.

eBooks

On a recent long trip, my wife and I succumbed to getting a Kindle, allowing us to take a small library of novels with us for the journey and avoid falling foul of weight limitations on our flights. The Kindle is great technology because it does neither more nor less than one needs, which is a high contrast means of reading text, optimized for easy-on-the-eye reading as close as possible to what we know and love in a physical book. Power consumption is low, so battery life is long, because it does not try to do too much.

Does this stop us buying books? Well no. Even novels are still acquired in physical form sometimes because I suppose we are of an age where we still like the feel of a book in our hands.

But there are other reasons which mean that, even were we to wean ourselves off the physical novel, with its exclusively textual content, other books would not be so easily rendered in compelling electronic form, due to their richer content.

Quite often the digital forms of richer-media books are poorly produced, often being merely flat PDF renditions of the original. One of the books we downloaded for our trip was a Rough Guide to Australia, and frankly it was a poor experience on a Kindle and no substitute for the physical product.

It recalls for me the inertia and lack of imagination of the music companies, who failed to see the potential in digital forms, seeing only threats rather than possibilities, and who were then overhauled by file-sharing and ultimately by ‘products’ such as iTunes and Spotify. In a sense the failure is worse for book publishers, because with books it should not have taken much imagination to see where publishers could have provided different forms of ‘added value’, and so transformed their role in a new digital landscape.

For example, when a real effort is made to truly exploit the possibilities of the digital medium – its interactivity, visual effects, hyperlinking, etc. – then a compelling product is possible that goes far beyond mere text and static visuals. Richard Dawkins’ “The Magic of Reality” for the iPad is an electronic marvel (a book made App), including the artistry of Dave McKean.

It brings the content to life, with wonderful text, touch-screen navigation, graphics and interactive elements. It clearly required a major investment in terms of the art and graphical work to render the great biologist’s ideas and vision into this new form. It could never have been achieved on a Kindle. It was able to shine on an iPad.

This is the next kind of digital book, one that really does exploit the possibilities of the medium, and it would be the future of electronic books if more publishers had the imagination to exploit the platform in this way.

The concept of personalization is also a great idea that only the digital world can bring to reality. This is already happening in news, to a greater or lesser extent, as Negroponte predicted:

“There is another way to look at a newspaper, and that is as an interface to news. Instead of reading what other people thinks is news and what other people justify as worthy of the space it takes, being digital will change the economic model of news selections, make your interests play a bigger role, and, in fact, use pieces from the cutting-room floor that did not make the cut from popular demand.” (p. 153)

However, the physical forms live on, or take a long time to die.

I used to buy The Independent, but now, for reasons partly concerned with content but also the user experience, I have moved to The Guardian tablet product: but we still get a physical ‘paper’ on Sunday, because it is somehow part of the whole sprawling business of boiled eggs, toast and coffee: an indulgence like superior marmalade.

Some physical forms will remain for more persistent reasons.

When we recently went to an exhibition of Henri Cartier-Bresson’s photography in Paris, we came away with the coffee-table-sized book of the exhibition, measuring 30cm x 25cm x 5cm. This book is an event in itself, to be handled and experienced at the coffee table, not peered at through some cold screen.

And what of my old copy of P.A.M. Dirac’s “The Principles of Quantum Mechanics”, in its wonderfully produced Oxford Monograph form, where even the paper has a unique oily smell? I received this for my 21st birthday from my mother in 1974, and it is inscribed by her. For me, it is irreplaceable, in whatever form you might offer it to me.

We will never completely sideline physical / analogue products, but for books at least we may see them being pushed towards two extremes: on the one hand, the low-cost pulp-fiction print-on-demand product; on the other, the high-impact, high-cost product like the coffee-table art book.

Our senses of sight, smell, taste, touch and hearing are analogue, and so we have an innate bias towards analogue. That is why the iPad is so much more natural for a child learning to interact with technology than a traditional PC.

The producers of digital products must work hard to really overcome the hurdles that digital production often faces to match the intimacy and warmth of the physical, analogue forms. But when they do, they can create stunning results.

We are for sure ‘Becoming Digital’, but the journey is far from over and there is still much to learn to make the transition complete, whatever that might mean.

eAlbums

I remember 10 years ago making the shift to digital photography. The trigger this time was a big birthday/ holiday for my wife, and the thought of upgrading my camera, from Canon SLR to Canon SLR, but now a metal bodied just-about-affordable digital one. I had flirted with digital but it had been very expensive to get even close to the quality of analogue (film). But in 2005 I found that the technology had crossed that threshold of affordable quality.

By the time I made the transition to digital I had long since lost the time and appetite for the darkroom, and the Gamer enlarger has been in the loft now for more than 20 years. But even without swapping the chemicals for Photoshop, there is a lot to think about in the move to digital:

How will one organize and index one’s photos?

And, the big question for me, how will one avoid simply substituting the large box of photos and negatives that never quite found time to be curated and nurtured into Albums, with a ‘digital’ box, with JPEG and RAW files that never quite get around to being curated and nurtured into Albums?
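For the organizing and indexing question at least, the drudge work can be scripted. Here is a minimal sketch of my own (the folder layout and paths are invented for illustration, and it uses file timestamps rather than proper EXIF data) that files JPEGs into Year/Month folders using only the Python standard library:

    # Minimal sketch: sort incoming JPEGs into Year/Month folders by their
    # file modification time. Paths and layout are illustrative assumptions.
    from pathlib import Path
    from datetime import datetime
    import shutil

    SOURCE = Path("~/Pictures/incoming").expanduser()   # assumed location
    LIBRARY = Path("~/Pictures/library").expanduser()   # assumed location

    for photo in SOURCE.glob("*.jpg"):
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        destination = LIBRARY / f"{taken.year}" / f"{taken.month:02d}"
        destination.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(destination / photo.name))
        print(f"{photo.name} -> {destination}")

But scripting only answers the filing question; the curation question is another matter entirely.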

When my wife and I returned from the holiday of a lifetime in Tanzania, I had some 3000 shots (high resolution JPEGs), including a sequence of a leopard on the bough of a tree which I had waited an hour to take: the few fleeting moments as it rose from its slumbers, stretched and then disappeared into the grasses of the Serengeti.

How could I turn this huge collection into a Christmas present for my wife worthy of the experience?

  • Well, I first decided how to thematically organize the album … Our Journey, Enter the Big 5, The Cats, …
  • Then I sifted and selected the photos I would include in the album, before printing, in different sizes and aspect ratios, the 150 photos that survived the cull.
  • These were then pasted into the album, leaving plenty of space for narrative, commentary, and the odd witty aside.
  • The whole process took 3 days. A work of love and a little art, I like to think.

Could that really be done purely digitally?

I know, and can concede, that much of this analogue work could now be done using some online ‘create your album’ service (of which there are many), perhaps even creating a look and feel that tries to emulate the warmth and intimacy of the old-fashioned album.

There is a ‘but’,  because even if we have digitized the process to that extent, people still want the physical album as the end product sent to their home.

Why?

Surely we could have a digital display cycling through the album and avoiding the need for a physical artefact entirely? Why do we, in Negroponte’s language, need atoms when we can have bits instead?

Part of this is cultural. Handing around an album at Christmas rather than clicking a display on the device on the wall is still something that commands greater intimacy. But even supposing we clear that cultural and emotional hurdle there remains another more fundamental one.

Will this iconic album be around in 50 or 100 years’ time, like the albums we see from our grandparents, cared for as a family heirloom? While many people – knowingly or otherwise – are now storing their digital photos in the cloud, and this number is growing exponentially, how many would trust cloud services to act as custodians of the family’s (digital) heirlooms?

I would suggest that few would today. So, what needs to happen next to take us from ‘Becoming Digital’ to fully digital, at least when it comes to our family photos and videos?

Google or Facebook are not the answer, because they do not understand the fundamental need: that there may actually be stuff I do not want shared by default with every person I come into contact with. Instead I am obliged to understand increasingly complex and poorly thought-out access controls to ensure confidentiality, and if I slip up, it is my fault.

I am prepared to pay for professional services that respect my need for confidentiality and copyright by default, where sharing happens only when, and with whom, I choose, through choices simply and clearly made.

Clearly the Googles and Facebooks of this world do not offer a philosophy or business model to provide such a platform, because we have entered into a pact with these social media giants: we get to use their platforms for free if and only if we are prepared to share intimate details of our lives, so we can be monetized through a network of associated Apps and services that make recommendations to us. They are marketing engines offering to be our friend!

That is the choice we are forced to make if we want to stay in touch with our distant family networks.

So what is the alternative?

Well, we need a whole lot of stuff that goes beyond devices and platforms, and is nothing like social media. Imagine that the National Archives in a country like the UK joined forces with a respected audit firm (like PwC) and legal firm (like Linklaters) to institute a kind of ‘digital repository accreditation and archiving service’ that acted in support of commercial providers, and was funded by its own statutory services.

The goal would be to set legally enshrined standards for the accreditation, auditing and care of digital artefacts in all forms, in perpetuity, acting as the trusted archive of last resort. Added value services could be developed including rich descriptive meta-data, collection management, etc., to enable commercial providers to create a market that goes far beyond mere storage, but was not dependent on the long-term viability of any commercial entity.

This combined enterprise would provide that extra level of confidence customers fundamentally need.

Now that would be interesting!

As this example illustrates, the process of ‘Becoming Digital’ is so much more than the latest device, or App, or other gizmo, or even content production process (as we saw with eBooks).

It requires something that satisfies those less easy to define emotional, cultural and legal requirements that would make it truly possible for my grandchildren to enjoy visual and textual heirlooms in a purely digital form, secure and confidential, in perpetuity.

Conclusion

“Being Digital” was a seminal and visionary book and it is no wonder that the incomparable Douglas Adams in his review comments included on the cover said:

“… Nicholas Negroponte writes about the future with the authority of someone who has spent a great deal of time there.”

Now that we are nearly 20 years into that future, it is interesting to see how things are playing out, how much of his vision has come to pass, and how much more there is to do.

What is most evident to me, from a personal perspective at least, is that ‘Becoming Digital’ in all its forms is a rocky and personal path, with lots of hurdles and distractions on the way, and an awkward marriage between analogue and digital, between atoms and bits, that looks set to continue for a long while yet … at least in this household.

(c) Richard W. Erskine, 2014.

Leave a comment

Filed under Digital Media, eBooks, Essay, Photography, TV

The Quantum of doubt, and Uncertainty of Journalism

ATTP wrote a great piece on policing science that prompted this blog post. I am shocked that Matt Ridley, as someone with a good science degree, is stooping so low as to ‘diss’ so many scientists, including the President of the Royal Society.

I have just seen the excellent first episode of Professor Al-Khalili’s BBC4 TV series The Secrets of Quantum Physics, which combined an historical perspective with some practical hands-on science. Great stuff.

It covers the battle between Bohr and Einstein on the interpretation of quantum physics, and how much later Bell’s insight and subsequent experiments by others helped to show that Bohr was right after all.

Did the scientific community denigrate Bohr or Einstein over the many years that the controversy raged? No, they took sides for sure, but this was a scientific debate, not a personal attack. Was there a ‘Matt Ridley’ or ‘Melanie Phillips’ from the press judging this debate? No, because they hadn’t a clue how to judge it. Quantum theory is trivial in comparison to climate science, so how come they feel skilled enough to judge its veracity?

Yet, greats like Bohr and Einstein respected each other even as they deeply disagreed, like two top sparring partners, but ultimately, they respected the process of science above their mutual respect: science was the winner.

I respect the huge number of scientists grappling with something far more complex than quantum theory: the fate of our climate. They do so with great dignity and perseverance, amidst the noise and denigration of a few such as the aforementioned: the decades studying ice cores; the decades developing models that are brilliant (“all models are wrong but some are useful” is true, but a better way of putting it might be “all models are created with great diligence using the best science, best computers and best empirical evidence available … and by goodness, they are very useful indeed” – and we do not have a Planet B on which to run a blind controlled experiment!); the list is long.

Science is about making mistakes. Challenging. Testing. Theorizing. Testing again. In true Popperian style: the goal is to make the mistakes as quickly as possible! But the diverse and argumentative community of scientists are the best at acting as judges and jury – this is how it has worked to date – because they have the skills and processes to do this. If there is a brilliant new discovery to be had to confound the status quo, why would someone keep quiet about it!?

And even when the knowledge did not exist to understand something like the ‘ultraviolet catastrophe’, it was scientists (first Planck in 1900 identifying quanta as a requirement for understanding the black body spectrum, then Einstein in explaining the photoelectric effect in 1905 and finally convincing everyone that light quanta were real) that resolved the problem.

Were they shunned as heretics who did not abide by the mainstream? Actually, after a little debate, the cream comes to the surface in science. Always has. Always will.

In climate science, we are not expecting or needing new physics. The problem is complex, but we know that we can derive broad and reasonable conclusions from complex and difficult data. That is true in climate science and true in big data. But not in Journalism.

The Wall Street Journal and Daily Mail give over acres of newsprint and webspace to the likes of Matt Ridley, Dominic Lawson and Melanie Phillips to spout their ill-informed vitriol against science and scientists. These never genuinely challenge the science but aim to attack the person or organization. They ascribe motives, rather than offering competing science. They have none.

Of course science weeds out bad apples, like the now struck-off Andrew Wakefield. He is also a case study in the diabolical abuse of power by some in the press, like the Daily Mail, during the MMR debacle and now over global warming.

Not even a thousand years of study and re-evaluation will somehow elevate poor Dr Wakefield from poor misunderstood researcher to misunderstood genius, as his supporters would have us believe.

The Daily Mail does not appreciate that for every genuine genius, there are a legion of cranks. In journalism too often, diatribe and horrible brown stuff rises to the surface, not cream. The tendency to champion cranks over genuine science is both bizarre and a huge disservice to the readers of these organs.

Does Joe Public trust the future of science more in the hands of the institutions of science such as the Royal Society and National Academy of Science, or the habitually contrarian agents of scientific illiteracy such as the Daily Mail and Wall Street Journal?

I think we know that the wisdom of the masses would not fail us.

2 Comments

Filed under Climate Science, Essay, Philosophy of Science, Science

The Spurious ‘Debate’ On Anthropogenic Global Warming (AGW)

A lot has been made of the often toxic ‘debates’ that accompany news items and blogs on the web. You do not have to look far. Take many news items on the BBC and you will often find a journalist’s blog leapt upon by all manner of uncivil, anonymous and barely moderated posts that degenerate into speculations about his or her motives, and conspiracies about this or that.

This is never more so than when the topic involves science, and in particular climate science. The vitriol of many posts means that people with a genuine understanding of the science are too weary to engage in these discussion threads. When they do, and try to build a bridge with so-called sceptics, they find that their motives are questioned. The problem is that comment threads on the web seem to be about as far from the norms of ‘debate’ as it is possible to get.

For a debate, the protagonists must start from at least some areas of common ground, and then debate their differences using a common language, where the words from each side are understood within common norms and frameworks. Yes, it can get heated but debate can remain ‘on subject’ and not resort to personal attacks.

At the Hay book festival a few years ago two prominent historians, Eric Hobsbawm and Niall Ferguson, debated the origins of the First World War. Despite their serious political differences, a civilized debate ensued, and they actually ended up agreeing.

Now imagine that a senior scientist at the National Ignition Facility in Livermore wanted to challenge the motion “There is no prospect of commercialized fusion power making a significant contribution to mitigating the current pathway towards dangerous anthropogenic global warming (AGW)”. He would start with several agreed points, such as the reality of AGW and, probably, the dangerous pathway part too. But imagine that this was a blog ‘debate’: firstly, someone jumps in to say that cold fusion already works and there is a conspiracy to hide this truth from the world; secondly, a guy pops up arguing that AGW is a lie, because it defies common sense that 400 parts per million of CO2 could have so much effect, and he has references to prove it!; etcetera.

There are a number of factors at work here that will prevent genuine debate:

  • To have a debate, there must be common norms, language and frameworks that enable constructive, focused debate: in the AGW ‘debate’, for example, a basic understanding of the kinetic theory of heat, the laws of thermodynamics, the absorption spectra of molecules, and so on, before one can ‘debate’ the way in which models use this basic physics. I can imagine Professor Betts of the Met Office having a debate with James Lovelock of Gaia fame on a motion “The lack of modelling of sub-surface ocean circulation undermines the ability of general circulation models of the climate to make useful predictions of future warming”. A fair challenge at first sight, but I bet Professor Betts has plenty of arguments with which to have a sensible debate with Lovelock. Lovelock would not jump in with “but CO2 is not a greenhouse gas”, because that is not true.
  • The casual use of crooked forms of argument that have been studied for as long as debate has been with us (for a survey, see the sadly out of print book by Thouless: http://neglectedbooks.com/Straight_and_Crooked_Thinking.pdf ), which pepper many political arguments but are now used routinely in these ‘debates’.
  • What philosophers call ‘category errors’ abound: these discussion threads often conflate so many apparently random points that debate is well nigh impossible. Given that, as my mother used to say “empty vessels make the most noise”, is it any wonder that the substance of any debate gets lost in the noise of ignorance and vitriol?

One feels bound to ask “Who is a debate for?”. For students campaigning against investment in Fossil Fuels there is little interest in ‘debates’ as to the truth of AGW, as they are convinced that there is a serious AGW issue and have moved on from debate to action.

Lord Lawson, on the other hand, probably spends little time going through discussion threads. His language and framework is not a science-based one, but is based on a liberal view of economics: human progress and the market will save the day, so the details of the science are really not something he is equipped to debate or is fundamentally interested in. He probably regards AGW proponents as at best unwitting tools of anti-free market forces which must be defeated at all costs.

For those ‘sceptics’ who are genuinely interested in challenging the science, rather than the motives of scientists, there needs to be a forum for genuine debate, and we must stop pretending that the un-moderated threads that largely populate blogs that challenge AGW can provide this platform.

You may well ask, are not the people who are genuinely interested in challenging the science the scientists themselves? After all, they do this day in and day out using credible scientific venues, such as refereed journals, conferences and so forth. That is indeed true. They have the training, skills and experience to enable them to challenge the science effectively, and to reach solid conclusions. By that process, a consensus has emerged that the Earth is warming, and that it is largely or entirely due to human activity.

‘Sceptics’ who are genuinely interested in challenging that consensus will have to participate in the same scientific process. This requires a minimum level of knowledge and skill – putting in the time and effort required – before they can “challenge the science”. If they’re not willing to do that, they can hardly be considered genuine sceptics. Scientists are quintessentially sceptical. They are the uber ‘sceptics’!

But what about those who are ill equipped to challenge the science, but perhaps find themselves challenged or befuddled by it? In this broader realm, concerned with communicating established science to those who have an interest in it but lack the knowledge and skills – including some journalists, politicians, University of the 3rd Agers, etc. – there is a need for credible scientists to engage. This is more likely to be achieved using old-fashioned forms of discourse – village halls, or video talks, that are as close as possible to face-to-face discourse – and far removed from the un-moderated, often anonymous ‘debates’ on the web. This is not debate; it is an open form of communication.

The debate comes, of course, for all of us when we consider the options we face when confronted by the science. Do we continue with nuclear, or push to scale up renewables? These are valid topics for political debate. There is no reason why this discourse should not be conducted in concert with those sceptics who are able to engage in genuine debate about the things worth debating – those who share sufficient norms, language and frameworks – and there may actually be some advantage in doing so.

I would welcome a debate with Lord Lawson, but that is impossible while he remains in denial about the science. So, just as Ferguson and the now departed Hobsbawm were able, on an historical topic, to engage in useful debate that led to a conclusion despite the huge (political) chasm between them, so too, even on a complex and challenging topic like AGW, discourse is possible – given a suitable topic for debate.

Leave a comment

Filed under Climate Science, Essay, Science