Category Archives: Computer Science

Google and the Internet: Friend or Foe to the Planet?

I keep hearing a meme that goes along the lines of “a Google search will use X amount of energy”, where X is usually stated in the form of a scary number.

I think numbers are important.

According to one source, a Google search uses about 0.0003 kWh of energy, whereas a 3 kW kettle running for one minute uses 3 x (1/60) = 1/20 = 0.05 kWh, which is over 160 times as much (another piece uses an equivalent figure – see Note 1).

On the UK grid, with a carbon intensity of approximately 300 gCO2/kWh (and falling), that would equate to 0.09 gCO2, or roughly 0.1 gCO2, per search. On a more carbon-intensive grid it could be double this, giving 0.2 gCO2 per search, which is the figure Google provided in response to The Sunday Times article featuring MIT graduate Alex Wissner-Gross (cited here), who had estimated 7 gCO2 per search.

If the average Brit does the equivalent of 100 searches a day, that would be:
100 x 0.0003 kWh = 0.03 kWh, whereas according to Prof. MacKay our total energy use (in all forms) is 125 kWh per person per day in the UK, over 4,000 times more.
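For those who like to check the arithmetic, here is a minimal Python sketch using only the figures quoted above (0.0003 kWh per search, a 3 kW kettle for one minute, roughly 300 gCO2/kWh for the UK grid, 100 searches a day, and Prof. MacKay’s 125 kWh per person per day). The numbers are the cited estimates, not measurements of mine.

# Back-of-envelope check of the figures quoted above (all inputs are the
# cited estimates, not my own measurements).
SEARCH_KWH = 0.0003            # energy per Google search (cited estimate)
KETTLE_KW = 3.0                # a typical kettle's power rating
KETTLE_MINUTES = 1.0
GRID_GCO2_PER_KWH = 300        # approximate UK carbon intensity
SEARCHES_PER_DAY = 100
PERSONAL_KWH_PER_DAY = 125     # Prof. MacKay's all-forms figure for the UK

kettle_kwh = KETTLE_KW * (KETTLE_MINUTES / 60)                    # 0.05 kWh
print(f"kettle vs search: {kettle_kwh / SEARCH_KWH:.0f}x")        # ~167x
print(f"CO2 per search: {SEARCH_KWH * GRID_GCO2_PER_KWH:.2f} g")  # ~0.09 g
daily_kwh = SEARCHES_PER_DAY * SEARCH_KWH                         # 0.03 kWh
print(f"daily searches vs total personal energy use: 1 part in "
      f"{PERSONAL_KWH_PER_DAY / daily_kwh:.0f}")                  # ~1 in 4,167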

But that is not to say that the total energy used by Google is trivial.

According to a Statista article, Google used over 10 terawatt-hours globally in 2018 (10 TWh = 10,000,000,000 kWh): a huge number, yes.

But the IEA reports that the world used about 23,000 TWh of electricity in 2018. So Google would represent about 0.04% of the world’s electricity on that basis: not an insignificant number, but hardly a priority when compared to electricity generation as a whole, transport, heating, food and forests. Of course, the internet is more than simply searches – we have data analysis, routers, databases, web sites, and much more. Forbes published findings from …

A new report from the Department of Energy’s Lawrence Berkeley National Laboratory figures that those data centers use an enormous amount of energy — some 70 billion kilowatt hours per year. That amounts to 1.8% of total American electricity consumption.

Other estimates indicate that the internet’s share overall is rising, and now amounts to a few percentage points, rivalling aviation. So I do not trivialise the impact of the internet overall, as one ‘sector’ that needs to address its carbon footprint.
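To put those global and US figures in perspective, here is the same kind of back-of-envelope check in Python; again, the inputs are simply the estimates cited above.

# Google's 2018 electricity use as a share of world electricity, plus the
# US total implied by the Berkeley Lab data-centre figures quoted above.
GOOGLE_TWH = 10.0        # Statista figure for Google, 2018
WORLD_TWH = 23_000.0     # IEA figure for world electricity, 2018
print(f"Google's share of world electricity: {GOOGLE_TWH / WORLD_TWH:.2%}")  # ~0.04%

DATA_CENTRE_KWH = 70e9   # Berkeley Lab estimate for US data centres
US_SHARE = 0.018         # quoted as 1.8% of US electricity consumption
print(f"implied US total: {DATA_CENTRE_KWH / US_SHARE / 1e9:,.0f} TWh")      # ~3,900 TWh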

However, the question naturally arises, regarding the internet as a whole:

how much energy does it save (by sparing us trips to the library, enabling remote conferencing, letting us Facebook family across the world rather than flying, and so on), compared to the energy it uses?

If in future it enables us to have smarter transport systems, smart grids, smart heating, and so on, it could radically increase the efficiency of our energy use across all sectors. Of course, we would want it used in that way, rather than as a ‘trivial’ additional form of energy usage (e.g. hosting virtual reality games).

It is by no means clear that the ‘balance sheet’ makes the internet a foe rather than friend to the planet.

Used wisely, the internet can be a great friend: if it reduces our flying, stops us over-heating our homes, helps optimise public transport use, and so forth. This is not techno-fetishism, but the wise use of technology alongside the behavioural changes needed to find climate solutions. Technology alone is not the solution; solutions must be people-centred.

Currently, the internet’s own energy consumption is a sideshow when compared to the other things we do.

Stay focused people.

Time is short.

(c) Richard W. Erskine, 2019

 

Note 1

I have discovered that messing about with ‘units’ can cause confusion, so here is an explainer. The cited article uses a figure of 0.3 watt-hours, or 0.3 Wh for short. The more commonly used unit of energy consumption is the kilowatt-hour, or kWh. Since 1000 Wh = 1 kWh, dividing both sides by 1000 gives 1 Wh = 0.001 kWh, and one small step further gives 0.1 Wh = 0.0001 kWh. Hence 0.3 Wh = 0.0003 kWh. If you don’t spot the ‘k’, things do get mighty confusing!

 


Filed under Computer Science, Global Warming Solutions, Science in Society

The Zeitgeist of the Coder

When I go to see a film with my wife, we always stick around for the credits, and the list has got longer and longer over the years … Director, Producer, Cinematographer, Stuntman, Grips, Special Effects … and we’ve only just started. Five minutes later and we are still watching the credits! There is something admirable about this respect for the different contributions made to the end product. The degree of differentiation of competence in a film’s credits is something that few other projects can match.

Now imagine the film reel for a typical IT project … Project Manager, Business Analyst, Systems Architect, Coder, Tester and we’re almost done, get your coat. Here we have the opposite extreme: a complete failure to identify, recognise and document the different competencies that surely must exist in something as complex as a software project. Why is this?

For many, the key role on this very short credits list is the ‘coder’. There is this zeitgeist of the coders – a modern day priesthood – that conflates their role with every other conceivable role that could or should exist on the roll of honour.

A good analogy for this would be the small-scale general builder. They imagine they can perform any skill: they can fit a waterproof membrane on a flat roof; they can repair the leadwork around the chimney; they can mend the lime mortar on that Cotswold stone property. Of course, each of these requires deep knowledge and experience of the materials, tools and methods needed to plan and execute the job properly. A generalist will overestimate their abilities and underestimate the difficulties, and so they will always make mistakes.

The all-purpose ‘coder’ is no different, but has become the touchstone for our digital renaissance. ‘Coding’ is the skill that trumps all others in the minds of the commentariat.

Politicians, always keen to jump on the next bandwagon, have for some years now been falling over themselves to extol the virtues of coding as a skill that should be promoted in schools, in order to advance the economy.  Everyone talks about it, imagining it offers a kind of holy grail for growing the digital economy.  But can it be true? Is coding really the path to wealth and glory, for our children and our economy?

Forgetting for a moment that coding is just one of the skills required on a longer list of credits, why do we all need to become coders?

Not everyone is an automotive engineer, even though cars are ubiquitous, so why would driving a car mean we all have to be able to design and build one? Surely only a few of us need that skill. In fact, whilst cars – in the days when we called them old bangers – did require a lot of roadside fixing, they are now so good we are discouraged from tinkering with them at all.  We the consumers have become de-skilled, while the cars have become super-skilled.

But apparently, every kid now needs to be able to code, because we all use Apps. Of course, it’s nonsense, for much the same reasons it is nonsense that all car drivers need to be automotive engineers. And as we decarbonise our economy Electric Vehicles will take over, placing many of the automotive skills in the dustbin. Battery engineers anyone?

So why is this even worth discussing in the context of the knowledge economy? We do need to understand whether coding has any role in the management of our information and knowledge, and if not, what skills we do require. We need to know how many engineers are required and, crucially, what type of engineers.

But let’s stick with ‘coding’ for a little while longer. I would like to take you back to the very birth of computing, to deconstruct the word ‘coding’ and place it in context. The word originates from the time when programming a computer meant knowing the very basic operations expressed as ‘machine code’ – move a byte to this memory location, add these two bytes, shift everything left by 2 bytes – which was completely indecipherable to the uninitiated. It also had a serious drawback: a program would have to be re-written to run on another machine, with its own particular machine code. Since computers were evolving fast, and software needed to be migrated from old to new machines, this was clearly problematic.

Grace Hopper came up with the idea of a compiler in 1952, quite early in the development of computers. Programs would then be written in a machine-agnostic ‘high-level language’ (designed to be readable, almost like a natural language, but with a simple syntax to allow logic to be expressed … If (A = B) Then [do-this] Else [do-that]). A compiler on a machine would take a program written in a high-level language and ‘compile’ it into the machine code that could run on that machine. The same program could thereby run on all machines.

In place of ‘coders’ writing programs in machine code, there were now ‘programmers’ doing this in high-level languages such as COBOL or FORTRAN (both of which were invented in the 1950s), and in later languages as they evolved.

So why people still talk about ‘coders’ rather than ‘programmers’ is a mystery to me. Were it just an annoying misnomer, one could perhaps ignore it as an irritant, but it reveals a deeper and more serious misunderstanding.

Coding … I mean Programming … is not enough, in so many ways.  When the politician pictures a youthful ‘coder’ in their bedroom, they imagine the next billionaire creating an App that will revolutionize another area of our lives, like Amazon and Uber have done.

But it is by no means clear that programming, as currently understood, is the right skill for the knowledge economy. As Gottfried Sehringer wrote in a WIRED article, “Should we really try to teach everyone to code?”, even within the narrow context of building Apps:

“In order to empower everyone to build apps, we need to focus on bringing greater abstraction and automation to the app development process. We need to remove code — and all its complexity — from the equation.”

In other words, just as Grace Hopper saw the need to move from Coding to Programming, we need to move from Programming to something else. Let’s call it Composing: a visually interactive way to construct Apps with minimal need to write lines of text to express logical operations. Of course, just as Hopper faced resistance from the Coders, who poured scorn on the dumbing down of their art, the same will happen with the Programmers, who will claim it cannot be done.

But the world of digital is much greater than the creation of ‘Apps’. The vast majority of the time spent doing IT in this world is in implementing pre-built commercial packages.  If one is implementing them as intended, then they are configured using quite simple configuration tools that aim to eliminate the need to do any programming at all. Ok, so someone in SAP or Oracle or elsewhere had to program the applications in the software package, but they are a relatively small population of technical staff when compared to the numbers who go out to implement these solutions in the field.

Of course it can all go wrong, and often does. I am thinking of a bank that was in trouble because their creaking old core banking system – written in COBOL decades ago by programmers in the bank – was no longer fit for purpose. Every time changes were made to financial legislation, such as tax, the system needed tweaking. But it was now a mess, and when one bug was fixed, another took its place.

So the company decided to implement an off-the-shelf package, which would do everything they needed, and more. The promise was the ability to become a really ‘agile’ bank. They would be able to introduce new products to market rapidly in response to market needs or to new legislation. It would take just a few weeks, rather than the 6 months it was currently taking. All they needed to do was some configuration of the package so that it would work just as they needed it to.

The big bosses approved the big budget, then left everyone to it. They kept being told everything was going well, and so much wanted to believe this that they failed to ask the right questions of the team. Well, guess what: it was a complete disaster. After 18 months, with everything running over time and over budget, what emerged? The departmental managers had insisted on keeping all the functionality from their beloved but creaking old system; the big consultancy was being paid for man-hours of programming, so did not seem to mind that the off-shored programmers were having to stretch and bend the new package out of shape to make it look like the old system; and the internal project management was so weak that they were unable to call out the issues, even if they had fully understood them.

Instead of mere configuration, the implementation had large chunks of custom programming bolted onto the package, making it just as unstable and difficult to maintain as the old system. Worse still, given the way it had been implemented, it became very difficult to upgrade the package and install the latest version (to derive benefits from new features). There was now a large support bill just to keep the new behemoth alive.

In a sense, nothing had changed.

Far from ‘coding’ being the great advance for our economy, it is often, as in this sorry tale, a great drag on it, because this is how many large system implementations fail.

Schools, colleges and universities train everyone to ‘code’, so what will those students do in the field? Like the man with a hammer, they will see every problem as a nail, even when a precision milling machine was the right tool to use.

Shouldn’t the student be taught how to reframe their thinking and use the different skills that are appropriate to the task in hand? Today we have too many Coders and not enough Composers, and it seems everyone is to blame, because we are all seduced by this zeitgeist of the ‘coder’.

When we consider the actual skills needed to implement, say, a large, data-oriented software package – like that banking package – one finds that the activities needed include, for example: Requirements Analysis, Data Modelling, Project Management, Testing, Training and, yes of course, Composing. Programming should be restricted to areas such as data interfaces to other systems, where it must be quarantined so as not to undermine the upgradeability of the software package that has been deployed.

So what are the skills required to define and deploy information management solutions, which are document-oriented, aimed at capturing, preserving and reusing the knowledge within an organization?

Let the credits roll: Project Manager; Information Strategist; Business Analyst; Process Architect; Information Architect; Taxonomist; Meta-Data Manager; Records Manager; Archivist; Document Management Expert; Document Designer; Data Visualizer; Package Configurer; Website Composer; … and not a Coder, or even a Programmer, in sight.

The vision of everyone becoming coders is not only the wrong answer to the question; it’s also the wrong question. The diversity of backgrounds needed to build a knowledge economy is very great. It is a world beyond ‘coding’ which is richer and more interesting, open to those with backgrounds in software of course, but also in science and the humanities. We need linguists as much as we need engineers; philosophers as much as we need data analysts; lawyers as much as we need graphics artists.

To build a ‘knowledge economy’ truly worthy of the name, we need to differentiate and explore a much richer range of competencies than the narrow way in which information professionals are defined today, so as to address all the issues we will face.

(C) Richard W. Erskine, 2017

——

Note:

In this essay I am referring to business and institutional applications of information management. Of course there will be areas, such as scientific research or military systems, which will always require heavy-duty, specialist software engineering; but this is another world compared to the vast need in institutions for repeatable solutions to common problems, where, I would argue, other skills are much more relevant and important to success.


Filed under Essay, Information Management, Software Development

In Praise of Computer Models

As you walk or commute to work, does it ever occur to you how much thought and effort goes into keeping the lights on?

I remember many years ago doing some consulting for a major utilities company, and on one visit being taken to a room full of PhD-level mathematicians. “What are they doing?” I asked. “Refining models for calculating the price of electricity!” The models had to calculate the price on a half-hourly basis for the market. The modellers had to worry about supply, including how electricity is distributed and how fast incremental supply can be brought on stream; and, on the demand side, the cycles of demand as well as those unusual events like 10 million electric kettles being put on at half time during a major football game.
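To give a flavour of the problem (and only a flavour: this bears no relation to the real models those mathematicians were refining), here is a toy Python sketch of a ‘merit order’ calculation, in which the cheapest generation is dispatched first and the marginal plant sets the price for each half-hourly settlement period. Every generator, cost and demand figure below is invented for illustration.

# Toy merit-order pricing: for each half-hour, dispatch the cheapest
# generation first; the cost of the marginal plant sets the price.
# All capacities, costs and the demand profile are invented for illustration.
import math

generators = [            # (name, capacity in MW, marginal cost in £/MWh)
    ("nuclear", 8000, 10.0),
    ("wind",    6000, 5.0),
    ("ccgt",   15000, 45.0),
    ("peaker",  4000, 120.0),
]

def half_hourly_price(demand_mw):
    """Price for one settlement period under a simple merit order."""
    remaining = demand_mw
    for _, capacity, cost in sorted(generators, key=lambda g: g[2]):
        remaining -= capacity
        if remaining <= 0:
            return cost   # this plant is the marginal one, so it sets the price
    raise RuntimeError("demand exceeds total capacity: brown-out territory")

# A crude daily demand cycle, with a 'TV pickup' surge at half time (period 40).
for period in range(48):
    demand = 22000 + 6000 * math.sin(2 * math.pi * (period - 14) / 48)
    if period == 40:
        demand += 3000
    print(f"{period:02d}: demand {demand:7.0f} MW, "
          f"price £{half_hourly_price(demand):.0f}/MWh")

The real models, of course, also have to capture how fast plant can ramp up, the physical constraints of the distribution network, forecast uncertainty, and much else besides.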

It should be pretty obvious why modelling of the electricity supply and demand during a 24 hour cycle is crucial to the National Grid, generators, distributors and consumers. If we misjudge the response of the system, then that could mean ‘brown outs’ or even cuts.

In December 2012: “… the US Department of Homeland Security and Science held a two-day workshop to explore whether current electric power grid modelling and simulation capabilities are sufficient to meet known and emerging challenges.” 

As explained in the same article:

“New modelling approaches could span diverse applications (operations, planning, training, and policymaking) and concerns (security, reliability, economics, resilience, and environmental impact) on a wider set of spatial and temporal scales than are now available.

A national power grid simulation capability would aim to support ongoing industry initiatives and support policy and planning decisions, national security issues and exercises, and international issues related to, for instance, supply chains, interconnectivity, and trade.”

So we see that we move rapidly from something fairly complex (calculating the price of electricity across a grid), to an integrated tool to deal with a multitude of operational and strategic demands and threats. The stakeholders’ needs have expanded, and hence so have the demands on the modellers. “What if this, what if that?”.

Behind the scenes, unseen to the vast majority of people, are expert modellers, backed up by multidisciplinary expertise, using a range of mathematical and computing techniques to support the operational and strategic management of our electricity supply.

But this is just one of a large number of human and natural systems that call out for modelling. Naturally, this started with the physical sciences but has moved into a wide range of disciplines and applications.

The mathematics applied to the world derives from the calculus of the 17th Century but was restricted to those problems that were solvable analytically, using pencil and paper. It required brilliant minds like Lagrange and Euler to develop this mathematics into a powerful armoury used for both fundamental science and applied engineering. Differential equations were the lingua franca of applied mathematics.

However, it is not an exaggeration to say that a vast range of problems were totally intractable using solely analytical methods or even hand-calculated numerical methods.

And even for some relatively ‘simple’ problems, like the motions of the planets, the ‘three-body problem’ meant that a closed mathematical expression to calculate the positions of the planets at any point in time was not possible. We have to calculate the positions numerically, using an iterative method to find a solution. The discovery of Neptune was an example of how to do this, but it required laborious numerical calculations.
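To make ‘iterative method’ concrete, here is a minimal Python sketch of the kind of step-by-step calculation involved: a single planet orbiting the Sun, advanced with a simple leapfrog scheme. A real ephemeris calculation handles many mutually attracting bodies and far more careful numerics; this is just the shape of the idea.

# A single planet orbiting the Sun, stepped forward with a leapfrog
# (kick-drift-kick) integrator.  Units: AU and years, so GM_sun = 4*pi^2.
import math

GM = 4 * math.pi ** 2

def accel(x, y):
    """Gravitational acceleration due to the Sun, placed at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

x, y = 1.0, 0.0               # start 1 AU from the Sun ...
vx, vy = 0.0, 2 * math.pi     # ... moving at circular-orbit speed
dt = 0.001                    # time step in years

for _ in range(1000):         # integrate for one year
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax       # half kick
    vy += 0.5 * dt * ay
    x += dt * vx              # drift
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax       # half kick
    vy += 0.5 * dt * ay

print(f"after one year: x = {x:.4f} AU, y = {y:.4f} AU")   # back near (1, 0)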

Move from Neptune to landing a man on the moon, or to Rosetta’s Philae lander on the surface of the comet 67P/Churyumov–Gerasimenko, and pencil and paper are no longer practical; we need a computer. Move from this to modelling a whole galaxy of stars, a collision of galaxies, or even the evolution of the early universe, and we need a large computer (for example).

Of course some people had dreamed of doing the necessary numerical calculations long before the digital computer was available. In 1922 Lewis Richardson imagined 64,000 people each with a mechanical calculator in a stadium executing numerical calculations to predict the weather.

Only with the advent of the modern digital computer was this dream realised. And of course, the exponential growth in computing power has meant that each 18-month doubling of computing power has created new opportunities to broaden or deepen the models’ capabilities.

John von Neumann, a key figure in the development of the digital computer, was interested in two applications – modelling the processes involved in the explosion of a thermonuclear device and modelling the weather.

The innovation in the early computers was driven by military funding, and much of the pioneering work on computational physics came out of places like the Lawrence Livermore Laboratory.

The Monte Carlo method, a ubiquitous tool in many different models and applications, was invented by Stanislaw Ulam (a mathematician who is co-author of the Teller-Ulam configuration for the H-bomb). This is one of many innovations used in computer models.
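The idea is easily illustrated: estimate a quantity by repeated random sampling. Here is a minimal Python sketch (nothing like Ulam’s original application, of course) that estimates pi by scattering random points over a unit square and counting how many land inside the quarter circle.

# Minimal Monte Carlo sketch: estimate pi from the fraction of random points
# in the unit square that fall inside the quarter circle of radius 1.
import random

def estimate_pi(samples=1_000_000, seed=1):
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())   # ~3.14, and the estimate sharpens as the sample count grows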

The same mathematics and physics used for classical analysis has been reformulated in a form susceptible to computing, so that the differential calculus is rendered as the difference calculus. The innovations and discoveries made then and since are as much a part of the science and engineering as the fundamental laws on which they depend. The accumulated knowledge and methods have served each generation.
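To make the shift from the differential to the difference calculus concrete, here is a minimal Python sketch using Newton’s law of cooling, dT/dt = -k(T - T_env): the derivative is replaced by a finite difference and the equation is stepped forward in time, then compared with the exact exponential solution. The equation and parameters are chosen purely for illustration.

# The derivative dT/dt is replaced by the finite difference (T_next - T) / dt,
# applied to Newton's law of cooling: dT/dt = -k * (T - T_env).
import math

k, T_env = 0.1, 20.0        # cooling constant (1/min) and room temperature (C)
T, dt = 90.0, 0.5           # initial temperature (C) and time step (min)

t = 0.0
while t < 30.0:
    T += dt * (-k * (T - T_env))   # the difference equation replacing the ODE
    t += dt

exact = T_env + (90.0 - T_env) * math.exp(-k * 30.0)
print(f"finite difference: {T:.2f} C,  exact: {exact:.2f} C")   # close agreement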

Some would argue that far from merely making complex problems tractable, in some passive role, computer models provide a qualitatively different approach to that possible prior to digital computing. Because computers act like experimental devices from which insights can be gleaned, they may actually inspire new approaches to the fundamental science, in a proactive manner, helping to reveal emergent patterns and behaviours in systems not obvious from the basic physics. This is not a new idea …

“Given this new mathematical medium wherein we may solve mathematical propositions which we could not resolve before, more complete physical theories may possibly be developed. The imagination of the physicist can work in a considerably broader framework to provide new and perhaps more valuable physical formulations.”  David Potter, “Computational Physics”, Wiley, 1973, page 3.

Setting aside colliding galaxies and other ‘pure science’ problems, the types of models I am concerned with here are, for the most part, ones that can ultimately impact human society. These are not confined to von Neumann’s preferred physical models.

An example from the world of genomics may help to illustrate just how broad the application of models is in today’s digital world. In looking at the relationship between adaptations in the genotype (e.g. mutations) and the phenotype (e.g. metabolic processes), the complexities are enormous, but once again computer models provide a way of exploring the possibilities and patterns that teach us something and help in directing new applications and research. A phrase used by one of the pioneers in this field, Andreas Wagner, is revealing …

“Computers are the microscopes of the 21st Century” 

BBC Radio 4, ‘Start The Week’, 1st December 2014.

For many of the complex real-world problems it is simply not practical, ethical or even possible to do controlled experiments, whether it is our electricity grid, the spread of disease, or the climate. We need to be able to conduct multiple ‘runs’ of a model to explore a range of things: its sensitivity to initial conditions; how good the model is at predicting macroscopic emergent properties (e.g. Earth’s averaged global temperature); the response of the system to changing external parameters (e.g. the cumulative level of CO2 in the atmosphere over time); etc.
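As a toy illustration of why multiple runs matter, consider the logistic map (a standard textbook example of chaotic behaviour, not a climate model): two runs that start almost identically soon diverge, which is precisely why ensembles of runs, rather than single trajectories, are used to explore sensitivity to initial conditions.

# Two 'runs' of the logistic map x -> r*x*(1-x), starting from states that
# differ only in the sixth decimal place.
def run(x0, r=3.9, steps=50):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

a = run(0.500000)
b = run(0.500001)             # a tiny perturbation of the initial state

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: run A = {a[step]:.4f}, run B = {b[step]:.4f}")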

Models are thereby not merely a ‘nice to have’, but an absolute necessity if we are to get a handle on these questions, to be able to understand these complex systems better and to explore a range of scenarios. This in turn is needed if we as a society are to be able to manage risks and explore options.

Of course, no model is ever a perfect representation of reality. I could repeat George Box’s famous aphorism that “All models are wrong but some are useful”, although coming as this did from the perspective of a statistician, and the application of simple models, this may not be so useful when thinking about modern computer models of complex systems. May I suggest a different (but much less catchy) phrase:

“Models are often useful, sometimes indispensable and always work in progress”

One of the earliest scientists to use computers for modelling was the American Cecil Leith, who during the war worked on models of thermonuclear devices and later worked on models for the weather and climate. In a wide-ranging 1997 interview covering his early work, he responded to a question about those ‘skeptics’ who were critical of the climate models:

“… my concern with these people is that they have not been helpful to us by saying what part of the model physics they think may be in error and why it should be changed. They just say, “We don’t believe it.” But that’s not very constructive. And so one has the feeling they don’t believe it for other reasons of more political nature rather than scientific.” 

When the early modellers started to confront difficult issues such as turbulence, did they throw their hands up and say “oh, it’s too hard, let’s give up”? No: with the help of new ideas and methods, such as those originating from the Russian mathematicians Kolmogorov and Obukhov, progress was made.

The cyclical nature of these improvements comes from a combination of improvements in methods, new insights, improved observational data (including filling in gaps) and raw computing power.

A Model of Models might look something like this (taken from my whiteboard):

[Whiteboard sketch: a ‘Model of Models’]

In this modern, complex world we inhabit, models are not a nice-to-have but an absolute necessity if we are to embrace complexity, gain insights into these systems, and anticipate and respond to scenarios for the future.

We are not able to control many of the variables (sometimes only a very few), but we can see what the response is to changes in the variables we do have control over (e.g. the use of storage arrays to facilitate the transition to greater use of renewable energy). This in turn is needed if we as a society are to be able to manage risks and explore options, for both mitigation and adaptation in the case of global warming. The options we take need to be chosen through an inclusive dialogue, and for that we need the best information available to inform the conversation.

Some, like US Presidential candidate Ted Cruz, would prefer to shoot the messenger and shut down the conversation when they do not like what the science (including the models) is telling them (e.g. by closing down the hugely valuable climate research based at NASA).

While many will rightly argue that modelling is not the whole story, or even the main story, because the impacts of increased man-made CO2 are already evident in a wide range of observed changes (e.g. the large number of retreating glaciers), one is bound to ask: what is the alternative to doing these models, in all the diverse fields mentioned? Are we …

  • To wait for a new disease outbreak without any tools to understand strategies and options for disease control and to know in advance the best deployment of resources, and the likely change in the spread of disease when a virus mutates to an air-borne mode of infection?
  • To wait for a brown-out or worse because we do not understand the dynamical response of our complex grid of supply to large and fast changes in demand, or the complexities of an increasingly fragmented supply-side?
  • To wait for the impacts of climate change and make no assessment of when and how much to invest in new defences such as a new Thames Barrier for London, or do nothing to advise policy makers on the options for mitigation to reduce the impact of climate change?

Surely not.

Given the tools we have to hand, and the knowledge and methods we have accumulated over decades, it would be grossly irresponsible for us as a society not to undertake modelling of these kinds; not to be put off by the technical challenges faced in doing so; and certainly not to be put off by those naysayers who don’t ‘believe’ but never contribute positively to the endeavour.

We would live in a more uncertain world, prone to many more surprises, if we failed to model the human and natural systems on which we rely and our future depends. We would also fail to effectively exploit new possibilities if we were unable to explore these in advance (e.g. the positive outcomes possible from a transition to a decarbonised world).

Let’s be in praise of computer models, and be thankful that some at least – largely unseen and mostly unthanked – are providing the tools to help make sense of the future.

Richard Erskine, 24th May 2015


Filed under Computer Science, Essay, Models, Science

Becoming Digital

It is received wisdom that the world has become digital.

Leaving aside that I now qualify for concessions at some venues, is this true? Is it really an age thing, such that only the young will truly ‘be’ digital? Why do we still, in many homes, live in some mix of analogue and digital existence? Have we really become digital or are we only part way through a long transition? What, if anything, is holding us back? (I will leave for another essay the issue of what is happening in the workplace or enterprise: in this essay I am only concerned with what impinges on home life.)

It is certainly the case that Nicholas Negroponte’s vision of the future, “Being Digital”, published in 1995 when he was Director of the MIT Media Lab (where he remains as Chairman, no doubt with colleagues predicting new futures), provided an excellent checklist for inventions and innovations in the digital arena, and for what he characterized as the transition from atoms (e.g. books) to bits (e.g. e-books) as the irreducible medium for the transmission of information and entertainment. [In the following I will insert the occasional quote from the book.]

Smart TVs

When walking through Heathrow Airport recently I saw a toddler in arms, and as they passed a display screen a little hand reached out pointing at the screen, and tried to swipe it! It amazed me.

“… personal computers almost never have your finger meet the display, which is quite startling when you consider that the human finger is a pointing device you do not have to pick up, and we have ten of them.” (p. 132)

Clearly the touch-screen generation is emerging (although the child was disappointed to discover it failed to respond … it was just a TV monitor!). [The quotation above is similar to the Steve Jobs one included in Walter Isaacson’s biography of him (p. 309), “God gave us ten styluses”, which he uttered in relation to the stylus-bearing Apple Newton on his return to the firm in 1997. But of course Jobs had been dreaming of touch-screen products for many years, and it is incredible that the first iPad was released only 5 years ago, and the iPhone just three years earlier than that.]

Negroponte predicted the convergence of the PC and the TV, but why has it taken me until the closing days of 2014 to acquire a “Smart TV”? It is a complex matter.

One thing is that I like to get full value from the stuff I buy, and the 7-year-old Sony workstation and Bravia monitor (with its inbuilt tuner) meant we could view terrestrial TV and internet catch-up services like BBC iPlayer from the same set of kit, while also using it as a media station for photos and music, with some nice Bose speakers attached. But this is a setup that comes at a price, in ways that are more than simply financial.

The cost of setting up a fully fledged PC (which is mostly intended for entertainment) is high, whereas the Smart TV encapsulates the required processing power for you at a fraction of the cost. Why do we need a geek to watch a film? No reason at all. It really should be plug and view. And this also avoids all those irritating invitations to upgrade this or that software, to rerun virus checks, and to battle with bloat-ware like Internet Explorer. Not to mention that when we picked it up I could literally lift the Smart TV box with my little finger. This is therefore not only about the TCO (Total Cost of Ownership) but also the TIO (Total Irritation of Ownership). [The old Sony PC setup lives on in my new study, where I will now use its power to greater effect, spending more time curating my vast photo collection and writing blogs like this.]

Sometimes the market is not quite ready for an idea, and it takes time to educate people about the options. The convergence of so many elements, including internet services, Full HD, large LED screens, and much more, coupled with people’s poor experiences of high TCO and TIO, means that they, like me, are ready to make the move when thinking about a new “TV”. In my case it was triggered by the thought of moving the media station to my new office, and thinking, “Do I REALLY want another PC to drive a TV monitor?”

eBooks

On a recent long trip, my wife and I succumbed to getting a Kindle, allowing us to take a small library of novels with us for the journey and avoid falling foul of weight limitations on our flights. The Kindle is great technology because it does neither more nor less than one needs, which is a high contrast means of reading text, optimized for easy-on-the-eye reading as close as possible to what we know and love in a physical book. Power consumption is low, so battery life is long, because it does not try to do too much.

Does this stop us buying books? Well no. Even novels are still acquired in physical form sometimes because I suppose we are of an age where we still like the feel of a book in our hands.

But there are other reasons why, even were we to wean ourselves off the physical novel, with its exclusively textual content, other books would not be so easily rendered in compelling electronic form, due to their richer content.

Quite often the digital forms of richer-media books are poorly produced, often being merely flat PDF renditions of the original. One of the books we downloaded for our trip was a Rough Guide to Australia, and frankly it was a poor experience on a Kindle and no substitute for the physical product.

It recalls for me the inertia and lack of imagination of the music companies, which failed to see the potential in digital forms, seeing only threats rather than possibilities, and were then overhauled by file-sharing and ultimately by ‘products’ such as iTunes and Spotify. In a sense the problem, and the missed possibilities, are greater for books, because it should not have taken much imagination to see where publishers could have provided different forms of ‘added value’, and so transformed their role in a new digital landscape.

For example, when a real effort is made to truly exploit the possibilities of the digital medium – its interactivity, visual effects, hyperlinking, etc. – then a compelling product is possible that goes far beyond mere text and static visuals. Richard Dawkins’ “The Magic of Reality” for the iPad is an electronic marvel (a book made App), including the artistry of Dave McKean.

It brings the content to life, with wonderful text, touch-screen navigation, graphics and interactive elements. It clearly required a major investment in terms of the art and graphical work to render the great biologist’s ideas and vision into this new form. It could never have been achieved on a Kindle. It was able to shine on an iPad.

This is the kind of digital book that really does exploit the possibilities of the medium, and it would be the future of electronic books if more publishers had the imagination to exploit the platform in this way.

The concept of personalization is also a great idea that only the digital world can bring to reality. This is already happening in news, to a greater or lesser extent, as Negroponte predicted:

“There is another way to look at a newspaper, and that is as an interface to news. Instead of reading what other people think is news and what other people justify as worthy of the space it takes, being digital will change the economic model of news selections, make your interests play a bigger role, and, in fact, use pieces from the cutting-room floor that did not make the cut from popular demand.” (p. 153)

However, the physical forms live on, or take a long time to die.

I used to buy The Independent, but now, for reasons partly concerned with content but also with the user experience, I have moved to The Guardian tablet product. But we still get a physical ‘paper’ on Sunday, because it is somehow part of the whole sprawling business of boiled eggs, toast and coffee: an indulgence, like superior marmalade.

Some physical forms will remain for more persistent reasons.

When we recently went to an exhibition of Henri Cartier-Bresson’s photography in Paris, we came away with the coffee-table-sized book of the exhibition, measuring 30cm x 25cm x 5cm. This book is an event in itself, to be handled and experienced at the coffee table, not peered at through some cold screen.

And what of my old copy of P.A.M. Dirac’s “The Principles of Quantum Mechanics” in its wonderfully produced Oxford Monograph form, where even the paper has a unique oily smell? I received this for my 21st birthday from my mother in 1974, and it is inscribed by her. For me, it is irreplaceable, in whatever form you might offer it to me.

We will never completely sideline physical/analogue products, but for books at least we may see them being pushed towards two extremes: on the one hand, the low-cost pulp-fiction print-on-demand product; on the other, the high-impact, high-cost product like the coffee-table art book.

Our senses of sight, smell, taste, touch and hearing are analogue, and so we have an innate bias towards analogue. That is why the iPad is so much more natural for a child learning to interact with technology than a traditional PC.

The producers of digital products must work hard to really overcome the hurdles that digital production often faces to match the intimacy and warmth of the physical, analogue forms. But when they do, they can create stunning results.

We are for sure ‘Becoming Digital’, but the journey is far from over and there is still much to learn to make the transition complete, whatever that might mean.

eAlbums

I remember, 10 years ago, making the shift to digital photography. The trigger this time was a big birthday/holiday for my wife, and the thought of upgrading my camera from Canon SLR to Canon SLR, but now a metal-bodied, just-about-affordable digital one. I had flirted with digital but it had been very expensive to get even close to the quality of analogue (film). But in 2005 I found that the technology had crossed that threshold of affordable quality.

By the time I made the transition to digital I had long ago lost the time and appetite for the darkroom, and the Gamer enlarger has been in the loft now for more than 20 years. But even without substituting Photoshop for the chemicals, there is a lot to think about in the move to digital:

How will one organize and index one’s photos?

And, the big question for me: how will one avoid simply swapping the large box of photos and negatives that never quite found the time to be curated and nurtured into Albums for a ‘digital’ box of JPEG and RAW files that never quite get around to being curated and nurtured into Albums?

When my wife and I returned from the holiday of a lifetime in Tanzania, I had some 3000 shots (high-resolution JPEGs), including a sequence of a leopard on the bough of a tree which I had waited an hour to take: the few fleeting moments as it rose from its slumbers, stretched and then disappeared into the grasses of the Serengeti.

How could I turn this huge collection into a Christmas present for my wife worthy of the experience?

  • Well, I first decided on how to thematically organize the album … Our Journey, Enter the Big 5, The Cats, …
  • Then I sifted and selected the photos I would include in the album, before printing these 150 photos that survived the cull in different sizes and aspect ratios.
  • These were then pasted into the album, leaving plenty of space for narrative, commentary, and the odd witty aside.
  • This whole process took 3 whole days. A work of love and a little art I like to think.

Could that really be done purely digitally?

Now I know and can concede that much of this analogue work could now be done using some on-line ‘create your album’ service (of which there are many), even perhaps creating a look and feel that tries to emulate the warmth and intimacy of the old fashioned album.

There is a ‘but’,  because even if we have digitized the process to that extent, people still want the physical album as the end product sent to their home.

Why?

Surely we could have a digital display cycling through the album and avoiding the need for a physical artefact entirely? Why do we, in Negroponte’s language, need atoms when we can have bits instead?

Part of this is cultural. Handing around an album at Christmas rather than clicking a display on the device on the wall is still something that commands greater intimacy. But even supposing we clear that cultural and emotional hurdle there remains another more fundamental one.

Will this iconic album be around in 50 or 100 years’ time, like the albums we see from our grandparents, cared for as a family heirloom? While many people now – knowingly or otherwise – are storing their digital photos in the cloud, and this number is growing exponentially, how many would trust cloud services to act as custodians of the family’s (digital) heirlooms?

I would suggest that few would today. So, what needs to happen next to take us from ‘Becoming Digital’ to fully digital, at least when it comes to our family photos and videos?

Google and Facebook are not the answer, as they do not understand the fundamental need: that there may actually be stuff I do not want shared by default with every person I come into contact with. I am obliged to understand increasingly complex and poorly thought-out access controls to ensure confidentiality, and if I slip up, it is my fault.

I am prepared to pay for professional services that respect my need for confidentiality and copyright by default, where sharing is controlled precisely, only when and with whom I want, through choices simply and clearly made.

Clearly the Googles and Facebooks of this world do not offer a philosophy or business model to provide such a platform, because we have entered into a pact with these social media giants: we get to use their platforms for free if and only if we are prepared to share intimate details of our lives, so that we can be monetized through a network of associated Apps and services that make recommendations to us. They are marketing engines offering to be our friend!

That is the choice we are forced to make if we want to stay in touch with our distant family networks.

So what is the alternative?

Well, we need a whole lot of stuff that goes beyond devices and platforms, and is nothing like social media. Imagine that the National Archives in a country like the UK joined forces with a respected audit firm (like PwC) and legal firm (like Linklaters) to institute a kind of ‘digital repository accreditation and archiving service’ that acted in support of commercial providers, and was funded by its own statutory services.

The goal would be to set legally enshrined standards for the accreditation, auditing and care of digital artefacts in all forms, in perpetuity, acting as the trusted archive of last resort. Added-value services could be developed (rich descriptive meta-data, collection management, and so on) to enable commercial providers to create a market that goes far beyond mere storage, but is not dependent on the long-term viability of any single commercial entity.

This combined enterprise would provide that extra level of confidence customers fundamentally need.

Now that would be interesting!

As this example illustrates, the process of ‘Becoming Digital’ is so much more than the latest device, or App, or other gizmo, or even content production process (as we saw with eBooks).

It requires something that satisfies those less easy to define emotional, cultural and legal requirements, something that would make it truly possible for my grandchildren to enjoy visual and textual heirlooms in a purely digital form, secure and confidential, in perpetuity.

Conclusion

“Being Digital” was a seminal and visionary book, and it is no wonder that the incomparable Douglas Adams, in his review comments included on the cover, said:

“… Nicholas Negroponte writes about the future with the authority of someone who has spent a great deal of time there.”

Now that we are nearing 20 years into that future, it is interesting to see how things are playing out, how much of his vision has come to pass, and how much more there is to do.

What is most evident to me, from a personal perspective at least, is that ‘Becoming Digital’ in all its forms is a rocky and personal path, with lots of hurdles and distractions on the way, and an awkward marriage between analogue and digital, between atoms and bits, that looks set to continue for a long while yet … at least in this household.

(c) Richard W. Erskine, 2014.


Filed under Digital Media, eBooks, Essay, Photography, TV