The Uneven Road to EV Adoption in the UK

The UK has a legally binding net zero target for 2050, and with transport the largest-emitting sector, accelerating electric vehicle (EV) uptake is fundamental to reaching that target. Encouragingly, the adoption of new technologies is rarely linear: it has been shown to follow an S-curve as uptake moves from the tech-savvy to the mainstream. As part of the Park and Charge Oxfordshire research project, we undertook S-curve analysis to understand potential adoption trajectories (see Enabling the Acceleration of Electric Vehicle Adoption, Policy Brief 1, February 2022).

After a long hiatus from EV uptake analysis, we recently re-ran the S-curve analysis (first published in December 2021). Unsurprisingly, the data formats and granularity have changed since then. Although this meant a complete rejig of the code, the finer geographical granularity across the datasets now allows more regional insight than before. The primary data file, VEH0105.ods from the UK Government's Department for Transport, gives the cumulative number of electric vehicles and of all vehicles by county from 2010 through Quarter 4 of 2023; it was processed using Python.
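For readers interested in reproducing this kind of fit, the sketch below shows one way it could be done in Python with pandas and SciPy. It assumes the VEH0105 data has already been reshaped into a simple table of EV fleet share by area and quarter; the file name and column names are illustrative rather than the actual DfT layout, and the logistic form is just one common choice of S-curve, not necessarily the exact model used in the project.

```python
# A minimal S-curve fitting sketch (assumes a pre-processed input, not the raw VEH0105.ods layout)
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def logistic(t, k, t0, L=100.0):
    """Logistic S-curve: EV share of the fleet (%) at time t, saturating at L."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical tidy file: one row per area per quarter, with the quarter expressed as a decimal year
df = pd.read_csv("ev_share_by_area.csv")  # columns: area, year, ev_share_pct

fits = {}
for area, grp in df.groupby("area"):
    t = grp["year"].to_numpy()
    y = grp["ev_share_pct"].to_numpy()
    # Fit the growth rate k and midpoint year t0; the ceiling L stays at 100% of the fleet
    (k, t0), _ = curve_fit(logistic, t, y, p0=[0.5, 2030.0], maxfev=10_000)
    fits[area] = (k, t0)
```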

The figure below shows the cumulative EV uptake forecasts for England, Scotland and Wales. The forecasts for Scotland and England are very similar, with each fleet on track to be close to 100% electric by 2045. Wales, however, is lagging by almost eight years.
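Continuing the fitting sketch above, the fitted parameters can be inverted to estimate the year in which each area's curve crosses a chosen share, here 99% as a stand-in for "close to 100% electric". Again, this follows the simple logistic form assumed earlier rather than the project's exact model.

```python
# Invert the logistic fitted above to estimate the year a given EV share is reached
import numpy as np

def year_at_share(k, t0, share_pct=99.0, L=100.0):
    """Solve L / (1 + exp(-k * (t - t0))) = share_pct for t."""
    return t0 - np.log(L / share_pct - 1.0) / k

for area, (k, t0) in fits.items():
    print(f"{area}: ~{year_at_share(k, t0):.0f} to reach a 99% EV share")
```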

As our initial interest was in Oxfordshire, we also plotted the forecasts for England and Oxfordshire, which are strikingly similar. We then took advantage of the finer geographic granularity now available to look within Oxfordshire, plotting the five district councils (Oxford City Council, Cherwell, South Oxfordshire, Vale of White Horse, and West Oxfordshire). Although its initial uptake was slower, Cherwell has picked up the pace in the last five years and is on course to reach an all-EV fleet sooner than the rest of the districts in the county.

On the other hand, this analysis tells us little about why uptake is faster or slower in different places. We know from other research that commercial fleets and company car schemes are dominating the new EV market in the UK. Cherwell has the highest economic activity rate and the second highest number of jobs in Oxfordshire after Oxford City, where major employers like the University tend to promote non-car commuting. Could this be why it also has a higher EV adoption rate?

The S-curve analysis also does not account for external events or national policy changes until any resulting shifts in adoption rates appear in the historic data. As can be seen below, when we first started this research in 2019, the projected time to full adoption was much longer. Then the pandemic and energy crises hit, and the UK government set a 2030 target to end sales of new internal combustion engine (ICE) cars and vans. There has also been substantial investment from the Government and industry in public and private EV charging. Has this pulled the S-curves forward in time, as well as making the exponential growth phase steeper? And what impact have the subsequent push-back of targets and mixed messages had since late 2023? We don't yet have the data to see any more recent changes in our S-curve analysis.

With the political landscape in the UK altering overnight and the 2030 phase-out date for ICE vehicles potentially back in the frame, the mixed messages may have little to no impact. However, with car dealers reporting that EV sales are still below the Zero Emission Vehicle (ZEV) mandate target, we think our S-curve analysis could help the government in Westminster, local authorities, and industry see where there is still work to do to get adoption on track across the country. Have a look at our online tool for your area of interest and see if you agree!

What do you think? We would welcome questions and comments on the applications of S-curve analysis to EV adoption in the UK, on the influence of different geodemographics or policy change, and thoughts about avenues for further research. Please respond to this post on LinkedIn.

Big Data Busting

You’ve heard the term before. Maybe from me. Big Data. It’s a catchphrase of our time. But have you ever asked what it means? Google’s search engine defines it as a noun referring to “extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions.” And Google should know, right? It’s their day job.

But I had another definition proposed to me at a workshop on the topic last week. Roger Downing of the Hartree Centre in Warrington, part of the Science and Technology Facilities Council, described big data as datasets that are "uncomfortably large to deal with on a single machine". That's one of the reasons the Hartree Centre exists, and why a group of other PhD students and I were being treated to a workshop on big data there – they have plenty of machines to deal with the datasets comfortably. But over the course of the week, I began to wonder whether big data was not just about the size of the datasets, but also about the data analysis decisions that may be uncomfortable for individual humans to deal with.

Certainly the volume of data and the speed with which it's generated are staggering for humans or machines. Even though it all has to be translated at some point into a plethora of ones and zeros, the datasets themselves are made up of numbers, measurements, text, images, audio and visual recordings, shapefiles and mixed formats, collected and stored using a wide variety of software and programming languages. The datasets come from sources around the world and are produced by scientists, machines, transactions, interactions and ordinary people. It is therefore no surprise that some of the data is meticulous, some is missing and some is mendacious.

And all of it only has value if it can be analysed in a way that helps people in society make better decisions more efficiently and achieve their goals, whether those are health and well-being or the bottom line. So if the analysis is uncomfortable for a single machine, big data analytics requires tools that enable 'cluster computing': processing in parallel, with allowances for 'fault tolerance' through duplication of original and intermediate datasets so that information is not corrupted or lost during processing. Such tools are designed and judged on speed, efficiency, ease of use, compatibility and unity, i.e. the more data types a tool can handle, the more programming languages it can interact with, and the greater the variety of output it can produce within a unified framework, the better.
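To make the cluster-computing idea a little more concrete, the sketch below uses Apache Spark, one widely used framework of this kind (the workshop definition does not prescribe a specific tool, and the input path and column names here are hypothetical). Work is split into partitions processed in parallel across the cluster, and lost partitions can be rebuilt if a node fails.

```python
# A minimal cluster-computing sketch using Apache Spark (one example framework;
# the input path and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

# The input files are split into partitions and processed in parallel across the cluster;
# if a node fails, Spark can rebuild lost partitions from their lineage.
events = spark.read.csv("hdfs:///data/events/*.csv", header=True, inferSchema=True)

daily_counts = (
    events.groupBy("event_date", "event_type")
          .agg(F.count("*").alias("n_events"))
)

daily_counts.write.mode("overwrite").parquet("hdfs:///data/summaries/daily_counts")
spark.stop()
```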

Of course, such tools must be used by well-trained data scientists, because the value of any analysis depends on asking the right questions. Those right questions are most likely to be asked if data scientists have not only statistical and computer science skills, but also expertise in their area of study and a combination of creativity and curiosity that seeks new paths for research. That, again, is why we were there: it is felt in some circles that it may be easier to offer training in statistics and computer programming to those working and researching within specialist areas than to train statisticians and computer scientists in all the disciplines they may encounter in their work with big data. Furthermore, patterns and predictions coming out of big data analysis are not helpful if the data has not first been cleaned and checked for accuracy, consistency and completeness, a much easier task with specialist knowledge at your disposal. Machines cannot learn if they are not trained on structured and validated data. And people cannot trust the output without control over the input and an understanding of how data was transformed into information.
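As a small illustration of those three checks, a cleaning pass in Python with pandas might look like the following; the file and column names are invented for illustration, and the thresholds are placeholders a specialist would set.

```python
# Basic completeness, consistency and accuracy checks on a hypothetical dataset
import pandas as pd

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# Completeness: share of missing values in each column
missing_share = df.isna().mean()

# Consistency: remove exact duplicate rows
df = df.drop_duplicates()

# Accuracy: flag physically implausible readings for manual review
implausible = df[(df["temperature_c"] < -50) | (df["temperature_c"] > 60)]

print(missing_share)
print(f"{len(implausible)} implausible readings flagged for review")
```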

And so there is the issue of comfort again. The technology now exists to store big datasets economically and to try to merge them, even if there is no certainty that added value will result. Machines analyse big data and offer potential audiences instead of actual ones, probabilities and levels of confidence instead of facts. Machine learning and cognitive computing use big data to create machine assistants that enhance and accelerate human expertise, rather than machine workers that undertake mundane tasks for humans. Thus we enter a brave new world. But I still can't say I'm entirely comfortable.