The Uneven Road to EV Adoption in the UK

The UK has a legally binding target of net zero by 2050, and with transport the largest-emitting sector, accelerating electric vehicle (EV) uptake is fundamental to reaching that target. Luckily, the adoption of new technologies is rarely linear; instead it has been shown to follow an S-curve as uptake moves from the tech-savvy to the mainstream. As part of the Park and Charge Oxfordshire research project, we undertook S-curve analysis to understand the potential trajectories (see Enabling the Acceleration of Electric Vehicle Adoption, Policy Brief 1, February 2022).

After a long hiatus from EV uptake analysis, we recently ventured to re-run the S-curve analysis (initially published in December 2021). Unsurprisingly, the data formats and granularity have changed since then. Although this meant an entire rejig of the code, the higher geographical granularity across the datasets meant more regional insights than before. The primary data file contains the cumulative number of electric vehicles and of all vehicles by county from 2010 through Quarter 4 2023, published by the UK Government’s Department for Transport (VEH0105.ods). It was processed using Python.
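
For those curious about the mechanics, below is a minimal sketch of the kind of processing involved: reading the DfT licensing data and fitting a logistic (S-curve) model to the cumulative EV share for one area. The sheet and column names are illustrative assumptions, not the actual layout of VEH0105.ods, which needs a fair amount of reshaping first.

```python
# Minimal sketch: fit a logistic S-curve to the cumulative EV share for one area.
# Sheet and column names are illustrative assumptions, not the real VEH0105 layout.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Classic S-curve: L = saturation level, k = growth rate, t0 = midpoint year."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# VEH0105.ods is an OpenDocument spreadsheet; pandas can read it via the odfpy engine.
df = pd.read_excel("VEH0105.ods", engine="odf", sheet_name="VEH0105")  # assumed sheet name

# Assume a tidy layout: area name, year as a decimal (e.g. 2023.75 for Q4), EVs, all vehicles.
area = df[df["area"] == "England"].sort_values("year")
t = area["year"].to_numpy(dtype=float)
share = (area["evs"] / area["all_vehicles"]).to_numpy(dtype=float)

# Fit with sensible starting values: saturation near 100% and a midpoint near the data's end.
params, _ = curve_fit(logistic, t, share, p0=[1.0, 0.5, t.mean()], maxfev=10000)
L, k, t0 = params
print(f"Saturation = {L:.2f}, growth rate = {k:.2f}/yr, midpoint year = {t0:.1f}")
```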

The figure below shows the cumulative EV uptake forecasts for England, Scotland and Wales. As can be seen, the EV growth forecasts for Scotland and England are quite similar, with the vehicle fleet on track to be near-enough 100% electric by 2045. However, Wales is lagging by almost eight years.
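
The “near-enough 100% by 2045” reading comes straight off the fitted curve: once the logistic parameters are estimated, the function can be inverted to find when the share crosses any chosen threshold. A short sketch, reusing the parameters fitted in the snippet above:

```python
# Invert the fitted logistic to find when the EV share reaches a chosen threshold.
# Reuses L (saturation), k (growth rate) and t0 (midpoint) from the fit above.
import numpy as np

def year_at_share(target, L, k, t0):
    """Solve L / (1 + exp(-k * (t - t0))) = target for t."""
    return t0 - np.log(L / target - 1.0) / k

# Year at which the fleet reaches 99% of the fitted saturation level.
print("99% of saturation reached around:", round(year_at_share(0.99 * L, L, k, t0), 1))
```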

As our initial interest was in Oxfordshire, we plotted the forecasts for England and Oxfordshire, which are also remarkably similar. We then took advantage of the higher geographic granularity now available to look further within Oxfordshire, plotting the five district councils (Oxford City Council, Cherwell, South Oxfordshire, Vale of White Horse, and West Oxfordshire). Though its initial uptake was slower, Cherwell has picked up the pace in the last five years and is on course to reach all-EV status sooner than the rest of the districts in the county.

On the other hand, this analysis tells us little about why uptake is faster or slower in different places. We know from other research that commercial fleets and company car schemes dominate the new EV market in the UK. Cherwell has the highest economic activity rate and the second-highest number of jobs in Oxfordshire after Oxford City, where major employers like the University tend to promote non-car commuting. Could this be why Cherwell also has a higher EV adoption rate?

The S-curve analysis also does not account for external events or national policy changes – until any changes in adoption rates enter the historic data. As can be seen below, when we first started this research in 2019, the time until full adoption was much longer. Then the pandemic and energy crises hit, and the UK government set a 2030 target to end sales of internal combustion engine (ICE) cars and vans. There has also been substantial investment from the Government and industry in public and private EV charging. Has this pulled the S-curves forward in time, as well as making the exponential growth phase steeper? And what impact have the subsequent push-back of targets and mixed messages had since late 2023? We don’t yet have the data to see any more recent changes in our S-curve analysis.

With the political landscape in the UK altering overnight and the 2030 phase-out date for ICE vehicles potentially back in the frame, the mixed messages may have little to no lasting impact. However, with car dealers reporting that EV sales are still below the Zero Emission Vehicle (ZEV) mandate target, we think our S-curve analysis could help the government in Westminster, local authorities, and industry see where there is still work to do to get adoption on track across the country. Have a look at our online tool for your area of interest and see if you agree!

What do you think? We would welcome questions and comments on the applications of S-curve analysis to EV adoption in the UK, on the influence of different geodemographics or policy change, and thoughts about avenues for further research. Please respond to this post on LinkedIn.

Big Data Busting

You’ve heard the term before. Maybe from me. Big Data. It’s a catchphrase of our time. But have you ever asked what it means? Google’s search engine defines it as a noun referring to “extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions.” And Google should know, right? It’s their day job.

But I had another definition proposed to me at a workshop on the topic last week. Roger Downing of the Hartree Centre in Warrington, part of the Science and Technology Facilities Council, described big data as datasets that were “uncomfortably large to deal with on a single machine”. That’s one of the reasons why the Hartree Centre exists and why I and a group of other PhD students were being treated to a workshop on big data there – they have plenty of machines to deal with the datasets comfortably. But over the course of the week, I began to wonder whether big data was not just about the size of the datasets, but also about the data analysis decisions that may be uncomfortable for individual humans to deal with.

Certainly the volume of data and the speed with which it’s generated are staggering for humans or machines. Even though it has to be translated at some point into a plethora of ones and zeros, the datasets themselves are made up of numbers, measurements, text, images, audio and visual recordings, shape files and mixed formats, collected and stored using a variety of computer systems and programming languages. The datasets come from sources around the world and are produced by scientists, machines, transactions, interactions and ordinary people. Therefore, it is no surprise that some of the data is meticulous, some is missing and some is mendacious.

And all of it only has value if it can be analysed in a way that helps people in society make better decisions more efficiently and achieve their goals, whether those goals be health and well-being or the bottom line. So if the analysis is uncomfortable for a single machine, then big data analytics requires tools that enable ‘cluster computing’, with processing in parallel and allowances for ‘fault tolerance’ – duplication of original and subsequent datasets so that information is not corrupted or lost during processing. The performance of such tools is designed and judged for speed, efficiency, ease of use, compatibility and unity: the more data types a tool can handle, programming languages it can interact with, and varieties of output it can produce within a unified framework, the better.
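
To make ‘cluster computing’ a little more concrete, here is a minimal sketch using Apache Spark’s Python API – my own choice of example rather than a tool endorsed at the workshop. Spark splits a dataset into partitions spread across machines, processes them in parallel, and records enough lineage to recompute lost partitions rather than corrupting the result; the file path and column names below are placeholders.

```python
# Minimal cluster-computing sketch with Apache Spark (PySpark).
# The path and column names are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

# Spark reads the file as partitions distributed across the cluster's workers.
readings = spark.read.csv("hdfs:///data/sensor_readings.csv", header=True, inferSchema=True)

# The aggregation runs in parallel on each partition and is combined at the end;
# if a worker fails, Spark recomputes its partitions from the recorded lineage.
daily_means = (
    readings
    .groupBy("sensor_id", "date")
    .agg(F.avg("value").alias("mean_value"))
)

daily_means.show(10)
spark.stop()
```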

Of course such tools must be used by well-trained data scientists, because the analysis of data and its value depend upon asking the right questions. Those right questions are most likely to be asked if data scientists not only have statistical and computer science skills, but also expertise in their area of study and a combination of creativity and curiosity that seeks new paths for research. That, again, is why we were there: it is felt in some circles that it may be easier to offer training in statistics and computer programming to those working and researching within specialist areas than to train statisticians and computer scientists in all the disciplines they may encounter in their work with big data. Furthermore, the patterns and predictions coming out of big data analysis are not helpful if the data has not first been cleaned and checked for accuracy, consistency and completeness – a much easier task with specialist knowledge at your disposal. Machines cannot learn if they are not trained on structured and then validated data. And people cannot trust the output without control over the input and an understanding of how data was transformed into information.
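
As a small illustration of what those accuracy, consistency and completeness checks might look like in practice, here is a sketch in pandas; the file name, columns and plausible value range are invented for the example.

```python
# Illustrative data-cleaning checks: completeness, consistency and plausibility.
# The file, columns and valid range are invented for this example.
import pandas as pd

df = pd.read_csv("observations.csv")

# Completeness: what share of each column is missing?
missing_share = df.isna().mean().sort_values(ascending=False)
print(missing_share)

# Consistency: drop exact duplicate records and flag unparseable timestamps.
df = df.drop_duplicates()
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")

# Accuracy/plausibility: keep only readings inside a physically sensible range.
cleaned = df[(df["value"] >= 0) & (df["value"] <= 1000)]
print(f"Kept {len(cleaned)} of {len(df)} rows after range checks")
```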

And so there is the issue of comfort again. The technology now exists to economically store big datasets and try to merge them even if there is no certainty that added value will result. Machines analyse big data and offer potential audiences instead of actual ones, probabilities and levels of confidence instead of facts. Machine learning and cognitive computing utilise big data to create machine assistants, enhancing and accelerating human expertise, rather than machine workers, undertaking mundane tasks for humans. Thus we enter a brave new world. But I still can’t say I’m entirely comfortable.

Data x3

Data, Data, Data. Does it have the same cachet as Location, Location, Location? Big data. Open data. Standardised data. Personal data. If it doesn’t yet, it soon will.

I attended the Transport Practitioners’ Meeting 2016 last week and the programme was full of presentations and workshops available to any delegate with an interest in data, including me. With multiple, parallel sessions, I could have filled my personal programme twice over.

Transport planning has always been rich in the production and use of data. The difference now is that data is producing itself, the ability for the transport sector to mine data collected for other purposes is growing, and the datasets themselves are multiplying. Transport planners are challenged to keep up, and to keep to their professional aims of using the data for the good of society.

The scale of this challenge is recognised by Research Councils and is probably why I won a studentship to undertake a PhD project that must use big data to assess environmental risk and resilience. Thus my particular interest in finding all the inspiration I could at the conference.

Talk after talk, including my own presentation on bike share, mentioned the trends in data that will guide transport planning delivery in the future, but more specific sources of data were also discussed.

Some were not so much new as newly accessible. In the UK, every vehicle must be registered to an owner and, once it is three years old, must pass an annual roadworthiness test, the MOT. A group of academics has been analysing this data for the government, in part to determine what benefits its use might bring. Our workshop discussion at the conference agreed that the possibilities are extensive.

Crowd-sourced data, on the other hand, could be called new: it is collected on social media platforms or by apps like Waze. Local people using local transport networks share views on the quality of operation, report potholes, raise issues, and follow operators’ social media accounts to get their personalised transport news. This data is the technological successor to anecdote; still qualitatively rich, but now quantitatively significant. It helps operators and highways authorities respond to customers more quickly. Can it also help transport professionals plan strategically for the future?

Another new source of data is records of ‘mobile phone events’ – data collected by mobile phone network operators that can be used to determine movement, speed, duration of stay and so on. There are still substantial flaws in translating this data for transport purposes, particularly the significant under-counting of short trips and the extent of verification required. However, accuracy will increase in time, and data from apps designed to track travel, such as Strava and Moves, can already be analysed with much greater confidence.

Even more reliable are the records now produced automatically by ticketing systems on public transport, sensors in roads and traffic signals, cameras, lasers, GPS trackers and more. Not only is transport at the forefront of machine learning; the ‘Internet of Things’ is also becoming embedded in its infrastructure. Will such data eventually replace traditional traffic counts and surveys, informing reliable models, accurate forecasts and appropriate interventions?

It is certainly possible that we will be able to plan for populations with population-sized, longitudinal data sources, rather than using sample surveys of a few hundred people or snapshots of a short period of ‘neutral’ time.

However…

Despite attempts to stop it (note the impossibility of ignoring Brexit in any field; its shadow hung over the conference proceedings), globalisation is here to stay and data operates in an international ecosystem. Thus, it cannot be used to its full potential without international regulations on sharing and privacy, and standards on format and availability.

Transport planners also need the passion and the skills to make data work for us. Substantial analysis of new datasets is required to identify their utility and possibilities, which calls not only for statistical and modelling training, but also for instruction in analytical methods. People with such skills are in limited supply, as are the time and money for both training and analysis of new datasets.

Therefore, perhaps the most important lesson is that sharing best practice and successful data projects at conferences like TPM2016 matters more than ever.