Japanese brand Nissan has been working on its self-driving car technology for several years, and much of that work has gone on right here in the UK. To find out the latest on when self-driving autonomous vehicles might finally hit UK roads, CarGurus visited Nissan at its European Technical Centre in Cranfield, Bedfordshire for a crash-course (well, hopefully not) in self-driving tech. While we were there, we even hitched a ride in a real-life self-driven Nissan Leaf electric car on UK public roads.
In this article:
- Self-driving cars and the UK
- What exactly is a self-driving car?
- Nissan’s road to self-driving tech
- What’s the tech that enables autonomous driving?
- The challenge of decision-making
- Taking a ride in a self-driving car
- More challenging driving situations
- So, where do we go from here?

Self-driving cars and the UK
It seems like ever since the very inception of the motor car, humanity has been imagining a time when cars would be able to drive themselves. We’ve been seeing self-driving cars portrayed in sci-fi for years, along with dinners in pill form and robot butlers.
Between them, car manufacturers, independent companies and Government agencies have spent many, many millions on trying to make self-driving cars a reality, and yet, although it feels like we’ve been on the cusp for several years now, we’re still not quite there yet, not in the UK at least.
So with that being said, just how far off are we from seeing self-driving cars on UK roads? Well, that largely depends on who you talk to. The Conservative Government, led by then-PM Rishi Sunak, passed the Automated Vehicles Act in 2024, which stated that self-driving vehicles could be on UK roads by 2026. Subsequently, the Labour administration, led by Keir Starmer, rowed back on that target, saying it was more likely to happen in the second half of 2027, only to then change its mind shortly afterwards and announce that pilots of self-driving taxi schemes would be brought forward to spring 2026.
All the while, several independent companies, such as ride-hailing apps like Uber among others, have stated that the tech is 100% ready to go in the UK, with robotaxis already in use in the USA, China, the UAE and Singapore.
What exactly is a self-driving car?
So what’s the hold-up in the UK? In part it could simply be that there are many conflicting definitions over what a self-driving car actually is. Is it a car that only ever drives itself, and thus has no need for a driver’s seat or steering wheel? Is it a car that’s owned by its occupant, or simply hailed like an Uber? Is it a car that owners buy and drive themselves, and is only called upon to drive itself when the owner has something better to do, like sleep, work, or simply play on their phone? Is it all of the above? Is it none of the above?
With so many questions kicking around in the ether, CarGurus decided to seek the views of a major car manufacturer with a vast amount of skin in the game when it comes to self-driving vehicles. So, off I toddled to Cranfield, to learn about the work that’s gone on there in the area of autonomous vehicles, find out where the firm is at with self-driving tech, and to even have a go in a self-driving Nissan Leaf prototype on public roads.
Let’s begin with Nissan's view on the above questions, and how the firm sees self-driving vehicles operating in a variety of different ways. Execs told us that they take the view that an autonomous vehicle should always have the ability to be driven - so a steering wheel will always be needed - but the car should have the ability to drive itself 100% of the time if needed, and be able to cope with any driving situation thrown at it.
That said, those same execs also told us that the firm was conducting research into a possible ‘autonomous mobility service’ - in other words, an app-based ride-hailing service, probably operated in conjunction with a partner company - in Japan and the US, as well as at Cranfield. In theory, though, if the self-driving tech is good enough and reliable enough, then a single self-driving model should suit both purposes. And it’s safe to say that Nissan has ploughed a lot of time and resource into developing one.
Nissan’s road to self-driving tech
Nissan has been working hard on its self-driving tech for the last eight years. Over three development schemes during that time, conducted alongside a variety of industry partners, the projects have collectively resulted in the impressive-sounding feat of covering more than 16,000 autonomously driven miles on UK roads, with no accidents.
The three development schemes mentioned have tackled the self-driving conundrum in different stages for different types of driving. The first - dubbed (somewhat ironically) HumanDrive - aimed to develop the tech for motorway driving: in other words, fast, straight roads with very few obstacles and no pedestrians. This ran from 2017 to 2020, involving the work of 135 people and costing around £13.5 million (which came partly from the UK Government and partly from the nine consortium partners), and the project culminated in a 230-mile self-navigated journey on UK motorways.

The second scheme - called ServCity - followed between 2020 and 2023, involving 116 people and costing £10.7 million. It tackled the trickier challenge of fast arterial roads, characterised by their medium size, sometimes winding nature, their fast and dense traffic flow, and more complex obstacles including pedestrians, parked cars and oncoming traffic. Around 1,600 test miles were completed in this scheme.
The final stage - named evolvAD - began in 2023 and has just come to a close in 2025. This was worked on by a team of 75 people and cost £3.8 million. It was designed to tackle urban residential roads, which have an even more challenging level of complexity, with the self-driving prototypes having to deal with narrow streets, slow and highly dense traffic, a high volume of pedestrians, speed bumps, and mini roundabouts, on top of everything else. This stage also dealt with the most complex type of road of all, the rural intercity route, which is often fast and winding, with uneven road surfaces, blind spots caused by roadside vegetation, blind corners, and fast-moving oncoming traffic. The self-driving Nissan Leaf prototypes at this stage of the research clocked up 2,000 autonomous test miles.

It was one of these evolvAD prototypes I had a self-driven ride in - more on what that was like later - and also the one we learned most about. And it’s interesting to note that although lessons learned from HumanDrive and ServCity informed the solutions for evolvAD, no single prototype exists that can replicate the achievements made by all three schemes.
What’s the tech that enables autonomous driving?
Next, let’s talk about the hardware fitted to the all-electric Leaf prototypes in order to enable autonomous driving. For evolvAD, the cars have four long-range Lidar sensors (if you’re not familiar with Lidar, then in simple terms, it’s a kind of more sophisticated version of radar, which uses laser light instead of radio waves to detect and locate objects), two wide-range Lidar sensors, one radar sensor, and 15 cameras, all built into a ‘halo’ mounted on the roof of the car.

The on-board sensors, cameras and Lidars aren’t the only data collection points for Nissan’s car, either. It can also receive information about its surroundings from a network of streetside cameras in its vicinity, where such cameras are fitted. For the evolvAD trial, these cameras were provided by another of the consortium partners, TRL, within the firm’s ‘Smart Mobility Living Lab’. This is an area of south east London furnished with a network of around 300 connected roadside cameras to deliver more than 200 monitored locations along a route of around 24km, making it essentially a live testbed on public roads.
With this additional infrastructure to support it, the car is able to ‘see’ much further up the road than its on-board sensors can, alerting it to potential upcoming obstacles, and helping it deal with closer ones by eliminating blind spots. For example, where the car might otherwise get stuck behind a delivery lorry parked at the roadside blocking the carriageway, the cameras allow it to see past the obstacle and decide on a safe way of proceeding.
And that’s where we come to perhaps the trickiest bit about autonomous driving: that decision on how to proceed.
The challenge of decision-making
Once all the data has been collected from all the various sources, it's processed by six system ECUs - which, although we weren’t allowed to photograph them, amount to a literal bootful of flashing lights and wires - to build a detailed 3D digital model of the car’s surroundings, and the driving situation at hand. And at that point the car has to make a decision - quickly and reliably - on what course of action to take.
Now, even for computing power as advanced as this, that’s difficult enough when the traffic and other drivers are all behaving as they should, and that’s before you factor in the possibility of other drivers doing things they shouldn’t (more on this in a while). As an everyday example, take the approach to a traffic pinch point. To decide whether to go or to yield, the car has to assess its own approach distance and speed, and the approach distance and speed of any oncoming cars, and the position and potential effect of any parked cars in the vicinity, while also planning a route through the entire manoeuvre and assessing whether it can be carried out in a safe and natural way, all while anticipating the potential of pedestrians coming into play, and predicting the behaviour of other drivers. Phew.

And talking of predicting the behaviour of other drivers, there are a couple of interesting things about the way in which the prototype does that. Take our previous pinch point example: if an oncoming car gives a flash of its headlights indicating that it’s going to yield, Nissan’s car won’t acknowledge it. Nissan says that’s because in other parts of the world, a flash of the headlights means the exact opposite: in other words, ‘I’m coming through’. And we can totally see the logic there.
However, neither does the prototype recognise indicator signals. At all. Nissan’s reasoning is that these are human-controlled, and as such fallible, and so they’re completely ignored. Instead, the car senses the road positioning and attitude of surrounding cars, and predicts their intended course that way.
Hmm. Now, I don’t want to tell Nissan’s engineers how to do their jobs, but personally I’m not so sure about the logic of this approach. Admittedly, I’ve seen many an erroneous signal flicked in my time, and I fully accept that when another car’s road positioning doesn’t quite match what its indicators are telling you, it’s sometimes safest to work on the basis of the former rather than the latter, but importantly, to be prepared for anything. The fact is, though, that while a car’s indicators are human-operated, so is its road positioning (well, until humans are banished from the roads entirely, that is), and for every erroneous signal I’ve witnessed, I’ve also witnessed a driver whose poor spatial awareness leads to poor - and thus misleading - road positioning.
Neither approach is infallible, then, so why not use a hybrid of both? After all, sometimes - just sometimes - people do use their indicators correctly. Yes, I can understand the merit of not being absolutely governed by the indicator signals of others, but I reckon you can still be guided by them without ignoring them completely.

Anyway, regardless of the methods used to predict the behaviour of other drivers, Nissan's goal with the behaviour of its self-driving prototypes is to make them as natural and as human-like as possible. To do this, AI machine learning has been used to recognise and replicate human driving behaviour.
So, has it worked? To find out, I hopped into the passenger seat of one of the prototypes and was taken for a short 15-minute ride around the Bedfordshire roads surrounding Nissan’s Cranfield facility.
Taking a ride in a self-driving car
Before you ask, ‘did it feel weird to be driven in a car with no driver?’, the answer is no. That’s probably because there was a driver, or rather, a person sitting in the driver’s seat, making things feel utterly familiar. Why? Because fully autonomous driving is not yet legal in the UK, so there always has to be a human on hand in these prototypes to take over the controls if necessary.
There was also another chap sitting in the back seat glued to a laptop connected to the car, making sure all the hardware and software was working as it should. The fact that it takes two people to drive a driverless car wasn’t lost on me, either.

So what did we observe? Well, for the most part, yes, our progress along the urban roads and countryside lanes of the route felt fairly natural. That said, there were a couple of distinct characteristics that I noticed about the car’s behaviour.
Firstly, when approaching a junction, the car was very slow and deliberate about the way it decelerated and stopped, and the same was true when it pulled away again. No problem there in our eyes: always better to err on the side of caution, after all. I did find it slightly irksome, though (as I suspect many other road-users would), that on every single roundabout we encountered, the car would come to a complete stop before entering, even if there was no traffic whatsoever and all sight lines were obviously clear.
By contrast, once a junction had been dealt with and the road ahead was clear, the car was surprisingly keen to get on the power, accelerating quite aggressively. Nissan told me it had been set up this way so that other drivers didn’t feel like this autonomous vehicle (if indeed they recognised it as such from the halo on top) was getting in their way and slowing them down, causing them to behave erratically. Again, that makes plenty of sense.

We also noticed, however, that when positioning itself on the road, the car seemed to hug the kerb on the left, rather than the white line on the right. Again, I can see why this would be the case from a safety perspective, and it didn’t cause any problem most of the time, but on a few roundabouts and junctions where there was a particularly wide entrance to the road we were turning onto (and as such, a large distance between the kerb and the white line), it resulted in the car taking a strange, wide line that most human drivers probably wouldn’t. And as explained earlier, when the car’s entire method of predicting the actions of other cars revolves around road positioning, it’s slightly ironic that its own road positioning occasionally comes across as quite unnatural.
More challenging driving situations
So, there were a few situations in which the car felt unnatural, but did it complete the journey totally unaided? Well, sadly, no. When it encountered a car pulled up onto the left-hand kerb of a narrow urban street, halfway onto the pavement, with its hazard lights on, the human back-up driver was required to intervene and steer around it, before re-engaging the autonomous drive straight afterwards (there were no streetside cameras in the area to help out).
There was another occasion a little later on where the car got slightly flummoxed: the backup driver didn’t feel the need to intervene on this occasion, but it was nevertheless an interesting situation to observe.
We were signalling right (yep, the car uses its own indicators, even though it doesn’t recognise the indicators of others) to turn off a major road onto a minor road, and as we approached the junction, another car was signalling to turn right out of the road we were trying to turn into. Our car waited patiently for the other car to complete its manoeuvre, leaving it lots of room to do so (incidentally, our car seemed to know from the other car’s road positioning that he was turning right, somewhat undermining my earlier argument!). The thing is, we technically had the right of way in that situation, which the other driver knew, and so he was waiting for us to go, while our car was waiting for him. And, when he flashed his lights for us to go first, there was obviously no reaction. After a few more agonising seconds of needless inactivity in this strange stand-off in the Bedfordshire countryside, the man driving the other car spun up his front wheels as he pulled away sharply from the junction, aiming an angry glare toward the nice chap in the driver’s seat of our car.

And this got us thinking about other instances of everyday human interaction that play a part in the business of driving that might cause a wrinkle in the progress of autonomous motoring. For example, the letter of the law says that as soon as someone sets foot on a zebra crossing, they have the right of way. But even so, you don’t usually step out until you’ve clocked the eyes of the approaching driver to make sure that they’ve spotted you, do you? If there’s no driver, you’ll instinctively wait until the approaching car has come to a complete stop before proceeding, which is just going to slow everybody down.
I can think of seemingly countless other examples of situations like this, but I guess that if this stuff was easy, we’d have cracked autonomous driving long ago.
So, where do we go from here?
Now, I’d like to say at this point that despite my apparent naysaying and critical observations throughout this piece, I’m certainly not against the principle of autonomous driving. Indeed, there was plenty about my experience in Nissan’s prototype that impressed me greatly: the fact that it completed around 90% of the journey with no problem or intervention shows how far the tech has come, and I found it really impressive that when the car was attempting to pull away from a junction it couldn’t quite see out of, it edged out gently, little by little, in much the same way a human would. Neat, that.
Based on what I’ve seen, however, there’s evidently still quite a lot more work to do before this tech is ready for the UK’s complex road network, a fact that Nissan itself freely admits. Assuming for a moment that all the prototype’s autonomous journeys reflect ours - and that’s quite an assumption, by the way - and 90% of driving situations can be dealt with without any driver intervention, then obviously, Nissan needs a solution to that remaining 10% before it can claim to have a fully self-driving car. After all, if all that’s needed to derail your autonomously driven journey is a poorly parked car on the kerb, then you can’t really claim to have cracked it, can you? And from the consumer’s perspective, if your self-driving car can’t get you home from a night at the pub without your help, then it’s not worth having, right?

From the outside looking in, the work still left to be done is probably largely just a case of polishing up that decision-making process we talked about, and introducing greater problem-solving capability. That said, if the roadside camera infrastructure is going to be genuinely effective in supporting the tech on a widespread basis, then either the camera hardware itself or the method of communication and data transfer needs to be standardised, in order to maintain the speed of the processing and the streaming.
Is Nissan any further ahead with this tech than any other car manufacturer? Well, that remains to be seen. Indeed, some claim to have already cracked it. Polestar, for example, proclaims that a Polestar 3 SUV equipped with the optional ‘Pilot Pack with Lidar’ (this pack isn’t yet available in the UK, but should be soon) has all the hardware needed to facilitate fully autonomous driving once it becomes legal, and once that happens, this capability has to merely be ‘switched on’ via an over-the-air update to become fully functional.
It’s very interesting to note, however, that the hardware used by the Polestar is very different to that used by Nissan’s prototypes (if you’re interested, while the Nissan uses four long-range Lidar sensors, two wide-range Lidar sensors, one radar sensor, and 15 cameras, the Polestar so equipped has just one Lidar sensor, along with 12 ultrasonic sensors, one front radar sensor, and eight cameras). The question is, will both conflicting approaches work? Or will neither? Or will another company have an even better solution?
Guess we’ll find out on the day that the big switchover happens, and when it does, we - along with everyone else - will be watching with interest.
