Alissa Knight’s new book is a how-to guide for hacking autonomous cars. Why? Because everyone should know where the weak spots are… including insurers
Successfully navigating the UK’s main north/south highway any day of the week is stressful enough. So it was probably just as well no one knew they were sharing their lane with two driverless cars at the start of this year.
Part of the government-backed HumanDrive project to accelerate development of autonomous vehicles, the 230-mile road trip went off uneventfully; but the implications are profound.
The longest test drive ever undertaken in the UK, it challenged the cars – both Nissan electric vehicles (EVs) – to negotiate unmarked, high-speed country lanes, complex junctions, roundabouts, and motorways; the onboard system making its own judgments on speed, positioning, changing lanes, merging, stopping and starting. The only time the passengers – both engineers on the project – intervened was when they pulled in for a quick coffee and a recharge.
If you’re rubbish at parallel parking, you collect traffic tickets like other people collect stamps and you’re the kind of driver who can’t wait to flick on the speed limiter, then being able to delegate your daily commute to a supercomputer on wheels probably sounds very appealing – although at around £170k, the cost of such vehicles is likely to be beyond the reach of mere mortals for some time.
If, on the other hand, the moment when Charlize Theron steered hundreds of hacked cars remotely down 7th Avenue in 2017’s The Fate Of The Furious (as in Fast And…) left you with a queasy feeling about our future transport plans, you’re not alone.
While governments and insurers have been quick to point out the environmental, cost-saving and even life-saving benefits of autonomous technology – around 28,000 people a year are killed or seriously injured on Britain’s roads and 95 per cent of those incidents are caused by human error, according to the Royal Society for the Prevention of Accidents – there are others who believe we’re jumping a glaring red light.
Alissa Knight is one of them. A self-described ‘recovering hacker’, she’s used her inside knowledge of the vulnerability of connected devices to build a career advising challenger brands and market leaders on cybersecurity. She points out that most cars released after 2011 share the same communication network as your mobile phone or tablet – a standard called the Global System for Mobile Communications, or GSM.
“If you think about it, cars today are pretty much like cell phones on wheels, so original equipment manufacturers can communicate with them and push what’s called OTA, or over-the-air updates – which opens up a potential attack vector for these cars,” says Knight.
“If you can communicate with a car over GSM, you can theoretically do it from anywhere. Being able to take remote control of a vehicle over GSM, or in close proximity over Wi-Fi, is becoming easier and easier to do. We’re making it possible to be able to connect with them, and with that connectivity comes vulnerability.”
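Knight’s point about OTA updates can be made concrete. A minimal sketch, assuming a toy update format: if the vehicle verifies an authentication tag before installing firmware, a tampered image pushed over the air is rejected. The key handling and function names here are invented for illustration; real OEM pipelines use public-key signatures and secure boot rather than a shared secret.

```python
# Illustrative sketch only: why unsigned over-the-air (OTA) updates are an
# attack vector, and how verification closes it. SECRET_KEY, sign_update and
# install_update are hypothetical names, not any OEM's real API.
import hmac
import hashlib

SECRET_KEY = b"oem-signing-key"  # stand-in for the OEM's signing secret

def sign_update(firmware: bytes) -> bytes:
    """OEM side: attach an authentication tag to the firmware image."""
    return hmac.new(SECRET_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: install only if the tag verifies."""
    expected = hmac.new(SECRET_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = b"ecu-firmware-v2.1"
tag = sign_update(genuine)
print(install_update(genuine, tag))              # True: genuine update accepted
print(install_update(b"attacker-firmware", tag)) # False: tampered image rejected
```

Without the verification step, anything that can reach the car over GSM can push it code — which is exactly the attack vector Knight describes.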
A thief, in theory, could already hack your security system to steal your car; if you know what you’re looking for, the hardware to mount a replay attack can be picked up for $20 on eBay. But combine that capability with a vehicle that can think for itself and you’ve potentially got an army of robotic devices that can be mobilised remotely to cause havoc and create panic on our streets. Smart city transportation systems built on the Internet of Things and plugged into V2X, or vehicle-to-everything, communication systems could similarly be hacked, creating the spectre of ‘spam jams’ and potential collisions. In every case, the question of where liability rests – with the vehicle owner, manufacturer, civic authority, etc – will come to haunt whichever insurer picks up the claim.
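The replay attack mentioned above works because a fixed-code key fob transmits the same unlock code every time, so a recorded transmission can simply be played back later. A toy model, with invented class names, shows why rolling codes defeat it: each code is valid only once.

```python
# Toy rolling-code model: the car tracks the last counter it accepted, so a
# sniffed-and-replayed transmission is stale. RollingFob and Car are
# hypothetical names for illustration, not a real keyless-entry protocol.
import hashlib

class RollingFob:
    """Each button press derives a fresh code from a shared secret + counter."""
    def __init__(self, secret: str):
        self.secret = secret
        self.counter = 0

    def press(self) -> tuple:
        self.counter += 1
        code = hashlib.sha256(f"{self.secret}:{self.counter}".encode()).hexdigest()
        return (self.counter, code)

class Car:
    """Accepts a code only if its counter is newer than the last one seen."""
    def __init__(self, secret: str):
        self.secret = secret
        self.last = 0

    def unlock(self, msg: tuple) -> bool:
        counter, code = msg
        expected = hashlib.sha256(f"{self.secret}:{counter}".encode()).hexdigest()
        if counter > self.last and code == expected:
            self.last = counter
            return True
        return False

fob = RollingFob("shared-secret")
car = Car("shared-secret")
captured = fob.press()       # attacker records this transmission in transit
print(car.unlock(captured))  # True: first, legitimate use
print(car.unlock(captured))  # False: the replayed code is rejected
```

A fixed-code fob, by contrast, would return True both times — which is what the $20 eBay hardware exploits.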
Knight’s argument – which she brings home forcibly in her new book Hacking Connected Cars: Tactics, Techniques And Procedures – is that tomorrow’s passengers in driverless cars (and the pedestrians and other road users who share their environment) are being asked to trust – with their lives – that manufacturers will have tested the 1.2 billion lines of code in every autonomous EV sufficiently to know there are no potential security flaws. And, as a professional penetration tester herself – who somewhat shockingly revealed at Money20/20 USA that she had (legitimately) downloaded 30 leading financial services apps and managed to reverse engineer them – she says that’s just not happening.
“The problem is, unlike with home Wi-Fi, you can’t just go to Best Buy and pick up a wireless firewall to protect your home network. It’s one thing if I were to compromise your web server and deface your website; it’s another if you and your family are in your car and I drive it into a wall. Unfortunately, there’s really nothing that the average consumer can do, except ask different questions when shopping for a car, like ‘has this car been penetration tested? Does this car have ECU firewalls in it? Is the infotainment system connected to the CAN bus so if my car were to get hacked, they can’t jump to the steering column?’. It’s those kinds of questions we need to ask as a society and hold manufacturers’ feet to the fire, and say ‘hey, are you thinking about these things?’.
“But the responsibility isn’t on the consumer to address this,” continues Knight, “it’s on the manufacturer and Tier 1 suppliers to make sure that when they put out a request for a proposal, it contains language like ‘if we’re going to award you this contract, you need to produce a penetration test report. You need to prove to us that you’ve done a vulnerability analysis and a risk assessment on this equipment to make sure all the weaknesses have been identified, and those that are unacceptable to the business have been remedied’. It’s really making sure, on the OEM side, that they’re doing what they should be doing. That they’re eating their own dog food.”
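The ‘ECU firewall’ Knight tells buyers to ask about amounts to a gateway between the car’s networks that refuses to forward frames from the infotainment bus onto the safety-critical bus. A minimal sketch, with invented CAN IDs and bus names (production gateways enforce this in dedicated hardware and firmware, not application code):

```python
# Sketch of CAN-bus segmentation: a central gateway drops any frame that
# originates on the infotainment bus and targets a steering-control ID.
# STEERING_IDS, bus names and gateway_forward are hypothetical examples.
from dataclasses import dataclass

STEERING_IDS = {0x025, 0x0E4}   # invented IDs for steering-control frames

@dataclass
class CanFrame:
    arbitration_id: int
    data: bytes
    source_bus: str             # e.g. "infotainment" or "powertrain"

def gateway_forward(frame: CanFrame) -> bool:
    """Allow a frame onto the powertrain bus only if policy permits it."""
    if frame.source_bus == "infotainment" and frame.arbitration_id in STEERING_IDS:
        return False            # a hacked head unit cannot reach the steering ECU
    return True

print(gateway_forward(CanFrame(0x025, b"\x10\x00", "infotainment")))  # False
print(gateway_forward(CanFrame(0x3D1, b"\x01", "infotainment")))      # True
```

On a flat, ungated bus – the situation Knight warns about – every frame reaches every ECU, so compromising the infotainment system is enough to ‘jump to the steering column’.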
This is no longer Hollywood hype. Between 2017 and 2018, cyberattacks involving IoT devices, from hacked doorbell cameras to rogue nanny cams, rose to more than 32 million, according to the 2020 SonicWall Cyberthreat Report, which described it as ‘an alarming year for the security and privacy of IoT devices’, adding that trending data suggests more IoT-based attacks are on the horizon.
Earlier this year, Mary Joyce, Global Vice President & General Manager of Mobility & Automotive at safety testing specialist UL – a division newly created in response to the astonishing speed with which manufacturers including General Motors, Daimler AG, Ford Motor Company, Volkswagen Group, BMW AG, the Renault-Nissan-Mitsubishi alliance, the Volvo-Autoliv-Ericsson-Zenuity alliance, Groupe PSA, AB Volvo, Toyota Motor Corporation and Tesla Inc, plus major auto suppliers, technology providers and autonomous vehicle-as-a-service providers such as Uber, are entering the market – warned that regulation in the States and globally was being left behind.
“The danger is that without federal regulations and a universal standard in determining safety, there could be great damage if these cars get on the road without being deemed safe. And, of course, part of safety is security. Can you imagine a hacker being able to take control of an autonomous car? It’s very important to not only have a company indicate that its autonomous vehicle technology is safe, but to have third parties, government agencies or agreed-upon standards assure that it truly is safe, followed by, at the very least, minimum guidelines,” said Joyce.
AXA, one of the world’s biggest insurers, has been involved with the development of autonomous vehicles from the start, making the case for the technology as an agent of good, citing its potential to reduce accidents and increase mobility for those denied it by conventional cars. Both of which are true. But it also acknowledges the complexity around adjusting for risk, the likely higher cost of claims linked to such sophisticated technology and the challenge of fashioning cover for the car-sharing economy that autonomous vehicles will encourage. It would like to see something similar to the clear structure of liability laid down in the UK government’s 2018 Automated and Electric Vehicles (AEV) Act adopted across Europe.
AXA sees the insurance industry facing fewer claims under new policies that cover driverless cars in both fully automated and driver-controlled modes – and, of those it does face, a proportion being passed on to manufacturers. Forbes, meanwhile, has forecast that third-party damage claims could largely disappear, and that premiums could fall by as much as 75 per cent as a result.
“I think the insurance industry is going to have to adapt,” adds Knight. “Insurers are currently providing cyber insurance and hacking insurance for businesses on their infrastructure side; they’re going to need to start thinking about how their products can be applied to connected cars and autonomous vehicles. Because, as more and more connectivity happens with connected passenger vehicles, hackers are going to shift their attention. It’s one thing for ransomware and malware on a network to affect your infrastructure; what if ransomware and malware were to target vehicles, and hackers were to go after individuals or businesses, saying ‘we’ve got remote control of your vehicle. We want 100,000 Bitcoin for you to get it back’? Hackers are going to evolve to monetise the data that comes from vehicular infrastructure and from cars themselves – including personally identifiable information – data that hackers can now go after because vehicles aren’t adequately protected.”
“You can’t protect yourself against an enemy that you know nothing about. That’s why I want to uncover the vulnerabilities in these connected vehicles and products; so that we can make them more secure, so people are educated and manufacturers start doing something about it.”
The public might not know what they don’t know, but their gut instinct is not to trust driverless cars. According to AXA research, three out of four Brits do not believe driverless cars are safer than conventional vehicles, while a poll by Reuters/Ipsos last year revealed that half of US adults think they are more dangerous, not less, and nearly two-thirds would not buy a fully autonomous vehicle. So, would Knight put herself and her family in one? “If I did the penetration testing of that infotainment system and helped the OEM or manufacturer to secure it, then yes.”
Not quite sure where that leaves the rest of us…