Philip Hammond has just unveiled plans in the Budget to invest millions of pounds in driverless cars and technology, with the aim of having fully driverless cars on our roads by 2021. These cars will be electric, and he has promised £400m to boost the UK's electric car charging infrastructure, £100m for plug-in car grants and £40m for research and development into electric and autonomous cars.
Manufacturers such as Jaguar Land Rover and Audi have already started rolling out their versions of driverless cars. It is claimed that road traffic accidents caused by human error will reduce considerably, or even be eliminated completely. This in turn will boost the UK economy and, it is said, place us at the forefront of a new industrial revolution.
But at what cost? Can technology be so advanced and so secure that it will never fail? What if the software fails? What about bugs getting into the system, or cyber security? What if there is an inherent fault within the vehicle? Can we trust ourselves completely to let a vehicle take complete charge? If there is an accident (and accidents have happened), questions such as whether a human driver or a comparable automated system could have performed better will surely be raised.
Jeremy Clarkson has scoffed at Hammond's plans, announcing that he was almost killed twice whilst testing a driverless car, and that we are "… miles away from it." Clarkson also challenges Audi: "You drive one of your driverless cars over Death Road in Bolivia and I'll buy one…. Sit there with your hands folded and let it drive you up there, then squeeze past a lorry with half a tyre hanging over a 1,000ft drop whilst the car drives itself. Fine. I'll buy into it." Clarkson has a point. What if a child runs out in front of the car and the only alternative is to swerve into another car in the neighbouring lane? How will the car decide whether to hit the child or the other vehicle?
But if Hammond wins, who pays when things go wrong? The government has published the Automated and Electric Vehicles Bill, and the general position is that the insurer pays, but with a right of recovery against software and product manufacturers. Liability can also be excluded or limited if the insured driver has made, or allowed, unauthorised modifications to the vehicle, or has failed to install safety-critical software updates. Insurers will also not be held liable if the vehicle user was negligent in allowing the vehicle to drive itself when it was not safe to do so. It is safe to say that we can expect litigation over the interpretation of "safety-critical" and "negligent".
There will no doubt be a shift from driver liability to product and software liability. Changes to the Road Traffic Act and the Highway Code will need to be made. With three potential defendants for each vehicle, there will surely be a rise in litigation over the most serious of injuries. These are the issues facing insurers, impacting heavily upon their risk management and reserves.
Plexus Law will be hosting a driverless car seminar in the Spring. Watch this space.
If you would like to find out more about this topic, please speak to your contacts at Plexus:
Jo Pizzala, Partner
T: 0345 245 4783 | M: 07957 726 576 | E: firstname.lastname@example.org
Kathryn Oldfield, Partner
T: 0344 245 4462 | M: 07500 781 177 | E: Kathryn.email@example.com
Petty Abrams, Solicitor
T: 0344 245 4462 | M: 07500 781 177 | E: firstname.lastname@example.org
Whilst we take care to ensure that the material in this article is correct, it is made available for information only, and no representation is given as to its quality, accuracy, fitness for purpose, or usefulness. In particular, the article does not give specific legal advice, should not be relied on as doing so, and is not a substitute for specific advice relevant to particular circumstances. Plexus Law accepts no responsibility for any loss which may arise from reliance on information or materials published in this article.