The possibility of a fatal crash involving an autonomous car

Google’s self-driving cars have traveled more than 1.4 million test miles, and none of them had been at fault in an accident until Feb. 14, when a city bus scraped one of the company’s Lexus SUVs. Google said no one was injured but acknowledged that its autonomous vehicle was partially responsible. The incident may leave some Washington drivers wondering whether driverless-vehicle projects should continue if such cars are ever responsible for traffic fatalities.

Artificial intelligence researchers, computer scientists and engineers have asked what would happen if a driverless car were to blame for a fatal crash. The goal for Google and many other companies is to create self-driving cars that are reliable and safe, but as the Feb. 14 fender bender suggests, it is practically inevitable that an autonomous vehicle will eventually be at fault in a fatal collision. If the public responds poorly to such a case, the many self-driving projects around the world could be halted.

The automobile was in a similar position in 1899, when the first recorded traffic death occurred in the United States: a New York City taxi struck a man as he stepped off a trolley. Over the next 30 years, judges and scholars debated whether the automobile was inherently evil. Vehicle-related deaths had become so common by the 1920s that the public held parades in Detroit and New York to call attention to the need for improved safety.

With advances in automobile safety, the 60,000 annual traffic fatalities of 1970 had been nearly cut in half by 2013. One purpose of autonomous cars is to reduce the number of traffic deaths even further. However, the public may not accept a fully autonomous vehicle that is not 100 percent safe, even if it is safer than a human driver. The law will eventually evolve to cover accidents caused by self-driving vehicles, and many attorneys and other observers believe that manufacturers will bear financial responsibility in some cases.