We’ve previously looked at general-purpose AI, and mentioned that there are some specific, or narrow-focus, AIs out there as well. These are designed to do one specific task, such as drive a car, detect intruders on a camera system, etc.
Today, we want to look at self-driving, or autonomous, cars. This is something people have been talking about for quite a while. Google started its program in 2009, and researchers have been working on the problem for much longer. The History of Google’s Driverless Car: PHOTOS (businessinsider.com)
Now you might ask “Why?” Well, the number one cause of car accidents is the driver. They can be tired, distracted, inebriated, make a mistake, etc. Any one of these might lead to an accident. Machines don’t get tired, drunk/high, distracted, etc. (although, as we’ll see, they can make mistakes).
So if companies and researchers have been working on them for so long, why haven’t they solved the problem yet?
Well, the problem isn’t as easy to solve as one would think. Sure, street signs are fairly standardized. However, street widths, unexpected obstructions, and more get in the way. Then there is the issue of other drivers. While a computer, or a human, might know the rules, they may not know what another person is going to do. How many times have you seen someone driving incorrectly? Often, would be my guess.
Self-driving cars use a variety of sensors, including radar, LiDAR, GPS, and cameras, to determine where the car is, what the environment looks like, and how to drive safely.
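To make that concrete, here is a minimal, illustrative sketch of sensor fusion: combining two noisy position estimates (say, one from GPS and one from LiDAR map-matching) with an inverse-variance weighted average. The numbers are hypothetical, and real vehicles use far more sophisticated methods (Kalman filters, particle filters, etc.), but the core idea of trusting lower-noise sensors more heavily is the same.

```python
def fuse(estimate_a, variance_a, estimate_b, variance_b):
    """Inverse-variance weighted average of two noisy measurements.

    The lower a sensor's variance (i.e., the more we trust it),
    the more weight its estimate receives.
    """
    weight_a = 1.0 / variance_a
    weight_b = 1.0 / variance_b
    fused = (estimate_a * weight_a + estimate_b * weight_b) / (weight_a + weight_b)
    fused_variance = 1.0 / (weight_a + weight_b)
    return fused, fused_variance

# Hypothetical readings: GPS says x = 12.0 m (noisy, variance 4.0);
# LiDAR map-matching says x = 10.5 m (much tighter, variance 0.25).
position, uncertainty = fuse(12.0, 4.0, 10.5, 0.25)
print(f"fused position: {position:.2f} m (variance {uncertainty:.3f})")
# The fused estimate lands close to the lower-noise LiDAR reading.
```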
There are six levels of automation for a self-driving vehicle, Level 0 through Level 5 – Self-Driving Cars Science Projects (sciencebuddies.org). Level 0 is no automation at all, and each step up adds automation until Level 5, where the vehicle relies entirely on the computer system and no human driver.
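As a quick reference, the commonly used SAE J3016 scale can be written as a simple lookup table. The one-line descriptions below are paraphrased summaries, not the official definitions:

```python
# SAE J3016 levels of driving automation (descriptions paraphrased).
SAE_LEVELS = {
    0: "No Automation - the human does all of the driving",
    1: "Driver Assistance - one assist feature, e.g. adaptive cruise control",
    2: "Partial Automation - steering and speed together; the human supervises",
    3: "Conditional Automation - the car drives, but the human must take over on request",
    4: "High Automation - no human needed, but only in limited conditions or areas",
    5: "Full Automation - no human driver needed anywhere",
}

def describe(level: int) -> str:
    return SAE_LEVELS.get(level, "unknown level")

print(describe(2))  # where Tesla's Full Self-Driving currently sits
```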
Tesla is one of the best-known examples, with its Full Self-Driving feature. However, this is not truly an autonomous feature, and it’s not available in all locations, despite the extra cost associated with it. https://www.tomsguide.com/reference/telsa-autopilot In fact, it’s listed as a Level 2 system. So despite the name, it is not a fully self-driving car… at this time. There are some ethical issues with naming and advertising it that way when the company knows it isn’t ready for that. What do you think?
In fact, Tesla’s beta statement lists that the car may “do the wrong thing at the worst time, so you must always keep your hands on the wheel and pay extra attention to the road. Do not become complacent”. It also states: “Use Full Self-Driving in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately, especially around blind corners, crossing intersections, and in narrow driving situations.” At which point you might ask: what’s the point if it doesn’t do the driving for me? Especially at its price.
That is one of the first ethical issues. Can we trust the technology? And should we?
The NHTSA is currently investigating several Tesla crashes, potentially due to the FSD feature. U.S. safety agency reviewing 23 Tesla crashes, three from recent weeks | Reuters
Proponents of self-driving cars say that they can safely increase the number of cars on the road, especially if the cars can communicate with one another. They also say they can reduce stress for drivers and increase overall speeds, since a computer can react faster than a human driver can. If applied to the transportation industry, they could reduce shipping delays, and we wouldn’t have to worry about trucker shortages. And let’s not forget that drunk driving might become completely a thing of the past.
Opponents of self-driving cars will point to a potential loss of jobs for cab drivers, truckers, and other professional drivers. (Imagine watching a Formula 1 or NASCAR race where all the cars are computer driven!) We’ve talked about technology taking jobs before, and those arguments apply here as well. Likewise, they will ask whose fault it will be if there is an accident. The software developer? The camera/radar manufacturer? The automobile manufacturer? What happens if a bug is found? Can you halt everyone with affected cars so they can’t travel?
They may also point to increases in both initial cost and maintenance. Who gets a ticket for speeding, or for missing a road sign that was partially obstructed? And can we trust these cars if there are still accidents?
One person I listened to said that as long as the accident rate for autonomous cars is lower than the accident rate for human drivers, we should move to autonomous cars as quickly as possible. At which point we have to debate: should the government mandate them? Will there be an effort to improve them if they are required? Who is then responsible for their updates and collisions?
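To see why that argument is tempting, here is a back-of-the-envelope sketch. All three numbers below are hypothetical placeholders chosen only for illustration, not real measurements:

```python
# Hypothetical back-of-the-envelope: if autonomous cars crashed less often
# per mile than human drivers, how many crashes might be avoided per year?
# All three numbers below are illustrative placeholders, not real data.
human_crash_rate = 2.0e-6   # crashes per vehicle-mile (hypothetical)
av_crash_rate    = 1.5e-6   # crashes per vehicle-mile (hypothetical)
annual_miles     = 3.0e12   # total vehicle-miles driven per year (hypothetical)

crashes_avoided = (human_crash_rate - av_crash_rate) * annual_miles
print(f"crashes avoided per year: {crashes_avoided:,.0f}")  # 1,500,000
```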
What about when they go the wrong way down a road?
Car accident statistics
- There were over 5 million police-reported car accidents in the U.S. in 2020, according to the National Highway Traffic Safety Administration (NHTSA), a 22% decrease from 2019.
- A total of 38,824 traffic-related deaths resulted from 35,766 fatal accidents in 2020, which is the highest fatality count since 2007.
- 43% of car accidents resulted in injuries, which works out to about four car-accident injuries per minute (see the quick sanity check below this list).
- The state with the most fatal accidents per capita is Mississippi — there were 25 fatal crashes per 100,000 residents in 2020.
- Driving under the influence, speeding and seatbelt nonuse are the top behaviors that lead to car accident deaths.
- Car Accident Statistics: Fatalities, Injuries and Top Risk Factors – ValuePenguin
- 72 Car Accident Statistics 2020/2021: Causes, Injuries & Risk Factors | CompareCamp.com
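As a quick sanity check, the figures in that list are internally consistent. The total-crash count below is an assumed placeholder, since the source only says “over 5 million”:

```python
# Sanity-checking the statistics above using only the figures quoted there.
total_crashes = 5_250_000   # assumed placeholder; the list says "over 5 million"
deaths        = 38_824      # traffic-related deaths in 2020
fatal_crashes = 35_766      # fatal accidents in 2020
injury_share  = 0.43        # 43% of crashes resulted in injuries

minutes_per_year = 365 * 24 * 60
print(f"deaths per fatal crash:    {deaths / fatal_crashes:.2f}")  # ~1.09
print(f"injury crashes per minute: "
      f"{total_crashes * injury_share / minutes_per_year:.1f}")    # ~4.3
```

That lines up with the “four injuries per minute” figure quoted above.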
Looking at those statistics, and the arguments above, what are the ethical dilemmas surrounding autonomous cars? And should we try to push one way or another?