The Tesla Autopilot Ethics That No One Considers (But Should?)


Tesla’s autopilot features are loved by some and distrusted by many. While recent conversations have shifted from autopilot safety toward in-drive entertainment options, much of the world is still cautiously watching for long-term evidence that Tesla autopilot is worthy of trust. Ethical discussions abound, and not all of the questions have been satisfactorily answered.

There are Tesla autopilot ethics that no one considers but should: employment impacts, technology integration in autonomous vehicles, in-drive entertainment, and the impact on pedestrians. Car makers should also consider sensor compatibility that lets cars maneuver safely in close proximity to one another.

Every week seems to bring a new Tesla buzz to the automotive world and the artificial intelligence world simultaneously. Tesla is at the forefront of a world that many have dreamed of where the daily commute is relaxing or a time of productivity rather than lane switching and traffic frustration. Here are some ethical concerns that should be at the forefront of the evolving discussion around vehicle autonomy.

What Are the Ethical Issues of Self-Driving Cars?

The term ethics is derived from the Greek word ethos, meaning character. Ethics describes how things ought to be rather than how they actually are: what ought to be done, believed, or acted upon, apart from what people actually do. Ethics is the high standard, or how something would be if it attained perfection.

When considering the ethics of autopilot, we are considering the ideals of autonomous vehicles and autopilot technology. Because there is no technology outside of human design, this discussion is ultimately about how ethical people should approach the design and use of autopilot technology. This discussion is not just about Tesla engineers. It includes the actions of those who drive these cars.

  • How Tesla ought to be designing autopilot and autonomous functions
  • What ought to be done with the technology for safety
  • How drivers ought to view and use autopilot capabilities
  • How pedestrians and small vehicle motorists ought to view autopilot
  • How cities ought to be viewing vehicles with autopilot
  • How insurance companies ought to view autonomous vehicles
  • How commercial companies ought to use autopilot and autonomous fleet vehicles
  • Technology integration considerations for autonomous vehicles

Due to the evolving nature of Tesla autopilot, we will consider the ethics in light of what is being developed and how Tesla technology is advancing. Much of what ought to be done is being done, and there are more advances on the way.

Also, there are ethical considerations that rest solely on the drivers who use this technology, and these cannot be ignored. Autopilot capabilities do not remove a driver’s personal responsibility, and that responsibility carries equal weight with Tesla’s responsibility to develop safe technology.

How Tesla Ought to Be Designing Autopilot Functions

First, it should be pointed out that Tesla’s current autopilot functionality is not the same as completely autonomous driving. Autopilot allows the car to steer, accelerate, and brake in its lane without driver interaction. This allows drivers to relax and enjoy the ride without the constant stress and tension of driving, even in complex traffic and on tight roads. However, fully autonomous driving may be in the near future.

Tesla has additional navigation capability that enables the car to make lane changes as needed to reach a destination without driver input. This also takes GPS information into its calculations and suggests lane changes based on traffic movement. The full functionality of this system aims to maximize trip efficiency and is the basis for the full autonomy that is being rolled out in testing.

  • Tesla has installed eight cameras on the vehicle. Some are wide-angle cameras that capture a wider field of view at a lesser distance. Others are narrow-angle cameras that capture less of the field but see a long distance.
  • Tesla cameras capture a complete 360-degree view around the vehicle.
  • Cameras see 100 meters behind the vehicle and 250 meters in front of the vehicle.
  • Ultrasonic sensors are used as a secondary measure to add additional “sight” coverage for items that might be missed by vision camera detection.
  • The onboard computer processes all input simultaneously from every input sensor and makes decisions based on all of this information. As the car drives, additional input and experience make the system “smarter.”

Elon Musk has recently stated that new models will use a camera-only system and will discontinue the additional sensor technology. While it remains to be seen how these more streamlined systems perform in testing, reducing the number of sensor safety systems on the vehicle may be unwise, and here is why.

System Redundancy

These 360-degree cameras combined with ultrasonic sensors and adaptable computers create a sensory driving capability that is far advanced from human driving capabilities. No human can see in all directions simultaneously and make decisions based on all input at once. This AI driving system is, in theory, much safer than human-only driving.

However, the system also relies on sensor redundancy. Sensor redundancy can be seen as a “crutch,” or a costly system that does not deliver enough additional information to warrant the additional manufacturing cost. Yet system redundancy is incorporated into other automated systems, such as cobots, as a safety measure for the humans who work alongside them.

  • Sensory redundancy increases safety for the humans inside. Autopilot systems ought to have backup cameras and ultrasonic sensors in place to counteract camera failure.
  • Sensor redundancy provides additional reassurance for cities that are skeptical about the wisdom of allowing autonomous fleets on the roads.
  • Minimal CPU redundancy should be considered in the case of a computer glitch on the road. This type of system should be able to steer the car to safety in the case of a total failure.
  • System redundancy should be able to detect catastrophic events caused by primary system lapses and immediately stop the car and/or steer it to safety and stop.
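As a toy sketch of the failover principle in the list above (all class names and thresholds here are invented for illustration, not Tesla’s actual software), a redundant perception layer can act on whichever sensor still reports, and command a safe stop only when every sensor has failed:

```python
# Illustrative sketch of sensor redundancy: two independent sensors feed
# one decision, and the loss of either one does not blind the vehicle.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SensorReading:
    distance_m: Optional[float]  # None signals a failed or missing sensor


def nearest_obstacle(camera: SensorReading, ultrasonic: SensorReading) -> Optional[float]:
    """Fuse two independent sensors, surviving the failure of either one."""
    readings = [r.distance_m for r in (camera, ultrasonic) if r.distance_m is not None]
    if not readings:
        return None  # total sensor loss: caller must trigger a safe stop
    return min(readings)  # act on the most conservative (closest) estimate


def decide(camera: SensorReading, ultrasonic: SensorReading) -> str:
    obstacle = nearest_obstacle(camera, ultrasonic)
    if obstacle is None:
        return "pull over and stop"  # redundancy exhausted
    if obstacle < 5.0:               # hypothetical braking threshold
        return "brake"
    return "continue"
```

The design point is the fallback path: with a camera-only system, the first branch (`None` from every sensor) is reached by a single failure rather than two.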

System redundancy is costly and increases manufacturing time, which is a major hurdle that Tesla is still working to overcome. However, sensor redundancy also increases safety and will help Tesla answer naysayers who point to catastrophic failures as reasons why autopilot vehicles should not be on the road. Decreasing sensor input adds fuel to that fire.

Technology That Should Be Considered for Safety

For the safety of those who do and do not drive cars with autopilot, additional sensors should be standard on new vehicles from other car makers. We see many new sensors and new autopilot technology coming on the market with every new year model. These new capabilities include auto parking, passing lane sensors, and blind-spot sensors. Cars are now offering lane assist to help drivers maintain a safety cushion around the vehicle.

Standardization is needed for all new vehicles, and retrofit sensor packages need to be made available for older vehicles. The new sensor advances are wonderful and increase the safety of everyone on the road, but many are no more than a warning light that can be ignored by drivers.

Cars should be able to communicate their intentions to each other to aid collision avoidance. In the case of autopilot vehicles of all kinds, basic information could be made available to surrounding cars. This is the kind of information that humans cannot intuitively know about each other. Drivers frequently fail to signal their intentions to surrounding drivers, causing collisions and near-misses.

  • Acceleration speed and intention
  • Deceleration speed and intention
  • Lane position
  • Turning, lane-changing, and merging intention

These particular bits of information do not violate privacy and would go a long way toward helping vehicles avoid collisions. This technology would greatly advance the capability of cars to protect the humans inside. Cars with autopilot could use this information to adjust car speed and lane positions to safely accommodate the needs of surrounding vehicles so that everyone gets to their destinations safely and efficiently.
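The four intent fields above can be sketched as a minimal broadcast message. The `IntentBroadcast` structure and its field names are hypothetical, invented for illustration rather than modeled on any real vehicle-to-vehicle standard; the point is that the message carries motion intent only, with no identifying information:

```python
# Hypothetical sketch of a minimal, privacy-preserving intent message:
# speed, acceleration, lane position, and the next intended maneuver.
from dataclasses import dataclass, asdict
from enum import Enum


class Maneuver(Enum):
    KEEP_LANE = "keep_lane"
    CHANGE_LEFT = "change_left"
    CHANGE_RIGHT = "change_right"
    TURN = "turn"
    MERGE = "merge"


@dataclass
class IntentBroadcast:
    speed_mps: float    # current speed, meters per second
    accel_mps2: float   # positive = accelerating, negative = braking
    lane_index: int     # 0 = rightmost lane
    maneuver: Maneuver  # what the vehicle intends to do next


def encode(msg: IntentBroadcast) -> dict:
    """Serialize for broadcast; note there is no VIN, plate, or destination."""
    d = asdict(msg)
    d["maneuver"] = msg.maneuver.value  # replace the Enum with its string
    return d
```

A surrounding car receiving `{"maneuver": "change_left", "accel_mps2": -1.5, ...}` can widen its following distance before the lane change begins, which is exactly the anticipation that human drivers cannot provide.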

How Drivers Ought to Use Autopilot and Autonomous Vehicles

The main problem with autopilot and autonomous technology is that humans tend to view it as another human. This happens with all robotics and artificial intelligence technology. Humans seek to humanize technology, and this is a mistake.

Artificial intelligence is a human assist, not a human substitute. Automation is meant to make human lives easier by doing things that humans do inefficiently, enabling humans to do more meaningful tasks.

Artificial intelligence does not have human intuition. Humans try to interact with robotics by personifying it and seeking to interact with it as they would another human. This includes attributing human emotions, intuition, and irrationality to artificial intelligence.

Cars with autopilot and autonomous driving technology are not capable of making decisions based on human intuition, error, or irrational behavior. They can use sensors to make trained decisions based on what is located around the vehicle and the movement of other vehicles in the area. Irrational driver behavior and a lack of system redundancy should give drivers reasons to stay attentive while the car is on autopilot.

  • Drivers must stay attentive to the conditions on the road. While the car may be very adept at driving in many different scenarios, the human should be aware of conditions that may cause an error in the car’s computational driving.
  • Drivers should not assume that the car will always make the right decision. Sleeping, playing video games, or leaving no one in the driver’s seat to take over is a mistake that has cost people their lives.
  • Tesla currently offers in-car video games that are available for play when the car is parked. Musk has made it clear that he hopes to be a leader of in-drive gaming when cars are fully autonomous.
  • These goals are out of line, considering that full autonomy has not been reliably achieved and is a long way from being acceptable in most cities around the world.
  • Drivers ought to be more concerned about protecting the lives of themselves and the passengers in their car than they are about zoning out on entertainment during their commute.

The technology is advancing in the right direction, thanks to Tesla’s innovative approach to autonomous vehicles. However, drivers are guilty of making unfounded assumptions as to the intuitiveness of artificial intelligence and the reliability of systems. Because of this unfounded bias, drivers take unnecessary risks by relying completely on the vehicles, and some have paid with their lives.

Ethics of Autopilot Entertainment

There will be a time in the future when the technology is fully developed and humans can leave the driving to the machines, just like we leave the laundry to machines. That time is not yet here, and it is too early to be worrying about how to waste time inside the car while it is driving.

Now is the time for drivers to be attentive and willing to use the technology in a way that advances the push toward autonomy by accentuating its safety. Using the technology responsibly helps dispel the widespread fears that delay its adoption.

In-drive entertainment will be a fun “problem” to solve once vehicle autonomy is fully and safely implemented with a proven track record of delivering passengers without incidents requiring driver intervention.

How Autopilot Concerns Pedestrians and Small Vehicles

Pedestrians and drivers who use small vehicles such as mopeds, bicycles, and motorcycles should be especially interested in autopilot and autonomous vehicle capabilities. The 360-degree cameras offer more safety for less visible vehicles and pedestrians than human drivers can.

  • Autonomous vehicles are programmed to obey crosswalks and stop for pedestrians. This makes pedestrian travel much safer than it currently is.
  • Bicyclists are often harassed or even pushed out of bicycle lanes by vehicles that disobey the laws and veer into bike lanes or use them as turn lanes. Autonomous vehicles are programmed to make turns safely, which is a big safety benefit for those who commute on bicycles.
  • Motorcycles are often unseen on the roads because they share a lane with cars while taking up a fraction of the space. Drivers often do not see motorcycles when changing lanes or overtaking other cars. A 360-degree camera system keeps motorcycles in view at all times, and the car will react accordingly.

The safety of pedestrians, bicyclists, and motorcyclists is enhanced greatly by the use of autopilot and autonomous vehicles. This is a very strong argument for autonomous vehicle adoption by metro areas where traffic conditions are congested and pedestrian and bicycle traffic are difficult for cars to navigate.

How Cities Ought to View Autopilot and Autonomous Driving

Cities are naturally concerned about the safety of citizens on the roads. Every city has particular areas of concern, be it snow and ice, traffic congestion, narrow bridges, high pedestrian traffic, or trolley and light rail systems. Autopilot capability is well proven on rural, single-lane roads; metro areas, however, present complicated driving conditions.

Rather than offering blanket negative statements about Tesla and autopiloting cars, cities can offer help with development by taking a hands-on approach to vehicle testing and feedback. Cities should seek to join Tesla development teams to test vehicles in the most complicated urban areas.

  • Cities should look to offer feedback to Tesla on new technology. This will help to ensure that the autopilot cars that will inevitably be driving their roadways will be as safe as possible.
  • Cities should offer to be testing grounds for new Tesla vehicles, with Department of Transportation oversight and official roadway testers. This would serve drivers, metro areas, and car companies alike, helping to ensure proper technology development and driver safety.
  • Cities should seek to gather feedback from citizens who are using this technology. Feedback could include areas where the cars do well, and where they do poorly. As the future moves more toward autonomy, cities can consider planning schemes that account for these types of vehicles.

Cities and metro areas could be doing a lot more to enable the advancement of autopilot and autonomous vehicle development. Many areas believe that ignoring these technological developments will make them go away, but it will not. Rather than ignoring these developments, cities can be at the forefront of helping to develop the technology for the safety of their citizens.

How Insurance Companies Ought to View Autopilot Vehicles

Insurance companies use actuarial math to calculate the risk posed by vehicles on the road. Actuarial calculations take into consideration all factors that work into how likely consumers are to make a claim, and how much that claim is likely to cost. Here are a few of the points that actuaries take into consideration when determining insurance premium costs.

  • State. Some states have more vehicle insurance claims than others. States with higher claims rates cost more to insure vehicles whether drivers have individually made claims or not.
  • City. Some cities have higher claims rates than others, which can shift your cost within a state considerably. Even where the overall state rate is high, someone living in a small, quiet town will still pay less than someone in a bustling metro in the same state because of the likelihood of claim filings.
  • Vehicle make and model. Some vehicles cost more to repair than others. Some vehicles are known to be much safer due to safety integrations and are less likely to be involved in a collision. Both of these factors are used to determine the cost to insure individual vehicles.
  • Personal insurance history. Each person on an insurance policy has a different cost based on their driving history. Those with tickets and prior claims have an increased likelihood of making future claims, so they pay more insurance premiums.
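A toy calculation shows how rating factors like the four above combine into a premium. The base rate and every multiplier below are invented numbers for illustration only; real actuarial tables are far more detailed:

```python
# Toy premium model: independent risk factors multiply a base rate.
# All figures are invented for illustration, not real insurance data.

BASE_ANNUAL_PREMIUM = 900.0  # hypothetical baseline, in dollars

STATE_FACTOR = {"low_claims": 0.9, "high_claims": 1.3}
CITY_FACTOR = {"small_town": 0.85, "metro": 1.25}
VEHICLE_FACTOR = {"standard": 1.0, "costly_repairs": 1.2, "extra_safety": 0.9}


def annual_premium(state: str, city: str, vehicle: str, prior_claims: int) -> float:
    """Multiply the independent risk factors described in the list above."""
    history_factor = 1.0 + 0.15 * prior_claims  # each prior claim adds 15%
    return round(
        BASE_ANNUAL_PREMIUM
        * STATE_FACTOR[state]
        * CITY_FACTOR[city]
        * VEHICLE_FACTOR[vehicle]
        * history_factor,
        2,
    )
```

Under this sketch, a clean-history driver in a quiet town with a safety-equipped vehicle pays well under half of what a metro driver with prior claims and a costly-to-repair vehicle pays, which is the shape of the argument for pricing sensor-assisted cars lower.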

Insurance companies are behind the times when pricing vehicles with autopilot. While there have been well-publicized accidents involving Tesla vehicles, they are safer overall because sensor driving supplements the human driver. Tesla cars are still vulnerable to mistakes by other drivers, but major claims are much fewer overall.

This is why Tesla offers an insurance plan that is backed by a national company. Due to Tesla’s deep integration with this insurance plan, it is as near to Tesla-owned insurance as a Tesla owner can get. This insurance takes into consideration the safety benefits of self-driving cars and delivers much lower insurance premiums to Tesla owners and drivers.

Tesla is also able to use computer technology to track the driving habits of individual owners and give lower rates to those who use the autopilot properly and demonstrate a history of safe driving. The number of autonomous vehicles in an area could also be used as an insurance rate determiner due to the safety in numbers. This is not an assessment that can be fully implemented by third-party insurers.

The former head of Tesla Insurance is now making a viable and logical argument that OEM manufacturers could and should offer their own insurance programs to those who purchase their vehicles for the same reasons that Tesla can offer lower insurance rates.

How Commercial Companies Ought to View Autonomous Driving

Many companies rely on drivers to provide goods and services of all kinds. From interstate residential movers to home improvement contractors, to pizza delivery, autonomous vehicles could offer much higher safety for all concerned.

Long-haul semi-truck drivers are known to drift and swerve during long trips. There have been many widely publicized accidents in which semi-trucks have killed entire families on the road, accidents that could have been prevented by eliminating driver negligence and inattentiveness. Ethical concerns over driver employment are balanced by ethical concerns about the safety of families on the road.

  • Autonomous vehicles do not eliminate the need for human drivers. Technology is a boost for human capability. Autopilot would allow long-haul truckers to make drives more safely and eliminate the extreme fatigue that comes from driving hundreds of miles at a time.
  • Autonomous vehicles do not eliminate the need for human workers. Whether a vehicle is hauling a load or delivering a package, humans are still needed as part of the supply chain. Autopilot can take some of the fatiguing stress out of the job, allowing workers to be more productive in its other aspects.
  • Corporations that adopt autonomous vehicles are responsible for ensuring that drivers have work to do. Ethical considerations include employment and not rushing to adopt complete automation.
  • Companies that try to use complete automation and eliminate human involvement will be held fully culpable for vehicle failures that could have been avoided by a human employee present to control the technology and make wise decisions in the case of a technological failure.
  • All artificial intelligence, including autonomous vehicles, is a human assist. They are being developed to help humans live to their fullest potential by eliminating things such as road fatigue and failure to notice roadside objects.
  • Autonomous fleets could cut insurance rates for corporations, deliver workers safely to work and repair sites, and allow delivery personnel to deliver more packages per day due to reduced fatigue from navigation and traffic delays.

The technology-assist possibilities in this field of work are almost endless. Countless industry vehicles are on the road every day, doing jobs in a vast array of fields. Most of these fields could become more productive through the elimination of manual navigation, saving people’s energy for their more important work rather than spending it in stress on the road.

Ethical Technology Integration Considerations

When designers are planning vehicles with autopilot and full autonomy, some fun possibilities come to mind even before safety considerations. These possibilities always center around other things that could be done instead of driving and navigating. These extra possibilities include both entertainment and productivity. Both of these have severe ethical implications that need to be mitigated as soon as possible.

  • Cars with autopilot should not ever integrate visual entertainment systems that can be used while driving. This does not stop a driver from using a handheld video game or any other entertainment device while driving. However, it does send the message that the car is for driving, not for playing video games.
  • Cars with autopilot should not be able to function without a licensed driver in the driver’s seat. The car should be automatically disabled and unable to start without a licensed driver.
  • If the licensed driver leaves the driver’s seat during a trip or reclines the seat for sleeping, the car should automatically steer to the side of the road and stop. This could be integrated with an in-car emergency alert system.
  • Sleeping detection systems could easily be integrated into the camera system of the car. If the car determines that there is not an alert licensed driver in the driver’s seat, it should steer to the side of the road, stop, and alert the driver or emergency personnel.
  • “Hacked” cars that allow drivers to play video games on the navigation console while driving should either be automatically disabled or Tesla should be notified immediately.
  • The car should be able to detect the presence of a non-licensed driver in the driver’s seat and disable itself with no override. This will mitigate the risk of theft as well as children driving autonomous vehicles. Children as young as five have been pulled over on the freeway after taking a parent’s keys.
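The interlock behavior in the list above reduces to a small decision function. The `CabinState` fields and the returned action strings below are hypothetical; a production system would derive these inputs from cabin cameras, seat sensors, and driver profiles:

```python
# Sketch of the driver-presence interlock described above.
# Sensor inputs are modeled as plain booleans for illustration.
from dataclasses import dataclass


@dataclass
class CabinState:
    seat_occupied: bool    # someone is in the driver's seat
    driver_licensed: bool  # occupant matched to a licensed-driver profile
    driver_alert: bool     # eyes open, seat upright, hands available


def interlock(state: CabinState, in_motion: bool) -> str:
    """Return the action the vehicle ought to take for this cabin state."""
    if not (state.seat_occupied and state.driver_licensed):
        # No valid driver: refuse to start, or pull over if already moving
        return "pull over and stop" if in_motion else "remain disabled"
    if in_motion and not state.driver_alert:
        # Sleeping or reclined driver: stop safely and raise an alert
        return "pull over, stop, and alert"
    return "normal operation"
```

Note that the no-valid-driver branch is checked first and has no override path, which encodes the anti-theft and child-safety requirement directly in the control flow.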

Conclusion

Tesla’s autopilot technology is an incredible feat of engineering. The future seems closer than ever with new advances in vehicle technology, artificial intelligence, and machine autonomy. However, none of these advances are made apart from the humans who dream, design, make, test, and use this technology.

Ultimately, a discussion about the ethics of Tesla’s autopilot, and that of other car makers who will follow, is a discussion about the ethics of the people who design and use the technology. Makers should consider the safety of their future passengers above everything else. Buyers should also consider the safety of their passengers above everything else. Entertainment, luxurious perks, and cost concerns are secondary.

Greg

Hi, I'm Greg. My daily driver is a Tesla Model 3 Performance. I've learned a ton about Teslas from hands-on experience and this is the site where I share everything I've learned.
