
Securing the Autonomous Revolution

posted Jan 25, 2017, 4:54 AM by Michael Lenart [updated Jan 25, 2017, 5:25 AM]
By Paul L. Jordan

A self-driving car travelling through a two-lane tunnel has lost control of its brakes due to a mechanical failure. In the lane ahead, a road construction crew is making repairs. The software that drives the car faces a choice: continue straight ahead, almost certainly killing the construction workers, or change lanes, causing a head-on collision that would almost certainly kill the drivers of both vehicles involved. This is an adaptation of the classic ethical thought experiment known as the trolley problem [1]. The scenario raises several ethical concerns about autonomous vehicles, but how does cybersecurity affect this landscape? The answer? Nobody cares.


As our military cyber community is acutely aware, neither industry nor society will slow down for security. In this case, industry is showing us that it is incapable of slowing down even for tough ethical dilemmas or drastic economic consequences, though for arguably good reasons. According to the CDC, approximately 35 thousand people died in motor vehicle accidents in the United States in 2014.1 Further, according to a 2015 report from the National Highway Traffic Safety Administration, 94% of automobile accidents were caused by human error.2 In a May 2015 report, Google announced that it had logged over 1.8 million miles driven by its autonomous cars with only two minor incidents, both of which were caused by other vehicles with human drivers.3 Taken together, these figures suggest that if all automobiles were automated overnight, roughly 33 thousand lives (94% of about 35 thousand) could be saved each year.



Unfortunately, this progress comes with potentially massive economic impacts. According to a 2016 report by the Bureau of Labor Statistics, transportation makes up roughly 5% of our labor force.4 Furthermore, the second- and third-order effects are not insignificant. According to the American Trucking Associations, there are approximately 3.5 million truck drivers employed in the United States.5 Automating transportation will affect not only those jobs but also the hotels, restaurants, and convenience centers that these truckers use every day. Should these impacts slow the push toward automated vehicles? They do not seem to be doing so.


There are also serious cybersecurity concerns about automating transportation. In 2015, researchers were able to take control of a Jeep Cherokee over the internet.6 More recently, a group of Chinese researchers was able to remotely control the brakes of a Tesla Model S.7 Hacks like these could have life-threatening consequences if not handled properly. But should these consequences slow the progression of a technology that stands to save tens of thousands of lives each year? Fortunately, the sentiment has begun to change in recent years: security is being discussed on major news outlets, and it is being considered during system design rather than after deployment. Still, this is just a first step in the right direction.


Autonomous travel is no longer primarily a technical problem. Companies like Google and Tesla are racing toward an autonomous consumer vehicle, and a few commercial vehicles already exist. In recent years, it has become clear that computers will make better drivers than humans, and an enormous amount of money stands to be made by the company that gets there first. There are therefore both ethical and financial imperatives to automate transportation, and in the rush, many legitimate concerns are being ignored. But the cybersecurity community cannot allow this to prevent us from working toward a secure autonomous vehicle. We all know the narrative: a shiny new toy is introduced that makes everyone's life easier; that shiny toy comes with security concerns; our recommendation is to hold off on implementing the new toy until we can secure it; our concerns are heard, but ignored; we throw our hands in the air and give up. That cannot be allowed to happen this time, especially in the realm of military hardware.


Now more than ever, we need to stay engaged in this effort. We must develop and innovate ways of securing this nascent autonomous revolution. Advances in automating military weapon systems are being pursued and made every day, and our role in securing those systems is more important than ever. Our military is already becoming increasingly dependent upon remotely piloted aircraft. Today, these systems are flown by human operators and have limited autonomous capability, yet they are already targets of cyber-attack. From an operational perspective, these systems would ideally operate with complete autonomy. Unfortunately, that change would make them an even more valuable target for cyber-attack, and without the proper protection, these systems could be used against us.


But does this mean that complete autonomy should not be pursued? Again, the answer is that it does not matter. This technology will continue to be pursued because it promises savings and efficiency at a time when our senior leaders are looking for any such opportunity.


Some critics of autonomy argue that certain tasks are simply too complex for computers to handle. They argue, for example, that a computer could never identify a target and deploy ordnance to neutralize it, because the task is too complex. (Before the industrial revolution, factory workers probably shared the same sentiment about many of the tasks they performed.) But at the cutting edge, artificial neural networks are performing far better than expected.8 For instance, accurately identifying objects in images is rapidly becoming a trivial task for intelligent systems. Why couldn't these same systems be used to identify and target known combatants? Eventually, such systems will be able to target and neutralize threats far better than we can today while reducing unnecessary or unintended casualties. As such, we have a moral obligation to pursue them and, arguably more importantly, to secure them.
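
To make that point concrete, consider how little code off-the-shelf image recognition now requires. The following is only a minimal sketch, assuming the Keras deep-learning library with a pretrained ImageNet model is installed; the filename street_scene.jpg is a hypothetical stand-in for any photo.

import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

# Load a network pretrained on the ImageNet dataset (about a thousand object categories).
model = ResNet50(weights="imagenet")

# "street_scene.jpg" is a hypothetical local photo used only for illustration.
img = image.load_img("street_scene.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the three object labels the network considers most likely for the photo.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")

A decade ago this would have been a research project; today it is a few lines of freely available code, which is exactly why the security of such systems deserves attention now.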


Ultimately, given how important artificial intelligence (AI) is to the cybersecurity profession and how closely the two fields are related, we must ensure we understand the technical capabilities and limitations of AI so that we can contribute meaningfully to discussions about it. People are looking to us to be experts in these types of systems and, more specifically, in their security. Let's focus on getting this right so we can be known as the community that was part of the solution, instead of the community that let Skynet happen because we thought it never would.



About the Author

Paul Jordan is the founder of the St. Louis chapter of the Military Cyber Professionals Association (MCPA) and the current chief of MCPA Chapter Operations. He holds an MS in Computer Science from the Air Force Institute of Technology (AFIT) and currently works as a cyberspace operations officer for the Air Force.








References

[1] J. Thomson, “Double effect, triple effect and the trolley problem: Squaring the circle in looping cases,” Yale Law Journal, vol. 94, no. 6, pp. 1395–1415, 1985.

End Notes


1. http://www.cdc.gov/nchs/data/hus/hus15.pdf

2. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115

3. https://www.documentcloud.org/documents/2094029-report-0515.html

4. http://www.bls.gov/cps/cpsaat18.htm

5. http://www.alltrucking.com/faq/truck-drivers-in-the-usa/

6. https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

7. https://www.theguardian.com/technology/2016/sep/20/tesla-model-s-chinese-hack-remote-control-brakes

8. http://karpathy.github.io/2015/05/21/rnn-effectiveness/


Photo credits (in order of appearance): Google, dronewars.net