Avascent First Person: Dr. Linton Wells II on the Intersection of Artificial Intelligence, Machine Learning & Cybersecurity
About Dr. Linton Wells II
Avascent advisor Dr. Wells has more than 20 years of senior civilian leadership experience with the U.S. government in national security affairs, including service as acting Assistant Secretary of Defense for Networks and Information Integration and Department of Defense (DoD) Chief Information Officer (CIO). Other executive positions have been related to Command, Control, Communications and Intelligence (C3I) and the interface between decision-making and technology. From 2010 to 2014 he led the Center for Technology and National Security Policy, a research center at the U.S. National Defense University. He also was a career naval officer. He completed 51 years with DoD in 2014 and is Executive Advisor to the Center of Excellence in C4I and Cyber at George Mason University (GMU), as well as Managing Partner of Wells Analytics LLC, linking technology, strategy and decision-making.
You are a regular at the Black Hat and DEF CON conferences. What did you see this year?
While it often seems like there’s little reason for hopefulness about cybersecurity, there was an important assertion this year related to DARPA’s Cyber Grand Challenge (CGC) that could make a real difference.
The CGC involved the extensive use of artificial intelligence (AI) and machine learning (ML), plus a focus on security operations at the binary level in pre-execution environments.
These could be integrated with big data analytics and formal verification of code to offer ways, as DARPA Director Dr. Arati Prabhakar put it, to “Imagine a future with some likelihood of cybersecurity.”
That’s a pretty dramatic statement, but it’s still well in the future and a lot of work will be needed to achieve it.
Black Hat and DEF CON are separate but related events. Black Hat is a computer security conference that provided many useful insights and engagement with industry. DEF CON is a hackers’ conference, and that community brings enormous talent, energy, and out-of-the-box thinking.
Bridging cultural boundaries, translating tribal languages and aligning procedures will be challenging, but the risks posed by insecure interrelationships among increasingly critical functions and infrastructures demand “radically inclusive” outreach.
What were the takeaways from the DARPA Cyber Grand Challenge (CGC) and “capture the flag” (CTF)?
Little that I hear at these conferences makes me sleep better at night, but if there’s one cause for optimism this year it was DARPA’s Cyber Grand Challenge, as mentioned above. It was a $50+ million effort in which seven supercomputers competed against each other.
They identified and fixed code that had never been seen before, and patched vulnerabilities at machine speed without damaging the code. DARPA considers this a first step, not unlike the first self-driving cars in the 2005 Grand Challenge. DARPA also is putting a lot of emphasis on formal verification and has funded code that’s now open source and formally verified.
I had a fascinating discussion with some members of the CGC winning team about cyber wargaming. They stressed the importance of “autonomy” and “counter-autonomy.” Basically, this means that the machines can probe opponents’ machines and launch attacks (autonomously) and also detect probes and attacks on their own machines, analyze those attacks to understand vulnerabilities in the attacking machine, and strike back (counter-autonomy), or choose not to strike back as a matter of tactics/strategy.
The problem is that many present DoD cyber wargames may play attacks via cyber ranges, but almost never play the full autonomy-counter-autonomy engagement that, according to the people I spoke with, is where the future fight will be won or lost. They believe DoD needs to be doing more to train our people to win in this space.
DEF CON traditionally has had a “Capture the Flag” (CTF) contest. The team that won the CTF this year was not the pure machine that had won the CGC, but a human-machine combination, sometimes called a “centaur.” The reasons for this outcome were complicated, but the CTF illustrated the value of human-machine teaming.
What is important to know about this intersection of artificial intelligence, machine learning and cybersecurity?
This is the area where I noted the most change from 2015. This year there was less focus on using AI and ML at user interfaces to show pretty displays and much more focus on the application of AI and ML to run-time environments, as well as to security oversight.
AI and ML can be used to examine code very quickly to see if it should be allowed to run, and then apply light anti-virus functions or similar protection if the code is executed. Stepping back, it’s clear security in general will have to become more automated, yet there’s a serious shortage of human talent that is only getting worse.
Automation will have to fill in the gaps. Once a zero-day exploit is discovered it can take only a few minutes for attackers to check worldwide databases for targets that might be vulnerable and begin operations. Without AI and ML, network defenders can’t possibly keep up.
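As a purely illustrative sketch of pre-execution screening, the snippet below flags binaries whose byte entropy is unusually high, a common heuristic for packed or encrypted payloads that deserve deeper analysis before being allowed to run. The threshold value is an assumption for illustration, not drawn from any production tool, and real ML-based screeners use far richer features than entropy alone.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious(data: bytes, threshold: float = 7.2) -> bool:
    """Packed or encrypted payloads tend toward maximum entropy (8 bits/byte);
    anything above the (hypothetical) threshold is routed to deeper analysis
    instead of being executed directly."""
    return byte_entropy(data) > threshold
```

A screener like this runs in microseconds per file, which is the point: the decision to allow, sandbox, or block has to happen at machine speed, before execution, with humans reviewing only the flagged residue.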
At the same time, the caveat remains that today’s deep learning algorithms themselves are vulnerable to tampering and misdirection, as Clarence Chio noted in a recent paper on the subject.
Speed is crucial here also. An effective defensive approach is to update code faster than opponents can reverse engineer it. Companies may not be able to control attacks, but they can affect the time to remediate.
Aggressive companies can remediate in two days. Such metrics on a dashboard can incentivize people to be fast, which aligns with CEOs’ general interest in speed: speed to market, speed to react to attacks, and so on. Improved security gives companies the confidence to take more risk.
Infrastructure vulnerability to hacking seems to be a recurring theme, was it this year?
There was little change in last year’s assessment that the general security architecture for ICS/SCADA is not secure, even though such systems still are being used for critical functions. The problem is that most of these systems were never designed to be connected to the Internet.
There are many access channels and pervasive ways to cause damage. You don’t have to look too hard to find components of industrial systems that are still running Windows XP.
The multi-faceted cybersecurity challenges for cities and industrial infrastructures were also highlighted. This year provided more information on human-machine interface (HMI) vulnerabilities in ICS/SCADA environments.
What about hacking cars, particularly driverless ones?
In addition to updates to past briefings on hacking cars, this year added a new talk on hacking autonomous vehicles, “Can You Trust Autonomous Vehicles?” It primarily addressed vulnerabilities of sensors — including ultrasound, millimeter-wave sensors (some at 76-78 GHz), cameras, etc. — and autopilots.
Countermeasures will involve fail-safe modes for sensors, anomaly detection, sensor redundancy and data fusion. The conclusion is that attacks on autonomous vehicles are feasible but the sky is not falling.
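The redundancy-plus-anomaly-detection idea can be sketched in a few lines. In this hypothetical example, redundant range sensors are fused only when they agree within a spread threshold; excessive disagreement is treated as possible spoofing or jamming and the controller falls back to a fail-safe mode. The threshold and the fusion rule are illustrative assumptions, not taken from any real vehicle.

```python
from statistics import median

def fused_range(readings, max_spread=0.5):
    """Fuse redundant range-sensor readings (metres).

    If the sensors disagree beyond max_spread, treat the channel as
    faulty or under attack (e.g., ultrasound spoofing) and return None
    so the controller can enter a fail-safe mode. Thresholds here are
    purely illustrative."""
    if max(readings) - min(readings) > max_spread:
        return None          # anomaly: possible spoofing or jamming
    return median(readings)  # otherwise trust the sensor consensus
```

The design choice is that no single sensor is trusted: an attacker who can fool one modality (say, a camera) still has to defeat the cross-check against the others to alter the fused estimate.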
Design matters. Most cars have two networks, with one for controls and the other for entertainment systems, yet the digital barrier between them often is of questionable strength. Tesla last year demonstrated one of the best mobile security systems yet seen with physical separation between the two networks, a cryptographically secure bridge between them, and a way to issue patches rapidly “over the air.”
In terms of autonomous vehicles, the next steps will involve more experimentation with moving cars, as Uber is doing now, and data analysis. Three recommendations continue to apply to any vehicle, be it land, sea, air or space: Isolate vehicle control and other key systems, harden each critical component individually, and have a quick way to provide updates without a dealership visit.
Of course it’s more than cars that are in the crosshairs: Other talks addressed vulnerabilities of transport in smart cities, security flaws in airline-related avionics, and hacking drones.
What are “chatbots,” and why are they important?
Chatbots, or “conversational user interfaces” (CUI), are growing in importance, will be powerful tools in many areas, and doubtless will figure heavily in future conferences. As an example of their potential power, DoNotPay.co.uk is a “robot lawyer” designed by a British teenager, which has helped overturn over 160,000 parking tickets in London and New York. It is now tackling homelessness.
Based on a user’s answers, the chatbot can help an individual apply for assistance or shelter. It is not a panacea but, when combined with the likelihood of worldwide 4G internet in a few years, such approaches offer exceptional opportunities for people, both literate and illiterate, to interact with knowledge, learn, and engage with heretofore impenetrable bureaucracies. The flip side is this technology can and will be a factor in future conflicts.
From what you saw at the conferences, is institutional cybersecurity ahead, pacing or lagging the threat? Particularly with the development of the Internet of Things (IoT) and increasing reliance by government and industry on cloud computing?
The rollout of IoT is well underway, yet there’s no demand for security in the marketplace. In some respects it reminds me of the financial markets in 2005-6 when there was lots of money to be made in derivatives and subprime mortgages, and almost no one understood the risks.
There seems to be much the same state of denial in IoT-related risk analyses. Moreover, we’re entering an era when nearly everything we engage with, including the human body itself, is going to be a platform with multiple IP addresses, and more will be added each year.
So the context of cybersecurity is going to change from defined environments to a pervasive one. At DEF CON, many talks have shown how to target security devices like door locks that are now being connected to the Internet. Despite such demonstrations there’s no sign that security is pacing adoption.
One area that received a fair bit of attention was the role of “hypervisors” that manage virtual servers in the cloud. They are the key to the efficiency that such networks offer users, but if an intruder can “own” a hypervisor they can control an entire cloud or software-defined network.
This puts a burden on an organization to have highly skilled managers who understand how physical systems and networks interact. An example would be an expert who understands both how a generator or a radar system works and the internet-connected switches that tie it to the cloud.
Such people are very hard to find and are only going to become more important to have on your team.
The military significance of cyber being integrated with social media and unconventional operations is increasingly apparent, as Russia is showing. What might we see next?
In Ukraine, a sophisticated, ongoing campaign involving cyber, media, economic and counter-infrastructure attacks points to the future for hybrid civil-military operations. The challenge for the US and NATO is that the center of gravity in this conflict is not Europe’s capitals or military installations. It’s the living rooms and mobile devices of the citizens of the nations in the Alliance.
Undermining trust and confidence in existing political systems at the citizen level will be a goal for any nation or alliance targeting Western interests. The ways to do this with precision are going to expand at the same time that personalized messaging and information operations will be enhanced through the use of AI and ML to influence and shape social media content within larger multi-domain campaigns.
At the same time, the responses must integrate people, organizations, processes and technology—technology alone is never enough. The fact that these challenges stress the seams between policy, technology, sociology, and economics only makes them harder to deal with, and increases the importance of developing tools to address them in government and through public-private partnerships.