European nations may be hesitant to trust AI for cybersecurity
WASHINGTON – When U.S. leaders talk about the promise of artificial intelligence, one application they regularly discuss is cybersecurity. But experts say European countries have so far proven more measured in their approach to AI, fearing the technology is not yet reliable enough to replace human analysts.
Consider France, which, along with the United Kingdom and Germany, has become one of Europe’s AI hubs. According to a report by France Digitale, an organization that advocates for start-ups in France, French startups’ use of AI grew 38 percent over the previous year.
But AI has been slower to advance in the defense sector in some European countries. That’s in part because the systems need large amounts of data to be reliable, according to Nicolas Arpagian, vice president of strategy and public affairs at Orange Cyberdefense, a France-based company that works with Europol and other cybersecurity firms to build strategic and technological countermeasures against cyberattacks.
“It’s very difficult to know what the data can be used for, and if you let the computer or if you let the algorithm take decisions [to prevent cyberattacks], and that’s a false positive, you won’t be able to intervene early enough to stop decisions that were taken on the basis of this [erroneous] data detected by the algorithm,” he said.
Orange Cyberdefense’s approach is to train human analysts to detect the behavioral patterns hackers reveal. The company relies on artificial intelligence only as an assistant, keeping humans in the lead role.
“You need the analyst, the human being, the human brain and the human experience to deal with and to understand a changing situation,” Arpagian said.
At the same time, pressure from Russia, China and other adversaries in the AI market has pushed the United States to designate more resources for the development of the technology in the defense sector, according to a 2019 Congressional Research Service report. In recent years, China has focused on the development of advanced AI to make faster and well-informed decisions about attacks, the report found. Russia has focused on robotics, although it’s also active in the use of AI in the defense sector.
Moving to use AI in U.S. cybersecurity ops
In February, the Department of Defense adopted five principles to ensure the ethical use of the technology. Secretary of Defense Mark Esper said the United States and its allies must accelerate the adoption of AI and “lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order.”
In September, Lt. Gen. Jack Shanahan, the director of the Joint Artificial Intelligence Center, said the center’s mission was to accelerate the Pentagon’s adoption and integration of AI in cybersecurity and battlefield operations.
“We are seeing initial momentum across the department in terms of fielding AI-enabled capabilities,” he said on a call with reporters. “It is difficult work, yet it is critically important work. It demands the right combination of tactical urgency and strategic patience.”
The Pentagon has taken first steps to expand the use of AI and machine learning in its operations, as implementing AI in cybersecurity operations “is essential for protecting the security of our nation,” according to the department’s formal artificial intelligence strategy released in 2019. The technology will be incorporated to reduce inefficiencies from manual, data-focused tasks and shift human resources to higher-level reasoning in cybersecurity operations, the strategy laid out.
Artificial intelligence can play a key role in identifying unknown attacks, since human analysts normally know enough about recurring threats to accurately detect cyber risks such as evasion techniques and malware behaviors, said Shimon Oren, head of cybersecurity and threat research at Deep Instinct, an American company that uses AI and deep-learning technology to prevent and detect malware used in cyberattacks.
Oren said artificial intelligence and deep-learning technology are crucial for training systems to make decisions and draw conclusions about new threat scenarios presented to them after training. The technology will free human analysts to do the type of work computers “absolutely cannot do,” he said.
For example, the U.S. intelligence community is looking to fully automate well-defined AI processes, as AI systems can perform tasks “significantly beyond what was possible only recently, and in some cases, even beyond what humans can achieve,” according to the 2019 Augmenting Intelligence using Machines Initiative.