Artificial intelligence and machine learning are transforming endpoint security and, as a result, the way security teams operate. However, security pros shouldn’t worry about being automated out of a job: organizations still require skilled people to defend against advanced adversaries. Today’s organizations have increasingly complex IT infrastructure, a greater reliance on cloud services and a larger remote workforce, all of which expand their attack surface. Technologies such as extended detection and response (XDR) and security orchestration, automation and response (SOAR) appeal to organizations that lack the staff to address a relentless wave of security alerts. These autonomous technologies can scan and prioritize events, automate repetitive tasks so analysts can focus on the work that matters most, and identify and organize newly detected threats. These tools have given defenders a significant boost in the fight against adversaries. The continued adoption of AI-powered tools stems from their ability to deliver far greater speed and scale in analyzing data and making decisions than manual analysis alone.
The combined power of machine and human intelligence
While it’s tough to predict the future, it’s unlikely AI will entirely replace security practitioners. As capable and effective as these tools are, they are not the sole solution to enterprise security issues. A combination of machine and human intelligence is essential to meet the security challenges businesses face. Rather than being pushed out by technology, security practitioners will work hand-in-hand with AI to defend against advanced threats.
However, while they won’t fully replace people, AI-powered tools will likely reshape the nature of today’s security jobs. Consider the job of a level-one analyst who spends much of their time evaluating security alerts and investigating potential issues. With AI, more data and more context flow into automated analysis, resulting in higher-fidelity alerts and less alert fatigue. Analysts can spend more time on higher-priority events, especially in ambiguous, hard-to-judge situations. This lets organizations focus human expertise where it matters most.
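To make that concrete, here is a minimal, hypothetical Python sketch of using model confidence scores to triage an alert queue so analysts see the highest-risk events first. The alert fields, scores and threshold are invented for illustration and do not reflect any vendor’s product or API.

```python
# Hypothetical sketch: rank an alert queue by model score so analysts
# review the highest-risk events first. All values are illustrative.
alerts = [
    {"id": "A-101", "host": "web-01",  "model_score": 0.32},
    {"id": "A-102", "host": "hr-lt-7", "model_score": 0.97},
    {"id": "A-103", "host": "db-02",   "model_score": 0.05},
    {"id": "A-104", "host": "dev-3",   "model_score": 0.88},
]

# Suppress low-confidence noise, then surface the rest in priority order.
triaged = sorted(
    (a for a in alerts if a["model_score"] >= 0.25),
    key=lambda a: a["model_score"],
    reverse=True,
)

for alert in triaged:
    print(f'{alert["id"]} on {alert["host"]}: score {alert["model_score"]:.2f}')
```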
In some ways, autonomous tools will require new skills from security pros. AI technology relies on data to effectively automate tasks and identify threats. Not all of this data is created equal, and there is one type of data to which humans can contribute directly. We call it ground truth: data that describes how we want an AI model to behave for a given input, or the “target” used to train the model. Human insights are an invaluable source of ground truth that lets AI models learn from human expertise.
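As an illustration, the following hypothetical sketch treats analyst verdicts as the ground-truth labels used to train a simple classifier. The feature values and the use of scikit-learn are assumptions made purely for demonstration, not a description of any specific product.

```python
# Hypothetical sketch: analyst verdicts serve as ground truth -- the
# "target" a detection model is trained to reproduce.
from sklearn.ensemble import RandomForestClassifier

# Each row: illustrative features extracted from an observed event
# (e.g., payload entropy, count of suspicious API calls, anomaly score).
events = [
    [7.2, 14, 0.91],
    [3.1,  0, 0.05],
    [6.8,  9, 0.77],
    [2.9,  1, 0.10],
]

# Ground truth: the analyst's verdict for each event
# (1 = malicious, 0 = benign).
analyst_verdicts = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(events, analyst_verdicts)

# On a new, unlabeled event, the model generalizes the analysts' judgment.
print(model.predict_proba([[6.5, 11, 0.84]]))
```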
The ability of autonomous technology to make decisions depends on the amount and quality of the data it processes, as well as its ability to learn and improve. Security pros who can train the algorithms to better look for, detect, and react to security events will likely find these skills in high demand. Human expertise becomes essential here to maximize true positive detections while keeping false positives to a minimum. As far as AI technologies have come, the industry still has plenty of room for improvement in driving model efficacy.
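One way to reason about that tradeoff is to measure it. The short sketch below computes precision and recall from hypothetical analyst-confirmed labels and model verdicts; the numbers are invented solely to illustrate how efficacy can be quantified.

```python
# Minimal sketch: quantifying detection efficacy with precision and recall.
# "labels" are hypothetical analyst-confirmed outcomes; "predictions" are
# the model's verdicts on the same events.
labels      = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # 1 = confirmed malicious
predictions = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]   # model output

tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)

precision = tp / (tp + fp)   # how many alerts were real threats
recall    = tp / (tp + fn)   # how many real threats were caught

print(f"precision={precision:.2f} recall={recall:.2f}")
```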
Further, while AI technology can process large amounts of data and automate repetitive tasks, it’s no replacement for human insight or experience in the field. Companies will need people to do threat hunting and investigate security threats; people who can analyze data, derive insights, and make decisions on how to best respond. Autonomous tools may not detect new threats concocted by well-resourced and motivated adversaries that fall outside the scope of data they’ve ingested. In these cases, companies will need skilled humans to analyze potential threat activity, make decisions accordingly, and ensure these new insights are added to the corpus of information that AI learns from. Well-designed AI systems constantly learn and improve, and this process relies on the ability to introduce new human-derived insights into the underlying corpus of knowledge.
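A rough sketch of that human-in-the-loop cycle might look like the snippet below, in which a hunter-confirmed verdict is appended to the training corpus and the model is retrained. The data, feature layout and choice of scikit-learn are illustrative assumptions only.

```python
# Illustrative sketch of a human-in-the-loop feedback cycle: when hunters
# confirm a novel threat the model missed, their verdict is added to the
# training corpus and the model is retrained. All values are hypothetical.
from sklearn.linear_model import LogisticRegression

corpus_features = [[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]]
corpus_labels   = [0, 1, 0, 1]   # 1 = malicious, 0 = benign

def retrain(features, labels):
    model = LogisticRegression()
    model.fit(features, labels)
    return model

model = retrain(corpus_features, corpus_labels)

# A threat hunter investigates an event the model scored as benign and
# confirms it is malicious -- a new human-derived insight.
novel_event, hunter_verdict = [0.4, 0.95], 1

# Fold the insight back into the corpus so the model learns from it.
corpus_features.append(novel_event)
corpus_labels.append(hunter_verdict)
model = retrain(corpus_features, corpus_labels)
```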
Organizations will also need security practitioners to help defend against adversaries who use AI to defeat AI in increasingly sophisticated attacks. AI has its own attack surface: the MITRE ATLAS framework, modeled after the popular ATT&CK framework for adversary techniques, catalogs the ways attackers target AI systems themselves. The reliance on AI as a pivotal tool for working with increasingly large and complex datasets makes it imperative that humans don’t turn a blind eye to the inherent limitations of AI or place unconditional trust in its performance.
Prepare for an AI-focused future
As technology continues to evolve, security practitioners must adjust as needed. We see this in the case of AI-powered tools, which are already changing the way security teams operate by delivering capabilities needed to do their jobs more effectively and stay a step ahead of the adversaries.
But as powerful as technology becomes, it’s only one part of a strong defense. Human expertise plays a critical role in this AI-powered future. People are still needed to manage the technology, analyze data, derive insights, make strategic decisions and train products to learn and adapt over time. AI may never fully replace humans, but it can let them create a stronger defense against increasingly capable adversaries.
About the author:
Sven Krasser, Senior Vice President and Chief Data Scientist at CrowdStrike.