The Pace Of AI Innovation For Cybersecurity Is Fast And Furious

In a recent edition of Forbes, the growth of AI in the cybersecurity sector and its seemingly limitless capabilities are discussed.

David Schiffer is the CEO of RevBits and formerly of Safe Banking Systems (SBS). RevBits develops cybersecurity software for organizations.

Given the speed and scope of digital transformation and related technologies, our vision of what these innovations can achieve encompasses both what is possible today and what may be possible tomorrow. Artificial intelligence (AI) has become a particularly hot topic for its potential application in numerous industries, including cybersecurity. In the cybersecurity context, however, some widely touted capabilities are associated with the idea of AI but have not actually been realized as yet.

AI as a fully autonomous entity that self-learns and self-directs with human-like ability in order to combat the wiles of cybercrime is not, at this point, a truly realized dream. Fueled by the creative imaginings of sci-fi literature, cinema and visionary entrepreneurs, we have come to associate AI with startling—and even alarming—humanoid functionality. A weekend spent watching the movies Ex Machina, Upgrade or M3GAN could have you running to disconnect your Alexa and reconnect a landline phone.

However, within the broad scope of AI, the real applications of machine learning (ML) and deep learning (DL) algorithm models are offering significant benefits to bolster enterprise cybersecurity.

What are the current use cases for AI in cybersecurity?

The most relevant disciplines of AI currently used in cybersecurity are ML and its subfield, DL. These subdomain AI technologies can parse through massive datasets to analyze relationships between previously detected threat patterns and new threats, providing descriptive, prescriptive and predictive guidance.

ML is a subset of AI that uses algorithms to analyze vast quantities of data and learn to detect patterns that can signify cyber threats. These algorithms can be trained to detect various types of malware, flag anomalies in network traffic, conduct user and entity behavior analysis (UEBA) and provide real-time threat intelligence. ML still requires human assistance, however: engineers intercede and make adjustments when algorithms return inaccurate information or predictions.
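
To make the idea concrete, here is a minimal sketch of ML-based anomaly detection on network traffic, assuming scikit-learn's IsolationForest and synthetic flow features (bytes, packets, duration). The feature names and numbers are illustrative, not drawn from any particular product.

```python
# A minimal sketch of ML-based network anomaly detection.
# Assumes scikit-learn; all traffic data below is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, packets, duration_seconds]
normal = rng.normal(loc=[5_000, 40, 2.0], scale=[1_000, 8, 0.5], size=(500, 3))

# A few simulated exfiltration-like flows: large transfers, long durations
suspicious = rng.normal(loc=[80_000, 600, 30.0], scale=[5_000, 50, 3.0], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)  # learn the shape of normal traffic only

# predict() returns -1 for an anomaly, 1 for normal traffic
for flow, label in zip(suspicious, model.predict(suspicious)):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: bytes={flow[0]:.0f} packets={flow[1]:.0f} duration={flow[2]:.1f}s")
```

Production systems would train on far richer telemetry and route alerts to human analysts, consistent with the point above that ML still needs engineers in the loop.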

DL goes a step further, with algorithms built on deep neural networks that "think" more like a human. Its layered networks can learn and make decisions without human assistance, adjusting themselves based on the data sources they've been exposed to and trained on. DL systems continually adapt and analyze data with an apparently human-like reasoning structure to assist in drawing conclusions for real-time threat intelligence, and they can perform endpoint detection and response (EDR) with greater accuracy than ML and with far fewer false alerts.
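
As a rough illustration of the layered approach, here is a minimal sketch of a neural-network classifier for endpoint telemetry, using scikit-learn's MLPClassifier as a small stand-in for a production deep learning system. The features and labels are synthetic and purely illustrative.

```python
# A minimal sketch of a neural-network classifier for endpoint events.
# MLPClassifier is a small stand-in for a production deep learning model;
# all telemetry below is synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic endpoint telemetry: [api_calls_per_min, files_touched, cpu_pct]
benign = rng.normal([50, 10, 15], [10, 3, 5], size=(400, 3))
malicious = rng.normal([300, 120, 70], [40, 20, 10], size=(400, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 400 + [1] * 400)  # 0 = benign, 1 = malicious
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers stand in for the stacked layers described above
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"Holdout accuracy: {clf.score(X_test, y_test):.2%}")
```

Real EDR models are trained on far richer signals and retrained continuously as new threats emerge; the point of the sketch is simply how layered networks learn decision boundaries from labeled examples.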

Hackers are capitalizing on the benefits of AI.

While advancements in AI are benefiting defenders, they're also helping attackers. Hackers are using sophisticated AI technology to analyze computer systems and discern weak points in software and programs that present exploitation opportunities.

Of major concern is the ability of cybercriminals to launch AI-generated phishing email campaigns. These emails are more convincing, and hence more frequently opened, than previous email scams. Skilled hackers can use the same AI tools that bolster cyber protections against malware to create polymorphic malware that evades detection by constantly changing its characteristics.

This ability to write intelligent, human-like communications and scripts holds astonishing current and future potential, but it also presents challenges for security, privacy, data accuracy and legality. In November 2022, OpenAI released its AI chatbot, ChatGPT. In addition to mimicking human conversation, ChatGPT can author computer programs and emails; compose music, stories and poetry; and simulate myriad processes. ChatGPT is built on large language models trained primarily on data gleaned from public sources.

Concerns surrounding ChatGPT and its capabilities are mounting. Because it is trained on massive datasets from the open web that cannot be fully sanitized, it can generate misinformation, inaccurate and potentially harmful content, and distorted perceptions. Its ability to author very realistic writing has implications (good and bad) for the educational, financial and healthcare industries, to name a few.

How do we keep pace with advancements in AI?

Organizations and consumers alike can appreciate the almost limitless possibilities and positive impacts of AI as it evolves. However, there is equal—and, perhaps, greater—concern revolving around maintaining control, security and our humanity before it all becomes a runaway train. The potential advantages for the fields of medicine, climate science, manufacturing and education must be weighed against global consumer safety.

An Acumen Research and Consulting report (via CNBC) estimates that the global market for AI-based cybersecurity products will reach $133.8 billion by 2030, up from $14.9 billion in 2021. There is no question that AI innovations will continue to evolve rapidly, and they hold tremendous promise for helping humanity solve some of our great problems and enhance the quality of life. It is also feared that AI could cause great disruption and harm while eroding human privacy and rights.
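
For context, those figures imply a compound annual growth rate of just under 28%. A quick back-of-the-envelope check, assuming the report's 2021 baseline and 2030 endpoint:

```python
# Implied compound annual growth rate (CAGR) for the Acumen estimate,
# assuming a 2021 baseline of $14.9B and a 2030 projection of $133.8B.
start, end = 14.9, 133.8  # market size, USD billions
years = 2030 - 2021       # nine-year horizon

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 27.6%
```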

In March 2023, Elon Musk joined more than 1,000 industry experts in calling for a six-month pause in AI development. The group, which includes influential tech leaders, expressed the need for more time to put sufficient regulatory policies and boundaries in place to rein in an "out of control" global race for AI technologies. Google CEO Sundar Pichai has voiced concern about whether society is ready for the rapid advancement of AI while proclaiming that it will "touch everything: every sector, every industry, every aspect of our lives."

There has been an increased outcry for more thorough auditing, careful certification systems and regulatory authorities with FDA-like oversight of AI development. In response, the National Institute of Standards and Technology (NIST) is taking measures to increase the safety and trustworthiness of AI technologies. On January 26, 2023, NIST released the AI Risk Management Framework (AI RMF), followed on March 30 by the Trustworthy and Responsible AI Resource Center, which will support implementation of, and international alignment with, the AI RMF.

The future outlook of AI in cybersecurity and all sectors seems limitless. As with any historical transformational era, it holds the potential for good and bad equally. How we prosper while retaining our values remains to be seen.
