Organizations, particularly their security and compliance teams, are becoming increasingly aware that generative artificial intelligence (AI) brings unique risks and threats that are not yet well understood. Many aspects of these technologies remain unknown.

Techopedia spoke with experts to understand how security professionals are adapting, how executives should work to close this knowledge gap, and whether AI frameworks and laws are helping them. These conversations led to some interesting conclusions. First, security teams are rapidly adopting generative AI tools, but not everyone fully understands the risks involved. This knowledge gap creates vulnerabilities that cybercriminals can exploit.

A Splunk study found that 91% of security teams are using GenAI, but only 70% understand the implications.

The analysis also uncovered "shadow IT" in the form of undeclared generative AI use, along with the opaque nature of AI algorithms, both of which make it difficult for security teams to assess risk and trust AI results.

Kevin Breen, Senior Director of Cyber Threat Research at Immersive Labs, has a clear opinion on why teams do not fully understand this technology.

"The rapid adoption of generative AI in organizations is causing security and risk teams to struggle to keep up," he says.

Breen points out that GenAI features are often enabled in online services without prior notice, meaning an organization's data may be processed by a provider such as OpenAI or Anthropic that was never declared as a data processor.

In addition, Breen believes that there is a general lack of education and understanding about how GenAI works and how data is processed.

Cache Merrill, CTO of Zibtek, argues that the main problem for security experts is usually the complexity and opacity of AI algorithms.

He mentions that the "black box" nature of AI systems makes it difficult for security teams to trust or fully understand the AI's risk assessments and decisions.

The hype around AI, and companies' eagerness to slap an AI label on their products to ride the wave, are not helping either. Erich Kron of KnowBe4 explains that much of the blame lies with marketing departments, which are pushing to integrate as many AI-related features into products as possible.

"The overuse of the buzzword 'AI' makes it very difficult for security professionals to understand what is truly useful and what is a gimmick," he explains.

"This pressure from vendors often leads to products being launched without sufficient testing because they want to keep up with the competition," he adds.

Ryan Smith, cybersecurity and AI expert and founder of QFunction, says that some questions still need to be asked and answered, such as whether and how AI will actually make security professionals' jobs easier.

"Until these questions are asked and answered, cybersecurity professionals will not fully understand the implications," he explains.


What executives should do

Faced with this new landscape, executives must choose among the following options: upskilling existing employees, hiring new AI experts (which is costly and complex), or outsourcing certain projects. Experts' opinions on which path to take vary.

Merrill of Zibtek says that upskilling the current team is "critical," but that acquiring truly specialized knowledge is best done by hiring new talent with experience in artificial intelligence and machine learning.

Kron also advocates improving the skills of employees and ensuring that "plans are in place to train and upskill employees as more AI-based tools and capabilities are introduced."


Lack of guidelines

The Splunk report emphasizes that AI guidelines are still uncharted territory. Although frameworks such as the NIST AI Risk Management Framework (RMF) and regulations such as the EU AI Act exist to manage the risks associated with AI, many teams are unaware of them. Furthermore, 34% of organizations have no GenAI policies at all.

For another 45%, "better alignment with compliance requirements" is a key area for improvement.