A report reveals that while many cybersecurity professionals are exploring generative AI tools, significant concerns about security risks persist.

A recent CrowdStrike report highlights an ongoing debate within the cybersecurity sector over the adoption of generative AI amid concerns about its security implications. Conducted in 2024, the survey gathered insights from 1,022 security practitioners and researchers across the U.S., APAC, and EMEA. The findings reveal a cautious approach towards generative AI among cybersecurity professionals: only 39% believe the benefits of these technologies outweigh the potential risks.

The report indicates that while a significant 64% of respondents have either procured or are researching generative AI tools for their organisations, a mere 6% are actively using them. The predominant motivation for exploring these tools is their potential to improve responses to and defences against cyberattacks, rather than addressing talent shortages or responding to management pressure. A further 40% view the rewards and risks of generative AI as roughly "comparable."

Commenting on the findings, CrowdStrike noted that "security teams want to deploy GenAI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding and eliminate the complexity of integrating new point solutions." However, the difficulty of measuring return on investment (ROI) remains a major concern for many professionals in the field. The study grouped approaches to evaluating AI ROI into four main priorities: the foremost, cost optimisation through platform consolidation, was cited by 31% of respondents, closely followed by reduced security incidents at 30%.

Concerns about the security of generative AI tools themselves featured prominently among survey participants. Notable apprehensions included data exposure to large language models (LLMs), attacks targeting generative AI platforms, the lack of regulatory frameworks, AI hallucinations, and inadequate controls within AI systems. Almost 90% of participants said their organisations are either implementing new security policies for generative AI or will formulate such policies within the next year.

Despite these concerns, generative AI is seen as advantageous for several cybersecurity functions, including threat detection and analysis, automated incident response, phishing detection, enhanced security analytics, and the generation of synthetic data for training. Organisations are nonetheless advised to treat safety and privacy measures as essential components of any generative AI integration: such safeguards protect sensitive information, support compliance with existing regulations, and mitigate risks such as data breaches or misuse. Without them, AI tools could inadvertently expose vulnerabilities or produce harmful output, with significant financial, legal, and reputational repercussions.
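To make the data-exposure safeguard concrete, the sketch below redacts likely-sensitive substrings from a log excerpt before it is placed into an LLM prompt. It is a minimal Python illustration, not drawn from the CrowdStrike report: the REDACTION_PATTERNS, redact, and build_prompt names are hypothetical, and a real deployment would use a vetted PII/secret-detection library and patterns tuned to the organisation's data.

```python
import re

# Illustrative patterns only (an assumption for this sketch); production
# systems should rely on dedicated PII/secret-detection tooling.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labelled placeholders
    before the text leaves the organisation's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

def build_prompt(analyst_question: str, log_excerpt: str) -> str:
    """Compose an LLM prompt from a redacted log excerpt."""
    return (
        "You are assisting a security analyst.\n"
        f"Question: {analyst_question}\n"
        f"Log excerpt:\n{redact(log_excerpt)}"
    )

if __name__ == "__main__":
    sample_log = (
        "2024-05-01 10:02:11 login failure for alice@example.com "
        "from 203.0.113.45 using token sk-a1b2c3d4e5f6a7b8c9d0"
    )
    print(build_prompt("Does this log suggest credential stuffing?", sample_log))
```

The point of the design is that redaction happens on the organisation's side of the trust boundary, so sensitive values never reach the model provider even if prompts are logged or retained.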

The ongoing exploration of generative AI within cybersecurity highlights a complex landscape, as professionals weigh its benefits against the myriad risks that accompany its deployment.

Source: Noah Wire Services
