A recent analysis from JFrog reveals significant security flaws in widely used machine learning frameworks, highlighting the urgent need for improved security measures to prevent data breaches and operational disruption.
Recent analysis from JFrog has underscored significant security vulnerabilities within popular machine learning (ML) frameworks, revealing that ML software is more susceptible to attack than older, more established categories of software such as DevOps tooling or web servers. The assessment is timely: as the use of machine learning grows across sectors, effective security measures are needed to prevent data breaches and operational disruption.
The report identifies MLflow as particularly vulnerable, noting that it is one of 15 open-source ML projects in which JFrog recorded a total of 22 vulnerabilities. The findings draw attention to severe risks in server-side components and to the potential for privilege escalation within these frameworks.
One notable vulnerability involves Weave, a toolkit developed by Weights & Biases (W&B) that is widely used for tracking and visualising ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) permits low-privileged users to read arbitrary files across the filesystem owing to inadequate input validation. Attackers can exploit the flaw to uncover sensitive information, including admin API keys, potentially leading to unauthorised privilege escalation.
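Directory traversal flaws of this kind generally share the same shape: untrusted input is joined onto a server-side base path without being checked. The Python sketch below illustrates the vulnerability class and a common fix; it is not Weave's actual code, and the FILES_ROOT path and function names are hypothetical.

```python
from pathlib import Path

FILES_ROOT = Path("/srv/app/files")  # hypothetical storage root

def read_user_file(requested: str) -> bytes:
    """Vulnerable: joins untrusted input straight onto the root."""
    # A request for "../../home/admin/.netrc" walks out of FILES_ROOT,
    # which is how traversal bugs end up exposing secrets like API keys.
    return (FILES_ROOT / requested).read_bytes()

def read_user_file_checked(requested: str) -> bytes:
    """Safer: resolve the path and confirm it stays under the root."""
    target = (FILES_ROOT / requested).resolve()
    if not target.is_relative_to(FILES_ROOT.resolve()):
        raise PermissionError("requested path escapes the storage root")
    return target.read_bytes()
```

The key design point is that the check happens after `resolve()`, so `..` segments and symlinks are collapsed before the containment test runs.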
ZenML, a management tool for MLOps pipelines, also exhibits critical access control vulnerabilities. These flaws allow attackers with limited access to elevate their permissions within ZenML Cloud, the managed version of ZenML, giving them access to restricted information such as confidential secrets or model files. That escalated access could cause significant disruption by letting malicious actors alter ML pipelines or tamper with essential data.
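The underlying failure mode in privilege escalation flaws of this sort is usually an authorisation check that is missing or enforced only in the client. A minimal sketch of the server-side check that should guard such a resource, assuming a hypothetical role model and secret store (this is not ZenML's actual API):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str  # e.g. "viewer" or "admin"

SECRETS = {"db_password": "hunter2"}  # illustrative secret store

def get_secret(user: User, key: str) -> str:
    # Broken access control means this check is absent, or exists only
    # in the UI, so any authenticated user can reach the secret.
    if user.role != "admin":
        raise PermissionError(f"{user.name} is not allowed to read secrets")
    return SECRETS[key]
```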
Additionally, a serious vulnerability has been identified in the Deep Lake database (CVE-2024-6507). This data storage solution, designed for AI applications, fails to sanitise commands properly when importing external datasets, leaving it open to command injection. An attacker could exploit this to execute arbitrary commands, jeopardising both the database and any applications connected to it.
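Command injection bugs of this type typically arise when an external identifier, such as a dataset URL, is interpolated into a shell command string. A short Python sketch of the pattern and its usual remedy (illustrative only, not Deep Lake's code):

```python
import subprocess

def import_dataset_vulnerable(url: str) -> None:
    # Vulnerable: the URL is interpolated into a shell command, so input
    # like "https://x.example; rm -rf /data" smuggles in a second command.
    subprocess.run(f"wget {url} -O /tmp/dataset", shell=True, check=True)

def import_dataset_safe(url: str) -> None:
    # Safer: pass arguments as a list so no shell ever parses the input,
    # and use "--" so a URL beginning with "-" cannot pose as a flag.
    subprocess.run(["wget", "-O", "/tmp/dataset", "--", url], check=True)
```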
Another instance of concern involves Vanna AI, a tool that turns natural language questions into SQL queries. The Vanna.AI Prompt Injection vulnerability (CVE-2024-5565) enables attackers to inject malicious instructions into the prompts the system processes. Such an attack can lead to remote code execution, compromising the integrity of generated visualisations and potentially facilitating SQL injection or data exfiltration.
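Prompt injection escalates into remote code execution whenever model output is executed as code. The sketch below shows the general pattern and one mitigation; it is a hedged illustration of the vulnerability class, not Vanna.AI's implementation, and the chart-spec keys are invented for the example.

```python
import json

def visualise_vulnerable(llm_output: str) -> None:
    # Dangerous: executing model-generated code verbatim turns a prompt
    # injection in the user's question into arbitrary code execution.
    exec(llm_output)  # payload could call os.system(), open sockets, etc.

def visualise_safer(llm_output: str) -> dict:
    # Safer direction: ask the model for a declarative chart spec rather
    # than code, parse it strictly, and render with trusted plotting code.
    spec = json.loads(llm_output)  # fails loudly on anything but JSON
    if not set(spec) <= {"chart_type", "x", "y", "title"}:
        raise ValueError("unexpected keys in chart specification")
    return spec  # hand the validated spec to a real plotting library
```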
Mage.AI, another MLOps tool, is reported to contain a variety of vulnerabilities, including unauthorised shell access and weak path traversal checks, putting users at risk of losing control of data pipelines, leaking files, and having malicious commands executed. The breadth of the vulnerabilities discovered in Mage.AI represents a significant threat to data integrity and security across ML operations.
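A "weak path traversal check" usually means the validation exists but can be bypassed, for instance a substring test that an absolute path slips past. A brief sketch contrasting a bypassable check with a stricter one (the root directory and function names are hypothetical, not Mage.AI's code):

```python
from pathlib import Path

PIPELINES_ROOT = Path("/srv/pipelines")  # hypothetical root directory

def is_safe_weak(user_path: str) -> bool:
    # Weak check: a substring test misses absolute paths such as
    # "/etc/passwd", which contain no ".." yet still escape the root
    # (joining an absolute path in pathlib discards the base entirely).
    return ".." not in user_path

def is_safe_strict(user_path: str) -> bool:
    # Stronger check: resolve symlinks and ".." segments first, then
    # confirm the final location still sits under the intended root.
    target = (PIPELINES_ROOT / user_path).resolve()
    return target.is_relative_to(PIPELINES_ROOT.resolve())
```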
The findings from JFrog draw attention to operational gaps in MLOps security. Many businesses have yet to incorporate AI and ML security practices effectively into their broader cybersecurity strategies, potentially exposing themselves to unrecognised risks. As AI and ML technologies continue to evolve and shape industries, securing the frameworks, datasets, and models that underpin them is becoming increasingly critical.
Source: Noah Wire Services
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Corroborates the JFrog report highlighting critical security flaws in machine learning platforms, including vulnerabilities in Weights & Biases’ Weave toolkit and ZenML Cloud.
- https://www.csoonline.com/article/1293302/frequent-critical-flaws-open-mlflow-users-to-imminent-threats.html – Supports the identification of MLflow as particularly vulnerable with multiple critical vulnerabilities, including those allowing remote code execution and file overwrites.
- https://www.enterprisesecuritytech.com/post/inside-the-hidden-risks-of-machine-learning-vulnerabilities-what-jfrog-s-new-research-unveils-about – Details JFrog’s research on 22 unique vulnerabilities across 15 ML projects, including server-side vulnerabilities in Weights & Biases and ZenML.
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Explains the WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) and its potential for unauthorized privilege escalation.
- https://www.enterprisesecuritytech.com/post/inside-the-hidden-risks-of-machine-learning-vulnerabilities-what-jfrog-s-new-research-unveils-about – Describes the critical access control vulnerabilities in ZenML Cloud, allowing attackers to elevate permissions and access restricted information.
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Discusses the command injection vulnerability in Deep Lake database (CVE-2024-6507) and its implications for executing arbitrary commands.
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Details the Vanna.AI Prompt Injection vulnerability (though the CVE number is not provided in this source) and its potential for remote code execution.
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Mentions the vulnerabilities in Mage.AI, including unauthorized shell access and weak path traversal checks, posing risks to data pipeline control and integrity.
- https://www.csoonline.com/article/1293302/frequent-critical-flaws-open-mlflow-users-to-imminent-threats.html – Highlights the critical vulnerabilities in MLflow, including path traversal and file overwrite flaws, which can lead to remote code execution and system takeover.
- https://www.securityweek.com/critical-vulnerabilities-found-in-ai-ml-open-source-platforms/ – Corroborates the identification of multiple severe vulnerabilities in MLflow, ClearML, and Hugging Face, emphasising the risks associated with server-side components.
- https://www.securityweek.com/critical-vulnerabilities-found-in-ai-ml-open-source-platforms/ – Details the path validation bypass and other critical flaws in MLflow, which could allow attackers to read sensitive files and achieve remote code execution.