
Alert: Vertex AI Flaws Lead To Privilege Escalation Risk

by Wajahat Raja

November 29, 2024 - TuxCare expert team

According to recent media reports, cybersecurity researchers have identified and disclosed two Vertex AI flaws. These flaws in Google’s machine learning (ML) platform, if exploited, could allow threat actors to escalate privileges and then exfiltrate models from the cloud. In this article, we’ll dive into the details of the flaws and the fixes Google has deployed. Let’s begin!

Vertex AI Flaws Overview

Two Vertex AI flaws were recently discovered by cybersecurity experts at Palo Alto Networks. Understanding these vulnerabilities is crucial because they, and others like them, could allow threat actors to replicate custom tuning and optimizations if exploited.

Malicious actors with such capabilities can expose sensitive information embedded through fine-tuning. Before we dive into these Vertex AI flaws, it’s worth noting that they have been reported to Google, and the cloud giant has since applied appropriate fixes to keep them from being exploited for malicious purposes.

The two Vertex AI flaws discovered by cybersecurity experts are privilege escalation via custom jobs and model exfiltration via malicious models. During discovery, researchers were able to exploit custom job permissions, allowing them to escalate privileges.

These escalated privileges gave them unauthorized access to all data services in the project. In addition, they were able to deploy a poisoned model in Vertex AI. By exploiting this Vertex AI flaw and deploying the model, the researchers could exfiltrate all other fine-tuned models, posing the risk of an attack orchestrated to steal data.

Vertex AI Pipelines, Privilege Escalation, And Custom Code Injection

Of the two Vertex AI flaws identified, the first is a privilege escalation vulnerability that can be exploited using custom code injection. To understand this flaw and guard against it, it’s essential to understand how Vertex AI Pipelines work. Providing insights into this, the experts stated that:

“A key feature of this platform is Vertex AI Pipelines, which allow users to tune their models using custom jobs, also referred to as custom training jobs. These custom jobs are essentially code that runs within the pipeline and can modify models in various ways. 

While this flexibility is valuable, it also opens the door to potential exploitation. Our research focused on how attackers could abuse custom jobs. By manipulating the custom job pipeline, we discovered a privilege escalation path that allowed us to access resources far beyond the intended scope.”

The experts also identified that a custom job executes within a tenant project under the identity of a service agent. It’s worth noting that service agents have permissions to many services in the source project, including all of the source project’s Cloud Storage buckets and BigQuery datasets.

Using the service agent’s identity, the researchers were able to list, read, and export data stored in buckets and datasets. It is worth noting that this data should never have been accessible and could only be reached because of the Vertex AI flaws under investigation.

The researchers’ next task, while exploring the Vertex AI flaws, was to run their own code using a custom job. The first of the two Vertex AI flaws gave them two options for injecting that code: they could either inject commands into the custom job’s container spec JSON configuration or create a container image that opens a reverse shell.
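To illustrate the injection surface described here (not the researchers’ exact payload), below is a minimal sketch of how a Vertex AI custom job carries arbitrary commands in its container spec, using the google-cloud-aiplatform Python SDK. The project, region, display name, and image URI are hypothetical placeholders.

```python
# Minimal sketch: a Vertex AI custom job whose container spec carries
# arbitrary shell commands. Project, region, and image are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")  # hypothetical

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {
        "image_uri": "us-docker.pkg.dev/example-project/repo/trainer:latest",  # hypothetical
        # Whatever is placed here runs inside the tenant project under the
        # Vertex AI service agent's identity -- this is the injection surface
        # the researchers describe.
        "command": ["/bin/sh", "-c"],
        "args": ["echo 'custom training code would run here'"],
    },
}]

job = aiplatform.CustomJob(
    display_name="example-custom-job",
    worker_pool_specs=worker_pool_specs,
)
job.run()  # executes the container under the service agent's credentials
```

The design point is that the container spec is ordinary user-supplied configuration, which is why the flexibility of custom jobs doubled as an escalation path.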

While running the custom job, the team found that their identity was “service-<PROJECT_NUMBER>@gcp-sa-aiplatform-cc.iam.gserviceaccount[.]com.” Acting as this service agent, they could perform the following activities:

  • Access the metadata service.
  • Acquire service credentials.
  • Extract the user-data script.

In addition to these actions, the account had excessive permissions, including the ability to list all service accounts. It could also be used to create, delete, read, and write storage buckets and to access BigQuery tables. The researchers also had visibility into virtual machine (VM) creation.
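As a rough illustration of this kind of enumeration (not the researchers’ actual tooling), the sketch below requests a token from the standard GCP metadata endpoint and lists the Cloud Storage buckets and BigQuery datasets visible to the job’s service agent. The project ID is a hypothetical placeholder.

```python
# Sketch: what code running inside a custom job could enumerate with the
# service agent's credentials. Project ID is a hypothetical placeholder.
import requests
from google.cloud import bigquery, storage

# Fetch an access token for the job's attached service account from the
# GCP metadata server (standard endpoint, reachable from inside the job).
token = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token",
    headers={"Metadata-Flavor": "Google"},
).json()["access_token"]
print("Got token of length:", len(token))

# The client libraries pick up the same credentials automatically, so the
# job can simply enumerate what the service agent is allowed to touch.
project_id = "example-project"  # hypothetical
for bucket in storage.Client(project=project_id).list_buckets():
    print("bucket:", bucket.name)
for dataset in bigquery.Client(project=project_id).list_datasets():
    print("dataset:", dataset.dataset_id)
```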

Apart from this, metadata on GCP internal Artifactory repositories was also accessible. Providing further insights about this, they stated that: 

“We used the metadata to access the internal GCP repositories and downloaded images that we didn’t have permissions for with our original service account. Although we gained access to restricted internal GCP repositories, we could not understand the extent of the vulnerability that we discovered, since permissions on the repository are granted at the repository level.”

ML Model Exfiltration Attacks

The second of the two Vertex AI flaws, if exploited, could lead to the deployment of a malicious model, which in turn could have severe consequences. A common example of such consequences is the exfiltration of other models within the environment.

For such an exploit to succeed, a malicious actor would need to upload a poisoned model to a public model repository. From there, a data scientist within the organization would import and deploy the model in Vertex AI. After deployment, the model could exfiltrate other ML and LLM models in the project.
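For context on where the risk enters, here is a minimal sketch of how a model that ships its own serving container is imported and deployed in Vertex AI with the google-cloud-aiplatform SDK. The names and image URI are hypothetical; the point is that the serving container’s code runs inside the project once the endpoint is live, which is why importing untrusted models is risky.

```python
# Sketch: importing and deploying a model with a custom serving container.
# Names and image URI are hypothetical placeholders; once deployed, the
# serving container's code executes inside the project.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")  # hypothetical

model = aiplatform.Model.upload(
    display_name="imported-public-model",
    serving_container_image_uri=(
        "us-docker.pkg.dev/example-project/repo/serving:latest"  # hypothetical
    ),
)

# Deploying creates an endpoint and starts the serving container.
endpoint = model.deploy(machine_type="n1-standard-4")
print("Deployed to endpoint:", endpoint.resource_name)
```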

The exfiltration can extend to sensitive fine-tuned models, putting critical organizational assets at risk. Such an attack, exploiting the Vertex AI flaws, would involve two steps:

  1. Deploying a poisoned model in a tenant project to gain access to restricted GCP repositories and sensitive model data.
  2. Using the poisoned model to exfiltrate AI models, including fine-tuned LLM adapters (a minimal enumeration sketch follows this list).
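To make the second step concrete, the sketch below simply enumerates the models registered in a project with the google-cloud-aiplatform SDK. This is the kind of visibility a poisoned deployment running with project credentials would have; the project and region are hypothetical placeholders.

```python
# Sketch: enumerating the models a set of project credentials can see.
# A poisoned deployment running with such credentials would have the same
# visibility. Project and region are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")  # hypothetical

for model in aiplatform.Model.list():
    print(model.display_name, model.resource_name)
```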

Detailing the impact of this flaw, the researchers stated that:

“By deploying a malicious model, we were able to access resources in the tenant projects that allowed us to view and export all models deployed across the project. This includes both ML and LLM models, along with their fine-tuned adapters.”

It’s worth noting that this method demonstrates a clear risk of a model-to-model infection scenario. Teams unaware of such threats and of the Vertex AI flaws can fall prey to attacks in which a team member unknowingly deploys a malicious model sourced from a public repository.

Once active, the model could cause significant harm, as it is capable of exfiltrating all ML models and fine-tuned LLMs, putting sensitive assets at risk. Had an exploit succeeded, the attack would have followed this sequence:

  1. A poisoned model being prepared and uploaded to a public repository.
  2. A data engineer downloading and importing the model.
  3. The model being deployed and the attacker having access.
  4. The attacker downloading the model images.
  5. The attacker downloading the trained models and LLM adapter layers.

As cyber threats become increasingly complex, comprehending such attacks and their mitigation protocols becomes paramount. To ensure a secure environment for all, experts have stated that: 

“We have shared these findings with our partners at Google, and they have since implemented fixes to eliminate these specific issues for Vertex AI on the Google Cloud Platform (GCP).”

Conclusion

The discovery of these Vertex AI flaws emphasizes the critical need for vigilance in securing machine learning platforms. The vulnerabilities, if exploited, could have resulted in privilege escalation and model exfiltration, potentially exposing sensitive organizational data and fine-tuned models. Thanks to the efforts of cybersecurity experts, Google has implemented fixes to address these issues, ensuring a safer environment for Vertex AI users. 

However, these findings serve as a stark reminder of the importance of proactive security measures, robust access controls, and regular vulnerability assessments in safeguarding AI and ML systems. As cyber threats evolve, staying informed and vigilant remains essential to mitigating risks and protecting valuable digital assets.

The sources for this article include coverage from The Hacker News and Unit 42.
