LLM servers like Flowise expose sensitive data to the Internet

September 6, 2024
Flowise LLM Automation Tools Data Theft Sensitive Data Cyberattack

Flowise LLM automation tools and vector databases that store sensitive information are susceptible to data theft. Based on reports, hundreds of open-source large language model (LLM) builder servers and dozens of vector databases expose sensitive data to the public Internet.

Many organisations are eager to employ artificial intelligence (AI) in their business operations as quickly as possible. However, they commonly overlook the security of these technologies before entrusting them with data.

According to a recent report, two types of open-source (OSS) AI services are prone to exploitation: vector databases, which store data for AI tools, and LLM application builders, particularly the open-source program Flowise.

The investigation uncovered various sensitive personal and corporate data inadvertently exposed by organisations attempting to adopt generative AI tools. Moreover, many developers download these tools from the Internet and deploy them in their workplaces while treating security concerns lightly.


Hundreds of unpatched, vulnerable Flowise servers could expose sensitive information.


Flowise is a low-code tool for building LLM applications. Whether the application is a customer-care bot or a pipeline that generates and extracts data for downstream tasks, developers who use Flowise typically access and manage large amounts of data. Most Flowise servers are password-protected, but a password alone offers weak protection for the sensitive data users feed into these platforms.

Earlier this year, a researcher uncovered an authentication bypass in Flowise 1.6.2 and older. The research revealed that unauthorised individuals can exploit the vulnerability simply by changing the case of a few characters in the program's API endpoint paths.

Further investigation showed that the flaw (CVE-2024-31621) was present on about 438 Flowise servers. The exposed data included GitHub access tokens, OpenAI API keys, plaintext Flowise passwords and API keys, configurations and prompts for Flowise apps, and more.

Researchers advise organisations to limit access to the AI services they rely on, and to monitor and log related activity, to reduce the risk posed by exposed AI tooling. Companies should also encrypt sensitive data transmitted by LLM apps and apply software updates promptly, even when they seem non-essential, to keep stored data safe.
