Rezilion, an automated software supply chain security platform, published research that sheds light on the risk factors associated with open-source large language model (LLM) projects. The study offers valuable insight into the vulnerabilities and challenges these projects face in the open-source landscape.
Open-source LLM projects have garnered significant attention for their collaborative, accessible nature, which lets developers leverage powerful language models in a wide range of AI applications. Rezilion’s research, however, underscores the inherent risks of these projects and emphasizes the need to understand potential security gaps and maintain a robust cybersecurity strategy when adopting open-source LLMs.
The research found that the majority of open-source LLM projects are affected by significant security concerns, which fall into the following categories:
- Trust Boundary Risk:
Within a trust boundary, risks such as inadequate sandboxing, unauthorized code execution, SSRF vulnerabilities, insufficient access controls, and prompt injection pose significant threats. These weaknesses allow maliciously crafted natural-language input to smuggle commands across the boundary, propagate through various channels, and affect the entire software chain. An example is CVE-2023-29374 in LangChain, the third most popular open-source GPT-based project, where prompt injection could escalate into arbitrary code execution (see the first sketch after this list).
- Data Management Risk:
Data leakage and training data poisoning are risks related to data management. They are not limited to LLMs alone but apply to any machine-learning system. Training data poisoning is the intentional manipulation of an LLM’s training data or fine-tuning procedure by an attacker, introducing vulnerabilities, backdoors, or biases that undermine the model’s security, effectiveness, or ethical behavior. The goal of this malicious act is to compromise the integrity and reliability of the LLM by injecting misleading or harmful information during the training process (a minimal data-screening sketch appears after this list).
- Inherent Model Risk:
Security concerns stemming from the limitations of the underlying ML model include inadequate AI alignment and overreliance on LLM-generated content.
- Basic Security Best Practices:
Issues such as improper error handling and insufficient access controls fall under general security best practices. They are not unique to LLMs; they apply to machine-learning systems, and to software, in general (see the access-control sketch after this list).
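To make the trust-boundary category concrete, the sketch below is a minimal, illustrative Python example, not LangChain’s actual code or the fix shipped for CVE-2023-29374. It shows how piping model output straight into eval lets a prompt-injected “calculation” become arbitrary code execution, and how treating that output as untrusted input behind a narrow arithmetic allowlist closes the most obvious path. The query_llm helper and the allowlist pattern are assumptions introduced for the example.

```python
import re

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM provider; assumed helper for this sketch."""
    raise NotImplementedError

# Vulnerable pattern: the model's reply crosses the trust boundary straight
# into the interpreter, so a prompt-injected payload such as
# "__import__('os').system('id')" runs with the application's privileges.
def unsafe_calculator(user_question: str) -> str:
    expression = query_llm(f"Translate this question into a Python expression: {user_question}")
    return str(eval(expression))  # unauthorized code execution

# Minimal guardrail: treat model output as untrusted input and only evaluate
# it when it matches a strict arithmetic allowlist, with builtins stripped.
_ARITHMETIC_ONLY = re.compile(r"[\d\s+\-*/().]+")

def safer_calculator(user_question: str) -> str:
    expression = query_llm(f"Translate this question into a Python expression: {user_question}")
    if not _ARITHMETIC_ONLY.fullmatch(expression):
        raise ValueError("Model output rejected: not a plain arithmetic expression")
    return str(eval(expression, {"__builtins__": {}}, {}))
```

A production system would go further (sandboxed execution, or avoiding eval entirely), but the sketch shows where the trust boundary sits and why crossing it unchecked is dangerous.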
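For the data-management category, the following sketch illustrates one coarse defensive step: screening a fine-tuning dataset for unknown labels and instruction-like text before training. The marker list, record type, and label set are assumptions made for the example; this is a hygiene check, not a complete answer to data poisoning.

```python
from dataclasses import dataclass

# Phrases that often mark injected instructions or backdoor triggers;
# the list is illustrative, not exhaustive.
SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "system prompt:",
    "http://",  # unexpected links in a plain-text corpus
)

@dataclass
class TrainingRecord:
    text: str
    label: str

def screen_training_data(records: list[TrainingRecord],
                         allowed_labels: set[str]) -> list[TrainingRecord]:
    """Drop records with unknown labels or embedded instruction-like text."""
    clean = []
    for record in records:
        if record.label not in allowed_labels:
            continue  # possible label-flipping attempt
        if any(marker in record.text.lower() for marker in SUSPICIOUS_MARKERS):
            continue  # possible injected instruction or trigger phrase
        clean.append(record)
    return clean
```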
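For the basic-hygiene category, this last sketch gates a model call behind an API-key check and returns sanitized errors instead of stack traces. The in-memory key store, the query_llm placeholder, and the response shape are assumptions made to keep the example self-contained.

```python
import hmac
import logging
import secrets

logger = logging.getLogger("llm_gateway")

# A real deployment would load keys from a secrets manager; this in-memory
# set is an assumption that keeps the sketch self-contained.
VALID_API_KEYS = {secrets.token_hex(16)}

def query_llm(prompt: str) -> str:
    """Placeholder for the actual model call."""
    raise NotImplementedError

def _authorized(api_key: str) -> bool:
    # Constant-time comparison avoids leaking key material through timing.
    return any(hmac.compare_digest(api_key, valid) for valid in VALID_API_KEYS)

def handle_completion(api_key: str, prompt: str) -> dict:
    """Apply an access check, then sanitize any error before it reaches the caller."""
    if not _authorized(api_key):
        return {"status": 401, "error": "unauthorized"}
    try:
        return {"status": 200, "completion": query_llm(prompt)}
    except Exception:
        # Log details server-side; never echo stack traces, internal prompts,
        # or provider error messages back to the client.
        logger.exception("completion failed")
        return {"status": 500, "error": "internal error"}
```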
Most alarming are the security scores these projects received. Across the projects examined, the average security score was only 4.6 out of 10, while the average project age was 3.77 months and the average star count was 15,909. Projects that gain popularity quickly are at much higher risk than those developed and hardened over a longer period.
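The report does not spell out how these figures were gathered. As a hedged illustration only, not Rezilion’s methodology, the sketch below shows how a repository’s age in months and star count could be pulled from the public GitHub REST API; the example repository name is hypothetical.

```python
from datetime import datetime, timezone

import requests

def repo_age_and_stars(owner: str, repo: str) -> tuple[float, int]:
    """Return (age in months, stargazer count) for a public GitHub repository."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    created = datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days
    return age_days / 30.44, data["stargazers_count"]

# Usage (hypothetical target): age_months, stars = repo_age_and_stars("langchain-ai", "langchain")
```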