Category: System Security

  • 9 Key Elements for AI Audit and Why They Matter

    9 Key Elements for AI Audit and Why They Matter

    AI audit refers to the evaluation of AI systems to ensure they work securely, without bias or discrimination, and are aligned with ethical and legal standards. While AI audit has existed for years, recent technological enhancements have triggered a new wave of AI adoption across industries and organizations. We share aigos’ framework for AI Audit […]

  • Securing Multimodal Language Models

    Securing Multimodal Language Models

    We believe that the end-state for most AI systems will be multimodal. Security is key, especially in high exposure sectors like government and financial services. Securing a multimodal AI system is however challenging given input complexity, the breadth of potential risk scenarios as well as latency considerations in production environment. Having clear guidelines in place […]

  • Safeguarding Language Model Copyright: Introducing EmbMarker 

    Safeguarding Language Model Copyright: Introducing EmbMarker 

    In the realm of cutting-edge language models, the emergence of Large Language Models (LLMs) like GPT-3 has transformed natural language understanding and generation. Capitalizing on their capabilities, these models are now available as an Embedding as a Service (EaaS), catering to various natural language processing tasks. However, this accessibility raises concerns about potential model extraction […]

  • OWASP Top 10 Vulnerabilities for LLM Applications

    OWASP Top 10 Vulnerabilities for LLM Applications

    As developers, data scientists, and security experts work with LLM technologies to design and build applications and plug-ins, it is essential to be mindful of potential security risks. The Open Web Application Security Project (OWASP) identifies several key vulnerabilities in LLM-based systems that need attention: 1. Prompt Injection Prompt Injection involves manipulating LLMs through clever […]

  • Surveying open-source LLM Security Risks

    Surveying open-source LLM Security Risks

    Rezilion, a renowned automated software supply chain security platform, conducted groundbreaking research that shed light on the risk factors associated with open-source Language Model (LLM) projects. The study provided valuable insights into the vulnerabilities and challenges that AI systems faced in the open-source landscape. Open-source LLM projects garnered significant attention due to their collaborative and […]

  • AI/LLMs: Implications for CSOs and VSA processes

    AI/LLMs: Implications for CSOs and VSA processes

    With the rapid advancement and widespread adoption of Artificial Intelligence (AI) and Large Language Models (LLMs), organizations across various industries are harnessing the power of these technologies to drive innovation and gain a competitive edge. However, as AI systems become increasingly integral to critical business operations, it is crucial for Chief Security Officers (CSOs) to […]