Introducing the Coalition for Secure AI (CoSAI)

Jul 18, 2024
Today, I'm delighted to share the launch of the Coalition for Secure AI (CoSAI). CoSAI is an alliance of industry leaders, researchers, and developers dedicated to enhancing the security of AI implementations. CoSAI operates under the auspices of OASIS Open, the international standards and open source consortium. CoSAI's founding members include industry leaders such as OpenAI, Anthropic, Amazon, Cisco, Cohere, GenLab, Google, IBM, Intel, Microsoft, Nvidia, and PayPal. Together, our goal is to create a future where technology is not only cutting-edge but also secure-by-default.

CoSAI's Scope & Relationship to Other Projects

CoSAI complements existing AI initiatives by focusing on how to integrate and leverage AI securely across organizations of all sizes and throughout all phases of development and usage. CoSAI engages with NIST, the Open Source Security Foundation (OpenSSF), and other stakeholders through collaborative AI security research, best practice sharing, and joint open-source projects.

CoSAI's scope includes securely building, deploying, and operating AI systems to mitigate AI-specific security risks such as model manipulation, model theft, data poisoning, prompt injection, and confidential data extraction. We must equip practitioners with integrated security solutions, enabling them to leverage state-of-the-art AI controls without needing to become experts in every facet of AI security.

Where possible, CoSAI will collaborate with other organizations driving technical advancements in responsible and secure AI, including the Frontier Model Forum, Partnership on AI, OpenSSF, and ML Commons. Members, such as Google with its Secure AI Framework (SAIF), may contribute existing work in the form of thought leadership, research, best practices, projects, or open-source tools to enhance the partner ecosystem.
Collective Efforts in Secure AI

Securing AI remains a fragmented effort, with developers, implementers, and users often facing inconsistent and siloed guidelines. Assessing and mitigating AI-specific risks without clear best practices and standardized approaches is a challenge, even for the most...