Microsoft has released Counterfit, an open-source tool that can help organisations test the security of their artificial intelligence (AI) and machine learning (ML) systems. Microsoft says the tool can test whether the algorithms used in AI and ML systems are “robust, reliable, and trustworthy”. The Redmond-based company says it uses Counterfit internally to test its own AI systems for vulnerabilities before launching them. Microsoft will be holding a walk-through of Counterfit and a live tutorial on May 10.
As per a blog post by Microsoft, Counterfit is a tool to secure AI systems used in industries such as healthcare, finance, and defence. Citing a survey of 28 organisations, spanning Fortune 500 companies, governments, non-profits, and small- and medium-sized businesses (SMBs), Microsoft says it found that 25 of the 28 did not have the right tools in place to secure their AI systems.
“Consumers must have confidence that the AI systems powering these important domains are secure from adversarial manipulation,” reads the blog post.
Microsoft says it engaged with a diverse set of partners to test the tool against their machine learning models in their own environments, and to ensure that Counterfit addresses the needs of a broader set of security professionals. Counterfit is also pitched as a tool to empower engineers to develop and deploy AI systems securely. Apart from having workflows and terminology similar to popular offensive security tools, Counterfit is said to make published attack algorithms accessible to the security community.
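Counterfit itself is driven from a command-line interface, and the published attack algorithms it exposes come from existing open-source libraries such as the Adversarial Robustness Toolbox (ART). As a minimal, illustrative sketch of what one of those attacks looks like in practice (this uses ART and scikit-learn directly against a toy model, not Counterfit's own workflow, and the model and parameters here are assumptions for demonstration only):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import HopSkipJump

    # Train a toy classifier standing in for the AI system under test.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Wrap the model so ART's attack algorithms can query it like a black box.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 8.0))

    # HopSkipJump needs only the model's predictions, not its internals,
    # which is why this style of attack works even against opaque systems.
    attack = HopSkipJump(classifier=classifier, targeted=False,
                         max_iter=10, max_eval=1000, init_eval=10)
    x_adv = attack.generate(x=X[:5].astype(np.float32))

    # If the attack succeeds, predicted labels flip even though the
    # adversarial inputs differ only slightly from the originals.
    print("clean predictions:      ", model.predict(X[:5]))
    print("adversarial predictions:", model.predict(x_adv))

Counterfit packages attacks of this kind behind a command-line workflow, so security teams can probe their models without writing this glue code themselves.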
Counterfit is available in a public GitHub repository, and Microsoft is holding a walk-through as well as a live tutorial on May 10. If you are a developer, or work in an organisation that wants to use the tool to secure its AI systems, you can register for the webinar.