Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.
Called the Adversarial ML Threat Matrix, the initiative is an attempt to organize the different techniques employed by malicious adversaries in subverting ML systems.
Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware, but can also use poisoned datasets to fool machine learning models, causing otherwise beneficial systems to make incorrect decisions and threatening the stability and safety of AI applications.
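To illustrate the idea of dataset poisoning in miniature (a hypothetical toy sketch, not an example from the Threat Matrix itself), consider a simple nearest-centroid classifier trained twice: once on clean data, and once after an attacker injects a few deliberately mislabeled points that drag one class's centroid toward the decision boundary.

```python
def nearest_centroid(train, x):
    """Classify x by its distance to the mean of each labeled class."""
    centroids = {}
    for label in {lbl for _, lbl in train}:
        vals = [v for v, lbl in train if lbl == label]
        centroids[label] = sum(vals) / len(vals)
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Clean training set: class 0 clusters near 0.2, class 1 near 2.0.
clean = [(0.0, 0), (0.2, 0), (0.4, 0), (1.8, 1), (2.0, 1), (2.2, 1)]

# Poisoning: the attacker injects points deep in class 1's region but
# labeled as class 0, shifting class 0's centroid toward the boundary.
poisoned = clean + [(3.0, 0), (3.0, 0)]

print(nearest_centroid(clean, 1.5))     # clean model: class 1
print(nearest_centroid(poisoned, 1.5))  # poisoned model: class 0
```

The same borderline input is classified correctly by the clean model but flipped by the poisoned one; real poisoning attacks apply this principle at scale against far more complex models.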
Indeed, ESET researchers last year found Emotet — a notorious email-based malware behind several botnet-driven spam campaigns and ransomware attacks — to be using ML to improve its targeting.
Then earlier this month, Microsoft warned about a new Android ransomware strain that included a machine learning model that, while yet to be integrated into the malware, could be used to fit the ransom note image within…