How triggerless backdoors could dupe AI models without manipulating their input data


In the past few years, researchers have shown growing interest in the security of artificial intelligence systems. Of special interest is how malicious actors can attack and compromise machine learning algorithms, the subset of AI that is increasingly used across domains. Among the security issues being studied are backdoor attacks, in which a bad actor hides malicious behavior in a machine learning model during the training phase and activates it once the AI enters production. Until now, backdoor attacks have faced practical difficulties because they largely relied on visible triggers. But new research by AI scientists at the…
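To make the "visible trigger" idea concrete, here is a minimal sketch of classic training-time data poisoning: a small patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen class, so the trained model learns to associate the patch with that class. The function name, patch placement, and parameters are illustrative assumptions, not details from the article or the research it covers.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Illustrative visible-trigger backdoor poisoning (not the paper's method).

    Stamps a 3x3 white patch in the bottom-right corner of a random fraction
    of the images and relabels them as `target_label`. A model trained on the
    result tends to predict `target_label` whenever the patch is present.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # the visible trigger patch
    labels[idx] = target_label    # attacker-chosen target class
    return images, labels, idx
```

At inference time, the attacker activates the backdoor simply by adding the same patch to any input, which is precisely why such triggers are detectable by inspection and why triggerless variants are a more worrying development.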

This story continues at The Next Web

from The Next Web https://ift.tt/3nBHVrP
