Monday 20 May 2019

How we might protect ourselves from malicious AI

New research could make deep-learning models much harder to manipulate in harmful ways, reports Karen Hao.
Hidden danger: Adversarial examples are inputs altered with tiny, carefully chosen changes that cause a deep-learning model to misbehave. They can affect everything from image classifiers to cancer diagnosis systems. Despite their danger, however, they are poorly understood.
A classic example: By adding a little noise to an image of a panda, an attacker can make a system classify it as a gibbon with nearly 100% confidence. That’s because the model bases its decision on patterns of pixels that are imperceptible to humans but easy to manipulate.
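One common way to craft that kind of noise is the fast gradient sign method (FGSM). The sketch below is illustrative only and is not drawn from the article; `model`, `image`, and `label` are hypothetical placeholders for a trained classifier, a normalized image tensor, and its true class.

```python
# Minimal FGSM sketch (assumes a trained PyTorch classifier).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The per-pixel change is tiny, but because it is aligned with the model's
    # gradient it can flip the prediction (e.g. "panda" -> "gibbon").
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```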
A possible solution: A new MIT paper suggests a potential fix, but it will involve fundamentally rethinking how AI models are trained in the first place. Read the full story here.
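The blurb doesn’t spell out the proposed training change, but one standard defence in this spirit is adversarial training: folding perturbed examples back into the training loop. The sketch below is a generic illustration of that idea, not the paper’s method, and reuses the hypothetical `fgsm_perturb` helper above.

```python
# Rough adversarial-training step (illustrative, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.007):
    model.train()
    # Build perturbed copies of the current batch with the FGSM sketch above.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Train on clean and perturbed examples together so predictions stay
    # stable under small pixel-level changes.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```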
