This article explains how to generate adversarial examples for convolutional neural networks (CNNs), which can fool a model into classifying a cat as a goldfish. The author, Robert, uses adversarial machine learning techniques to manipulate the ResNet50 model into classifying an image of his cat as a goldfish, without a human noticing any difference in the image. He adds a small noise tensor to the image and optimizes it against a loss function targeting the "goldfish" class, forcing the model to output that label. The article explores the use of adversarial ML to manipulate image classification models, ultimately highlighting the potential vulnerabilities of such models.
Source: My Cat Is a Goldfish, so Don't Tax It – Towards AI