Abstract
In this paper, we consider the problem of augmenting a set of histological images with adversarial examples so that neural network classifiers trained on the augmented set become more robust to adversarial attacks. In recent years, neural network methods have developed rapidly and achieved impressive results. However, they are susceptible to so-called adversarial attacks: they make incorrect predictions on input images perturbed with imperceptible noise. The reliability of neural network methods therefore remains an important area of research. We compare several methods of training-set augmentation intended to improve the robustness of neural histological image classifiers against adversarial attacks. To this end, we augment the training set with adversarial examples generated by several popular attack methods.
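To make the augmentation setting concrete, the sketch below generates adversarial examples with the fast gradient sign method (FGSM), one commonly used attack; the abstract does not name the specific attack methods used, so FGSM is an assumption here, and the PyTorch model, inputs, and `eps` value are hypothetical placeholders rather than the paper's actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=0.01):
    """Generate one FGSM adversarial example for training-set augmentation.

    Assumed interface: `model` maps a batched image tensor to class logits,
    `label` is a LongTensor of target classes, `eps` is the attack strength.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Adversarial examples produced this way would be added to the training set alongside the clean histological images, so the classifier sees both during training.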