Overview of Fast Minimum-Norm Attack
Fast Minimum-Norm Attack, or FMN, is an adversarial attack that aims to deceive machine learning models by making small, carefully chosen modifications to the input data. The attack works by finding the sample that is misclassified with maximum confidence within an $\ell_{p}$-norm constraint of size $\epsilon$, while adapting $\epsilon$ at each step to minimize the distance of the current sample to the decision boundary.
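To make the mechanics concrete, here is a minimal PyTorch sketch of an FMN-style $\ell_{2}$ attack. The toy model, step sizes, and the multiplicative $\epsilon$ schedule are illustrative assumptions rather than the exact procedure from the original paper: the loop alternates a gradient step that pushes the sample toward misclassification with an update that shrinks the $\epsilon$-ball once the sample is adversarial (and grows it otherwise).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy untrained classifier standing in for any model under attack.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def fmn_l2(x, labels, steps=100, lr=1.0, gamma=0.05, eps_init=1.0):
    """Simplified FMN-style minimum-norm attack sketch for p = 2."""
    eps = torch.full((x.size(0),), eps_init)
    delta = torch.zeros_like(x)
    best_delta = torch.zeros_like(x)
    best_norm = torch.full((x.size(0),), float("inf"))

    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        logits = model(x + delta)
        true_logit = logits.gather(1, labels[:, None]).squeeze(1)
        masked = logits.clone()
        masked.scatter_(1, labels[:, None], float("-inf"))
        margin = true_logit - masked.max(dim=1).values  # < 0 => adversarial
        margin.sum().backward()

        with torch.no_grad():
            is_adv = margin < 0
            norm = delta.flatten(1).norm(dim=1)

            # Keep the smallest adversarial perturbation found so far.
            better = is_adv & (norm < best_norm)
            best_norm = torch.where(better, norm, best_norm)
            best_delta[better] = delta[better]

            # Shrink the epsilon-ball once adversarial, grow it otherwise.
            eps = torch.where(is_adv, eps * (1 - gamma), eps * (1 + gamma))

            # Normalized gradient step that decreases the margin.
            g = delta.grad
            g = g / g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta = delta - lr * g

            # Project delta back onto the current epsilon-ball.
            norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta = delta * (eps / norm).clamp_max(1.0).view(-1, 1, 1, 1)

    return x + best_delta

# Usage on random data shaped like MNIST images.
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fmn_l2(x, y)
```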
Understanding Adversarial Attacks
Adversarial attacks are techniques that deceive machine learning models into making incorrect predictions by applying small, deliberately crafted perturbations to their inputs.
Generalizable Node Injection Attack (G-NIA): Overview
Generalizable Node Injection Attack (G-NIA) is a form of attack on graph neural networks (GNNs) in which an attacker injects malicious nodes into the graph to impair the GNN's performance. Unlike conventional methods, where attackers modify existing edges and nodes, G-NIA models the most crucial feature propagation by jointly modeling the malicious attributes and the edges. G-NIA uses Gumbel-Top-$k$ to generate discrete edges while capturing the coupling between the injected attributes and edges.
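The Gumbel-Top-$k$ trick itself is easy to illustrate. The sketch below, using hypothetical edge scores, samples $k$ discrete edges without replacement by adding Gumbel noise to the scores and taking the top $k$; G-NIA additionally relies on a relaxed, differentiable version of this selection during training, which is omitted here.

```python
import torch

def gumbel_top_k(scores, k):
    """Sample k items without replacement, with probabilities
    proportional to softmax(scores), via the Gumbel-Top-k trick."""
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = torch.rand_like(scores).clamp(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))
    return (scores + gumbel).topk(k).indices

# Hypothetical scores over 100 candidate edges for an injected node.
edge_scores = torch.randn(100)
chosen_edges = gumbel_top_k(edge_scores, k=5)  # indices of 5 sampled edges
```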
Many machine learning models, such as those used in image recognition and speech processing, are vulnerable to attacks from adversarial examples. Adversarial examples are inputs that have been deliberately manipulated to cause the model to make an incorrect prediction. This can have serious implications, such as misidentification in security systems or misdiagnosis in medical applications.
Introducing Morphence
Morphence is an approach to adversarial defense that aims to make a model a moving target against adversarial examples. Rather than serving predictions from a single fixed model, Morphence generates a pool of models from a base model, switches which pool member answers each query, and renews the pool once a query budget is exhausted, so that attacks crafted against one model are far less likely to succeed against the model actually in use.
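As a rough illustration of the moving-target idea, the sketch below keeps a pool of weight-perturbed copies of a base model, serves each query from a randomly scheduled pool member, and regenerates the pool once a query budget is spent. The pool size, budget, and weight-noise scheme are illustrative placeholders; Morphence itself retrains (and partly adversarially trains) the pool models rather than merely perturbing weights.

```python
import copy
import random
import torch
import torch.nn as nn

# Hypothetical base model; a moving-target defense derives a pool from it.
base_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def make_pool(base, n=5, noise=0.01):
    """Create n variants of the base model by perturbing its weights."""
    pool = []
    for _ in range(n):
        m = copy.deepcopy(base)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(noise * torch.randn_like(p))  # small random shift
        pool.append(m)
    return pool

class MovingTargetDefense:
    def __init__(self, base, pool_size=5, query_budget=1000):
        self.base = base
        self.query_budget = query_budget
        self.queries = 0
        self.pool = make_pool(base, pool_size)

    def predict(self, x):
        # Renew the pool once the budget is exhausted, so an attacker
        # cannot keep probing the same set of target models.
        if self.queries >= self.query_budget:
            self.pool = make_pool(self.base, len(self.pool))
            self.queries = 0
        self.queries += x.size(0)
        model = random.choice(self.pool)  # per-query model scheduling
        with torch.no_grad():
            return model(x).argmax(dim=1)

# Usage: each call may be answered by a different pool member.
defense = MovingTargetDefense(base_model)
preds = defense.predict(torch.rand(2, 1, 28, 28))
```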