Publications

Complex Network Effects on the Robustness of Graph Convolutional Networks

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users' confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents the first work aimed at leveraging network characteristics to improve the robustness of these methods. Our focus is on using network features to choose the training set, rather than selecting the training set at random. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random training baseline. This increase in robustness is often as substantial as tripling the amount of randomly selected training data. Even in cases where success is relatively easy for the attacker, we show that classification performance degrades much more gradually using the proposed methods, with weaker incorrect predictions for the attacked nodes. Finally, we investigate the potential tradeoff between robustness and performance in various datasets.
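
The two training-set selection heuristics are concrete enough to sketch. Below is a minimal illustration, assuming a NetworkX graph whose class labels are supplied in a dictionary; the function names and tie-breaking are ours, not the paper's, and heuristic 2 reflects one plausible reading of "most neighbors minimally connected to the training set."

```python
import networkx as nx

def highest_degree_per_class(G, labels, per_class):
    """Heuristic 1: take the highest-degree nodes within each class."""
    train = set()
    for c in set(labels.values()):
        members = sorted((v for v in G if labels[v] == c),
                         key=G.degree, reverse=True)
        train.update(members[:per_class])
    return train

def greedy_min_connection(G, budget):
    """Heuristic 2: repeatedly add the node with the most neighbors
    that are currently least connected to the training set."""
    train = set()
    links = {v: 0 for v in G}  # neighbors of v already in the training set
    for _ in range(budget):
        candidates = [v for v in G if v not in train]
        weakest = min(links[v] for v in candidates)
        # score each candidate by how many of its neighbors sit at the
        # minimum connection level
        best = max(candidates,
                   key=lambda v: sum(1 for u in G[v]
                                     if u not in train and links[u] == weakest))
        train.add(best)
        for u in G[best]:
            links[u] += 1
    return train
```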

Topological effects on attacks against vertex classification

Summary

Vertex classification is vulnerable to perturbations of both graph topology and vertex attributes, as shown in recent research. As in other machine learning domains, concerns about robustness to adversarial manipulation can prevent potential users from adopting proposed methods when the consequences of action are very high. This paper considers two topological characteristics of graphs and explores the way these features affect the amount the adversary must perturb the graph in order to be successful. We show that, if certain vertices are included in the training set, it is possible to substantially increase an adversary's required perturbation budget. On four citation datasets, we demonstrate that if the training set includes high-degree vertices or vertices that ensure all unlabeled nodes have neighbors in the training set, the adversary's budget for the Nettack poisoning attack often increases by a substantial factor, frequently 2 or more, over random training. Even for especially easy targets (those that are misclassified after just one or two perturbations), performance degrades much more slowly, and the classifier assigns much lower probabilities to the incorrect classes. In addition, we demonstrate that this robustness either persists when recently proposed defenses are applied, or is competitive with the performance improvement those defenses provide.
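
The coverage criterion, that every unlabeled vertex has at least one neighbor in the training set, is straightforward to verify. A minimal check, again assuming a NetworkX graph (the function name is ours, not the paper's):

```python
import networkx as nx

def training_set_covers_graph(G, train):
    """Return True if every vertex outside `train` has at least
    one neighbor inside `train`."""
    train = set(train)
    return all(any(u in train for u in G[v])
               for v in G if v not in train)

# Example: a path graph 0-1-2-3-4. Labeling the two interior
# vertices {1, 3} leaves no unlabeled vertex without a labeled
# neighbor; labeling {0} alone does not.
G = nx.path_graph(5)
assert training_set_covers_graph(G, {1, 3})
assert not training_set_covers_graph(G, {0})
```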

Improving robustness to attacks against vertex classification

Published in:
15th Intl. Workshop on Mining and Learning with Graphs, 5 August 2019.

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users' confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents work in progress aiming to make vertex classification robust to these types of attacks. We investigate two aspects of this problem: (1) the classification model and (2) the method for selecting training data. Our alternative classifier is a support vector machine (with a radial basis function kernel), which is applied to an augmented node feature vector obtained by appending the node's attributes to a Euclidean vector representing the node based on the graph structure. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random training baseline. Even in cases where success is relatively easy for the attacker, we show that the classification and training alternatives allow classification performance to degrade much more gradually, with weaker incorrect predictions for the attacked nodes.
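
The alternative classifier can be sketched with scikit-learn. The abstract does not name the structural representation, so the spectral embedding below (leading eigenvectors of the normalized graph Laplacian) is our stand-in assumption; the RBF-kernel SVC matches the stated model, and the function names are ours.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.svm import SVC

def augmented_features(adj, attrs, dim=16):
    """Append a structural embedding to each node's attribute vector.

    adj:   dense (n, n) adjacency matrix
    attrs: (n, d) node attribute matrix
    """
    L = laplacian(adj, normed=True)
    # eigenvectors for the smallest eigenvalues give a Euclidean
    # representation of each node's position in the graph
    _, eigvecs = np.linalg.eigh(L)
    return np.hstack([attrs, eigvecs[:, :dim]])

def fit_vertex_classifier(adj, attrs, train_idx, train_labels):
    """RBF-kernel SVM on the augmented feature vectors."""
    X = augmented_features(adj, attrs)
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X[train_idx], train_labels)
    return clf, X
```

With `probability=True`, the fitted model exposes `predict_proba`, which is one way to measure how strongly an attacked node is (mis)classified.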
