Publications

PATHATTACK: attacking shortest paths in complex networks

Summary

Shortest paths in complex networks play key roles in many applications. Examples include routing packets in a computer network, routing traffic on a transportation network, and inferring semantic distances between concepts on the World Wide Web. An adversary with the capability to perturb the graph might make the shortest path between two nodes route traffic through advantageous portions of the graph (e.g., a toll road he owns). In this paper, we introduce the Force Path Cut problem, in which there is a specific route the adversary wants to promote by removing a minimum number of edges in the graph. We show that Force Path Cut is NP-complete, but also that it can be recast as an instance of the Weighted Set Cover problem, enabling the use of approximation algorithms. The size of the universe for the set cover problem is potentially factorial in the number of nodes. To overcome this hurdle, we propose the PATHATTACK algorithm, which via constraint generation considers only a small subset of paths: at most 5% of the number of edges in 99% of our experiments. Across a diverse set of synthetic and real networks, the linear programming formulation of Weighted Set Cover yields the optimal solution in over 98% of cases. We also demonstrate a time/cost tradeoff using two approximation algorithms and greedy baseline methods. This work provides a foundation for addressing similar problems and expands the area of adversarial graph mining beyond recent work on node classification and embedding.
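To make the constraint-generation idea concrete, here is a minimal sketch in Python, assuming networkx. It uses a simple greedy baseline in place of the paper's set-cover/LP machinery: repeatedly find the current shortest path and, if it is not the promoted path, cut its cheapest edge that the promoted path does not use. Function and variable names (force_path_cut_greedy, p_star) are illustrative, not from the paper.

```python
# A minimal sketch of the constraint-generation idea, assuming networkx.
# This is a greedy baseline, not the paper's LP-based PATHATTACK: find the
# current shortest path; if it is not the promoted path p_star, cut its
# cheapest edge that p_star does not use, and repeat.
import networkx as nx

def force_path_cut_greedy(G, source, target, p_star, weight="weight"):
    """Return a list of removed edges after which p_star (a node list)
    is a shortest path from source to target."""
    G = G.copy()
    # Edges on the promoted path are protected in both directions.
    protected = set(zip(p_star, p_star[1:])) | set(zip(p_star[1:], p_star))
    removed = []
    while True:
        path = nx.shortest_path(G, source, target, weight=weight)
        if path == p_star:
            return removed
        # Any competing path must use at least one unprotected edge.
        cuttable = [(u, v) for u, v in zip(path, path[1:])
                    if (u, v) not in protected]
        # Cut the cheapest allowed edge (removal cost = edge weight here).
        u, v = min(cuttable, key=lambda e: G[e[0]][e[1]][weight])
        G.remove_edge(u, v)
        removed.append((u, v))
```

Each iteration removes one unprotected edge and the promoted path is never touched, so the loop terminates with p_star among the shortest paths; the paper's set-cover formulation instead chooses edges that cut many generated paths at once.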

Attacking Embeddings to Counter Community Detection

Published in:
Network Science Society Conference 2020 [submitted]

Summary

Community detection can be an extremely useful data triage tool, enabling a data analyst to split a large network into smaller portions for a deeper analysis. If, however, a particular node wanted to avoid scrutiny, it could strategically create new connections that make it seem uninteresting. In this work, we investigate the use of a state-of-the-art attack against node embedding as a means of countering community detection while being blind to the attributes of others. The attack proposed in [1] attempts to maximize the loss function being minimized by a random-walk-based embedding method (where two nodes are made closer together the more often a random walk starting at one node ends at the other). We propose using this method to attack the community structure of the graph, specifically attacking the community assignment of an adversarial vertex. Since nodes in the same community tend to appear near each other in a random walk, their continuous-space embeddings also tend to be close. Thus, we aim to use the general embedding attack in an attempt to shift the community membership of the adversarial vertex.

To test this strategy, we adopt an experimental framework as in [2], where each node is given a “temperature” indicating how interesting it is. A node’s temperature can be “hot,” “cold,” or “unknown.” A node can perturb itself by adding new edges to any other node in the graph. The node’s goal is to be placed in a community that is cold, i.e., where the average node temperature is less than 0. Of the 5 attacks proposed in [2], we use 2 in our experiments. The simpler attack is Cold and Lonely, which first connects to cold nodes, then unknown, then hot, and connects within each temperature in order of increasing degree. The more sophisticated attack is Stable Structure. The procedure for this attack is to (1) identify stable structures (containing nodes assigned to the same community each time for several trials), (2) connect to nodes in order of increasing average temperature of their stable structures (randomly within a structure), and (3) connect to nodes with no stable structure in order of increasing temperature. As in [2], we use the Louvain modularity maximization technique for community detection. We slightly modify the embedding attack of [1] by only allowing addition of new edges and requiring that they include the adversary vertex. Since the embedding attack is blind to the temperatures of the nodes, experimenting with these attacks gives insight into how much this attribute information helps the adversary.

Experimental results are shown in Figure 1. Graphs considered in these experiments are (1) a 500-node Erdős–Rényi graph with edge probability p = 0.02, (2) a stochastic block model with 5 communities of 100 nodes each and edge probabilities of p_in = 0.06 and p_out = 0.01, (3) the network of Abu Sayyaf Group (ASG), a violent non-state Islamist group operating in the Philippines, where two nodes are linked if they both participate in at least one kidnapping event, with labels derived from stable structures (nodes together in at least 95% of 1000 Louvain trials), and (4) the Cora machine learning citation graph, with 7 classes based on subject area. Temperature is assigned to the Erdős–Rényi nodes randomly with probability 0.25, 0.5, and 0.25 for hot, unknown, and cold, respectively. For the other graphs, nodes with the same label as the target are hot, unknown, and cold with probability 0.35, 0.55, and 0.1, respectively, and the hot and cold probabilities are swapped for other labels.

The results demonstrate that, even without the temperature information, the embedding method is about as effective as Cold and Lonely when there is community structure to exploit, though it is not as effective as Stable Structure, which leverages both community structure and temperature information.
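As a concrete illustration of the simpler baseline, here is a minimal sketch of the Cold and Lonely ordering described above, assuming a networkx graph and a dict mapping each node to its temperature; the helper names are my own, and degrees are taken from the unperturbed graph.

```python
# A minimal sketch of the Cold and Lonely ordering described above,
# assuming a networkx graph and a dict mapping each node to "cold",
# "unknown", or "hot". Helper names are my own; degrees are taken from
# the unperturbed graph.
import networkx as nx

def cold_and_lonely_order(G, adversary, temperature):
    """Order candidates: cold first, then unknown, then hot, breaking
    ties within each temperature by increasing degree."""
    rank = {"cold": 0, "unknown": 1, "hot": 2}
    candidates = [v for v in G
                  if v != adversary and not G.has_edge(adversary, v)]
    return sorted(candidates,
                  key=lambda v: (rank[temperature[v]], G.degree(v)))

def cold_and_lonely_attack(G, adversary, temperature, budget):
    """Add `budget` new edges from the adversary in the above order."""
    H = G.copy()
    for v in cold_and_lonely_order(G, adversary, temperature)[:budget]:
        H.add_edge(adversary, v)
    return H
```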

Complex Network Effects on the Robustness of Graph Convolutional Networks

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users’ confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents the first work aimed at leveraging network characteristics to improve robustness of these methods. Our focus is on using network features to choose the training set, rather than selecting the training set at random. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random training baseline. This increase in robustness is often as substantial as tripling the amount of randomly selected training data. Even in cases where success is relatively easy for the attacker, we show that classification performance degrades much more gradually using the proposed methods, with weaker incorrect predictions for the attacked nodes. Finally, we investigate the potential tradeoff between robustness and performance in various datasets.
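The two selection strategies are simple enough to sketch directly. Below is a minimal Python sketch assuming networkx and a node-to-class dict; both function names are my own, and greedy_cover reflects one plausible reading of "most neighbors minimally connected to the training set."

```python
# A minimal sketch of the two training-set selection strategies, assuming
# networkx and a dict of node -> class label. greedy_cover reflects one
# plausible reading of "most neighbors minimally connected to the
# training set"; both function names are my own.
import networkx as nx

def highest_degree_per_class(G, labels, per_class):
    """Strategy (1): the highest-degree nodes within each class."""
    train = []
    for c in set(labels.values()):
        members = sorted((v for v in G if labels[v] == c),
                         key=G.degree, reverse=True)
        train.extend(members[:per_class])
    return train

def greedy_cover(G, train_size):
    """Strategy (2): iteratively add the node touching the most nodes
    that currently have the fewest edges into the training set."""
    train = []
    coverage = {v: 0 for v in G}  # edges from v into the training set
    for _ in range(train_size):
        low = min(coverage[v] for v in G if v not in train)
        best = max((v for v in G if v not in train),
                   key=lambda v: sum(1 for u in G[v] if coverage[u] == low))
        train.append(best)
        for u in G[best]:
            coverage[u] += 1
    return train
```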

Topological effects on attacks against vertex classification

Summary

Vertex classification is vulnerable to perturbations of both graph topology and vertex attributes, as shown in recent research. As in other machine learning domains, concerns about robustness to adversarial manipulation can prevent potential users from adopting proposed methods when the consequence of action is very high. This paper considers two topological characteristics of graphs and explores the way these features affect the amount the adversary must perturb the graph in order to be successful. We show that, if certain vertices are included in the training set, it is possible to substantially increase an adversary's required perturbation budget. On four citation datasets, we demonstrate that if the training set includes high-degree vertices or vertices that ensure all unlabeled nodes have neighbors in the training set, the adversary's budget for the Nettack poisoning attack often increases substantially, by a factor of 2 or more, over random training. Even for especially easy targets (those that are misclassified after just one or two perturbations), performance degrades much more slowly, with much lower probabilities assigned to the incorrect classes. In addition, we demonstrate that this robustness either persists when recently proposed defenses are applied, or is competitive with the resulting performance improvement for the defender.

Improving robustness to attacks against vertex classification

Published in:
15th Intl. Workshop on Mining and Learning with Graphs, 5 August 2019.

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users' confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents work in progress aiming to make vertex classification robust to these types of attacks. We investigate two aspects of this problem: (1) the classification model and (2) the method for selecting training data. Our alternative classifier is a support vector machine (with a radial basis function kernel), which is applied to an augmented node feature vector obtained by appending the node's attributes to a Euclidean vector representing the node based on the graph structure. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random training baseline. Even in cases where success is relatively easy for the attacker, we show that the classification and training alternatives allow classification performance to degrade much more gradually, with weaker incorrect predictions for the attacked nodes.
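Below is a minimal sketch of the alternative classifier described above, assuming numpy, networkx, and scikit-learn. The summary does not name the structural embedding, so a spectral embedding of the adjacency matrix stands in here; function names are my own.

```python
# A minimal sketch of the alternative classifier described above: an RBF
# SVM on node attributes concatenated with a structural embedding. The
# summary does not name the embedding, so a spectral embedding of the
# adjacency matrix stands in here. Assumes numpy, networkx, scikit-learn.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def structural_embedding(G, dim=16):
    """Eigenvectors of the adjacency matrix for the largest eigenvalues."""
    A = nx.to_numpy_array(G)        # rows follow list(G) order
    _, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    return vecs[:, -dim:]

def fit_augmented_svm(G, X_attr, train_idx, y_train, dim=16):
    """X_attr: (n_nodes, n_attrs) attribute matrix in list(G) row order."""
    X = np.hstack([X_attr, structural_embedding(G, dim)])
    clf = SVC(kernel="rbf", probability=True)  # RBF-kernel SVM
    clf.fit(X[train_idx], y_train)
    return clf, X
```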

Artificial intelligence: short history, present developments, and future outlook, final report

Summary

The Director's Office at MIT Lincoln Laboratory (MIT LL) requested a comprehensive study on artificial intelligence (AI) focusing on present applications and future science and technology (S&T) opportunities in the Cyber Security and Information Sciences Division (Division 5). This report elaborates on the main results from the study. Since the AI field is evolving so rapidly, the study scope was to look at the recent past and ongoing developments to lead to a set of findings and recommendations. It was important to begin with a short AI history and a lay of the land on representative developments across the Department of Defense (DoD), intelligence communities (IC), and Homeland Security. These areas are addressed in more detail within the report.

A main deliverable from the study was to formulate an end-to-end AI canonical architecture that was suitable for a range of applications. The AI canonical architecture, formulated in the study, serves as the guiding framework for all the sections in this report. Even though the study primarily focused on cyber security and information sciences, the enabling technologies are broadly applicable to many other areas. Therefore, we dedicate Section 3 to enabling technologies. The discussion on enabling technologies helps the reader distinguish among AI, machine learning algorithms, and the specific techniques needed to make an end-to-end AI system viable.

In order to understand the lay of the land in AI, study participants cast a fairly wide net within MIT LL and external to the Laboratory (government, commercial companies, the defense industrial base, peers, academia, and AI centers). In addition to the study participants (shown in the next section under acknowledgements), we also assembled an internal review team (IRT). The IRT was extremely helpful in providing feedback and in helping with the formulation of the study briefings as we transitioned from data-gathering mode to study synthesis. The format followed throughout the study was to highlight relevant content that substantiates the study findings and to identify a set of recommendations.

An important finding is the significant AI investment by the so-called "big 6" commercial companies: Google, Amazon, Facebook, Microsoft, Apple, and IBM. They dominate AI ecosystem research and development (R&D) investments within the U.S. According to a recent McKinsey Global Institute report, cumulative R&D investment in AI amounts to about $30 billion per year, substantially higher than the R&D investment within the DoD, IC, and Homeland Security. Therefore, the DoD will need to be very strategic about investing where needed, while at the same time leveraging the technologies already developed and available from a wide range of commercial applications.

As we will discuss in Section 1 as part of the AI history, MIT LL has been instrumental in developing advanced AI capabilities. For example, MIT LL has a long history in the development of human language technologies (HLT), successfully applying machine learning algorithms to difficult problems in speech recognition, machine translation, and speech understanding. Section 4 elaborates on prior applications of these technologies, as well as newer applications in the context of multiple modalities (e.g., speech, text, images, and video). An end-to-end AI system is very well suited to enhancing the capabilities of human language analysis.

Section 5 discusses AI's nascent role in cyber security. There have been cases where AI has already provided important benefits. However, much more research is needed in both the application of AI to cyber security and the associated vulnerability to so-called adversarial AI. Adversarial AI is an area of critical importance to the DoD, IC, and Homeland Security, where malicious adversaries can disrupt AI systems and make them untrusted in operational environments. This report concludes with specific recommendations by formulating the way forward for Division 5 and a discussion of S&T challenges and opportunities. The S&T challenges and opportunities are centered on the key elements of the AI canonical architecture to strengthen the AI capabilities across the DoD, IC, and Homeland Security in support of national security.

Classifier performance estimation with unbalanced, partially labeled data

Published in:
Proc. Machine Learning Research, Vol. 88, 2018, pp. 4-16.

Summary

Class imbalance and lack of ground truth are two significant problems in modern machine learning research. These problems are especially pressing in operational contexts where the total number of data points is extremely large and the cost of obtaining labels is very high. In the face of these issues, accurate estimation of the performance of a detection or classification system is crucial to inform decisions based on the observations. This paper presents a framework for estimating the performance of a binary classifier in such a context. We focus on the scenario where each set of measurements has been reduced to a score, and the operator only investigates data when the score exceeds a threshold. The operator is blind to the number of missed detections, so performance estimation targets two quantities: recall and the derivative of precision with respect to recall. Measuring with respect to error in these two metrics, simulations in this context demonstrate that labeling outliers not only outperforms random labeling, but often matches the performance of an adaptive method that attempts to choose the optimal data for labeling. Application to real anomaly detection data confirms the utility of the approach and suggests directions for future work.
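For concreteness, here is a minimal Python sketch (assuming numpy) of how the two target quantities can be computed from scores and labels on the investigated items. The paper's estimators for the unlabeled, below-threshold region are not reproduced; the total_positives argument stands in for whatever estimate of the positive count the framework supplies.

```python
# A minimal sketch of the two target quantities, recall and the derivative
# of precision with respect to recall, computed from scores and labels on
# the investigated (above-threshold) items only. total_positives stands in
# for an estimate of the positive count; assumes numpy.
import numpy as np

def precision_recall_derivative(scores, labels, total_positives):
    """scores, labels: numpy arrays over investigated items; labels are
    0/1. Returns per-rank recall and precision, plus the numerical
    derivative of precision with respect to recall where recall changes."""
    order = np.argsort(-scores)              # highest scores first
    hits = np.cumsum(labels[order])          # true positives at rank k
    k = np.arange(1, len(scores) + 1)
    precision = hits / k
    recall = hits / total_positives
    step = np.diff(recall) > 0               # ranks where recall increases
    dprec = np.diff(precision)[step] / np.diff(recall)[step]
    return recall, precision, dprec
```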

SIAM data mining "brings it" to annual meeting

Summary

The Data Mining Activity Group is one of SIAM's most vibrant and dynamic activity groups. To better share our enthusiasm for data mining with the broader SIAM community, our activity group organized six minisymposia at the 2016 Annual Meeting. These minisymposia included 48 talks organized by 11 SIAM members.

Intersection and convex combination in multi-source spectral planted cluster detection

Published in:
IEEE Global Conf. on Signal and Information Processing, GlobalSIP, 7-9 December 2016.

Summary

Planted cluster detection is an important form of signal detection when the data are in the form of a graph. When there are multiple graphs representing multiple connection types, the method of aggregation can have significant impact on the results of a detection algorithm. This paper addresses the tradeoff between two possible aggregation methods: convex combination and intersection. For a spectral detection method, convex combination dominates when the cluster is relatively sparse in at least one graph, while the intersection method dominates in cases where it is dense across graphs. Experimental results confirm the theory. We consider the context of adversarial cluster placement, and determine how an adversary would distribute connections among the graphs to best avoid detection.
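The two aggregation strategies are easy to state in code. Below is a minimal Python sketch assuming numpy and 0/1 adjacency matrices; the detection statistic is an illustrative modularity-style choice, not necessarily the paper's exact formulation.

```python
# A minimal sketch of the two aggregation strategies compared above,
# assuming numpy and 0/1 adjacency matrices. The detection statistic is
# an illustrative modularity-style choice, not necessarily the paper's
# exact formulation.
import numpy as np

def convex_combination(adjs, weights):
    """Weighted sum of adjacency matrices (weights sum to 1)."""
    return sum(w * A for w, A in zip(weights, adjs))

def intersection(adjs):
    """Keep an edge only if it appears in every graph."""
    out = adjs[0].copy()
    for A in adjs[1:]:
        out = np.minimum(out, A)
    return out

def spectral_statistic(A):
    """Largest eigenvalue of the residual B = A - d d^T / (2m), a common
    spectral test statistic for a planted cluster."""
    d = A.sum(axis=1)
    m = d.sum() / 2
    B = A - np.outer(d, d) / (2 * m)
    return np.linalg.eigvalsh(B)[-1]         # eigenvalues ascending
```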

Analytical models and methods for anomaly detection in dynamic, attributed graphs

Published in:
Chapter 2, Computational Network Analysis with R: Applications in Biology, Medicine, and Chemistry, 2017, pp. 35-61.

Summary

This chapter is devoted to anomaly detection in dynamic, attributed graphs. There has been a great deal of research on anomaly detection in graphs over the last decade, with a variety of methods proposed. This chapter discusses recent methods for anomaly detection in graphs, with a specific focus on detection within backgrounds based on random graph models. This sort of analysis can be applied for a variety of background models, which can incorporate topological dynamics and attributes of vertices and edges. The authors have developed a framework for anomalous subgraph detection in random background models, based on linear algebraic features of a graph. This includes an implementation in R that exploits structure in the random graph model for computationally tractable analysis of residuals. This chapter outlines this framework within the context of analyzing dynamic, attributed graphs. The remainder of this chapter is organized as follows. Section 2.2 defines the notation used within the chapter. Section 2.3 briefly describes a variety of perspectives and techniques for anomaly detection in graph-based data. Section 2.4 provides an overview of models for graph behavior that can be used as backgrounds for anomaly detection. Section 2.5 describes our framework for anomalous subgraph detection via spectral analysis of residuals, after the data are integrated over time. Section 2.6 discusses how the method described in Section 2.5 can be efficiently implemented in R using open source packages. Section 2.7 demonstrates the power of this technique in controlled simulation, considering the effects of both dynamics and attributes on detection performance. Section 2.8 gives a data analysis example within this context, using an evolving citation graph based on a commercially available document database of public scientific literature. Section 2.9 summarizes the chapter and discusses ongoing research in this area.
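The core of the residuals framework can be sketched in a few lines. The chapter's implementation is in R; the Python version below (assuming numpy) is an illustrative stand-in, with observed and expected adjacency matrices as inputs.

```python
# A minimal sketch of the residuals framework outlined in this chapter:
# integrate observed-minus-expected adjacency matrices over time, then
# examine the leading eigenvectors of the residual for anomalous
# structure. The chapter's implementation is in R; this Python version
# (assuming numpy) is an illustrative stand-in.
import numpy as np

def integrated_residual(observed, expected):
    """observed, expected: lists of adjacency matrices, one per time
    step; returns the time-integrated residual matrix."""
    return sum(A - E for A, E in zip(observed, expected))

def top_residual_subspace(R, k=2):
    """Leading k eigenpairs of the symmetrized residual; nodes with
    large projections flag candidate anomalous subgraphs."""
    R = (R + R.T) / 2
    vals, vecs = np.linalg.eigh(R)
    return vals[-k:], vecs[:, -k:]
```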