
A robust information source estimator with sparse observations

Abstract

Purpose/Background

In this paper, we consider the problem of locating the information source with sparse observations. We assume that a piece of information spreads in a network following a heterogeneous susceptible-infected-recovered (SIR) model, where a node is said to be infected when it receives the information and recovered when it removes or hides the information. We further assume that a small subset of infected nodes are reported, from which we need to find the source of the information.

Methods

We adopt the sample path-based estimator developed in the work of Zhu and Ying (arXiv:1206.5421, 2012) and prove that on infinite trees, the sample path-based estimator is a Jordan infection center with respect to the set of observed infected nodes. In other words, the sample path-based estimator minimizes the maximum distance to observed infected nodes. We further prove that the distance between the estimator and the actual source is upper bounded by a constant independent of the number of infected nodes with a high probability on infinite trees.

Results

Our simulations on tree networks and real-world networks show that the sample path-based estimator is closer to the actual source than several other algorithms.

Conclusions

In this paper, we proposed the sample path-based estimator for information source localization. Both theoretical analysis and numerical evaluations showed that the sample path-based estimator is robust and close to the actual source.

1 Background

In this paper, we are interested in locating the source of information that spreads in a network by using sparse observations. The solution to this problem has important applications such as locating the sources of epidemics, the sources of news/rumors in social networks, or the sources of online computer viruses. The problem has been studied in [1]-[5] under a homogeneous susceptible-infected (SI) model for information diffusion and in [6] under a homogeneous susceptible-infected-recovered (SIR) model for information diffusion, assuming that a complete snapshot of the network is given.

While [1]-[6] answered some basic questions about information source detection in large-scale networks, a complete snapshot of a real-world network, which may have hundreds of millions of nodes, is expensive to obtain. Furthermore, these works assume homogeneous infection across links and homogeneous recovery across nodes, but in reality, most networks are heterogeneous. For example, people close to each other are more likely to share rumors, and epidemics are more infectious in the regions with poor medical care systems. Therefore, it is important to take sparse observations and network heterogeneity into account when locating information sources. In this paper, we assume that the information spreads in the network following a heterogeneous SIR model and assume that only a small subset of infected nodes are reported to us. The goal is to identify the information source in a heterogeneous network by using sparse observations.

We use the sample path-based approach developed in [6] for locating the information source with sparse observations. Surprisingly, we find that the sample path-based estimator is robust to network heterogeneity and to the number of observed infected nodes. In particular, our results show that even under a heterogeneous SIR model and with sparse observations, the sample path-based estimator remains a Jordan infection center in infinite trees, where the Jordan infection centers with a partial observation are the nodes that minimize the maximum distance to observed infected nodes. We further show that in an infinite tree, the distance between a Jordan infection center and the actual source is, with a high probability, bounded by a value independent of the size of the infected subnetwork, where the infected subnetwork is the connected subnetwork consisting of nodes that are either infected or recovered. In other words, if the size of the infected subnetwork is n, the result says that a Jordan infection center is within a distance of O(1) from the actual source.

We remark that the locations of the Jordan centers only depend on the network topology and are independent of the infection and recovery probabilities, so the sample path-based estimators (or the Jordan infection centers) are also robust to the information diffusion model. This makes them very appealing in practice since accurate knowledge of the SIR parameters can be difficult to obtain in reality.

1.1 Related works

Other than [1]-[6], there are several related works in this area, including the following: (1) detecting the first adopter of an innovation based on game theory [7], in which the maximum likelihood estimator is derived but the computational complexity of finding the estimator is exponential in the number of nodes; (2) distinguishing epidemic infection from random infection under the SI model [8]; and (3) geospatial abduction, which deals with reasoning about locations in a two-dimensional geographical area that can explain observed phenomena [9],[10]. A recent paper [11] also proposed a dynamic message passing (DMP) algorithm to detect the information source under a general SIR model with complete or partial observations. However, the algorithm needs complete information about the infection and recovery probabilities. In addition, the complexity of DMP is very high under partial observations since almost all nodes in the network are candidates for the source, and the calculation needs to be repeated for every possible candidate. In the simulations, we will show that our algorithm significantly outperforms DMP in terms of both accuracy and speed; our algorithm is 400 times faster even when we limit the DMP algorithm to a subnetwork.

2 Methods

2.1 A heterogeneous SIR model

In this section, we introduce the heterogeneous SIR model for information propagation. Different from the homogeneous SIR model in which infection and recovery probabilities are both homogeneous [6], the heterogeneous SIR model we consider allows different infection probabilities at different links and different recovery probabilities at different nodes.

Consider an undirected graph $G=\{V,E\}$, where $V$ is the set of nodes and $E$ is the set of edges. Denote by $(u,v)\in E$ the edge between node $u$ and node $v$. Each node can be in one of three states: susceptible (S), infected (I), and recovered (R). A node is said to be susceptible if it has not received the information, infected after it receives the information, and recovered if it removes or hides the information. Time is slotted. At the beginning of each time slot, each infected node attempts to contact all of its susceptible neighbors, and a contact from node $u$ to node $v$ succeeds with probability $q_{uv}$. A susceptible node becomes infected after being successfully contacted by one of its infected neighbors. In the middle of each time slot, each infected node $v$, if it was infected before the current time slot, recovers with probability $p_v$. A recovered node cannot be infected again. We assume that contacts succeed independently across links and time slots and that nodes recover independently across nodes and time slots.

Consider the network shown in Figure 1, where node e is in the susceptible state, nodes a and c are in the infected state, and nodes b and d are in the recovered state. Then, in the next time slot, node e becomes infected with probability

$$1-(1-q_{ae})(1-q_{ce}),$$

and nodes a and c recover with probability $p_a$ and $p_c$, respectively.

Figure 1. An example for illustrating the heterogeneous SIR model.
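To make the time-slotted dynamics concrete, the following Python sketch simulates one realization of the heterogeneous SIR process described above. It is a minimal illustration under assumed data structures (an adjacency-list dictionary, per-edge probabilities q, and per-node probabilities p); it is not the authors' implementation.

```python
import random

def simulate_sir(neighbors, q, p, source, max_slots=50, rng=random.Random(0)):
    """Simulate the heterogeneous SIR process of Section 2.1.

    neighbors: dict mapping each node to a list of its neighbors.
    q: dict mapping an undirected edge (u, v) to the infection probability q_uv.
    p: dict mapping a node v to its recovery probability p_v.
    Returns the state ('S', 'I', or 'R') of every node after max_slots slots.
    """
    state = {v: 'S' for v in neighbors}
    state[source] = 'I'
    for _ in range(max_slots):
        infected = [v for v in neighbors if state[v] == 'I']
        if not infected:
            break
        # Beginning of the slot: every infected node contacts each susceptible
        # neighbor independently; a successful contact infects the neighbor.
        newly_infected = set()
        for u in infected:
            for v in neighbors[u]:
                if state[v] == 'S' and rng.random() < q.get((u, v), q.get((v, u), 0.0)):
                    newly_infected.add(v)
        # Middle of the slot: nodes infected in earlier slots recover with
        # probability p_v; nodes infected in this very slot cannot recover yet.
        for u in infected:
            if rng.random() < p[u]:
                state[u] = 'R'
        for v in newly_infected:
            state[v] = 'I'
    return state
```

Because each contact over edge $e$ succeeds independently with probability $q_e$ in every slot, the number of slots until the first successful contact is geometric with mean $1/q_e$, which matches the simulation setup used later in the paper.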

2.2 Problem formulation

In this section, we formally define the problem of information source detection. Table 1 summarizes the notations used in the paper. Adopting the notation in [6], we define $X_v(t)$ to be the state of node $v$ at the end of time slot $t$ such that

$$X_v(t)=\begin{cases} S, & \text{if } v \text{ is in state S at time } t;\\ I, & \text{if } v \text{ is in state I at time } t;\\ R, & \text{if } v \text{ is in state R at time } t.\end{cases}$$
Table 1 Notation table

Let $X(t)=\{X_v(t): v\in V\}$ denote the states of all nodes at time instant $t$.

In this paper, we assume that we only have one partial snapshot of the network, which is a subset of the infected nodes. This observation can be sparse, and details will be given in the next section. We assume that the states of all other nodes are unknown. We let $Y_v$ denote the state of node $v$ in the snapshot such that

$$Y_v=\begin{cases}1, & \text{if node } v \text{ is observed to be infected};\\ 0, & \text{otherwise}.\end{cases}$$

Let $Y=\{Y_v: v\in V\}$. We denote by $v^*$ the information source. The problem of information source detection is to locate $v^*$ based on the partial observation $Y$ and the network topology $G$.

Due to recovery and partial observations, all nodes in the network are potential candidates for the information source. The maximum likelihood estimator of the problem is therefore computationally expensive to find, as pointed out in [6]. In this paper, we follow the sample path-based approach proposed in [6] to find an estimator of $v^*$.

Since $X(t)$ is the state of the network at time $t$, the sequence $\{X(\tau)\}_{0\le\tau\le t}$ specifies the complete infection process. Therefore, we call $X[0,t]=\{X(\tau): 0\le\tau\le t\}$ a sample path, which is the states of all nodes from time 0 to time $t$. We further define a function $F(\cdot)$ such that

$$F(X_v(t))=\begin{cases}1, & \text{if } X_v(t)=I \text{ and } v \text{ is observed};\\ 0, & \text{otherwise}.\end{cases}$$

This function maps the actual state of a node to the observed state of the node, and $F(X(t))=Y$ if and only if $F(X_v(t))=Y_v$ for all $v\in V$. The optimal sample path $X^*[0,t^*]$ is defined to be the most likely sample path that results in the observed snapshot, i.e., it solves the following optimization problem:

$$X^*[0,t^*]=\arg\max_{t,\, X[0,t]\in\mathcal{X}(t)}\Pr(X[0,t]),$$

(1)

where $\mathcal{X}(t)=\{X[0,t]\mid F(X(t))=Y\}$ and $\Pr(X[0,t])$ is the probability that the sample path $X[0,t]$ occurs. The source associated with $X^*[0,t^*]$ is called the sample path-based estimator. It is proved in [6] that the sample path-based estimator on an infinite tree is a Jordan infection center under the homogeneous SIR model with a complete snapshot. The focus of this paper is to identify the sample path-based estimator under the heterogeneous SIR model with sparse observations.

2.3 Main results

In this section, we summarize the main results of this paper.

2.3.1 Main result 1: the Jordan infection centers as the sample path-based estimators

In our theoretical analysis, we consider tree networks with infinitely many levels (called infinite trees) to derive the sample path-based estimator under the heterogeneous SIR model with a partial snapshot. Let $I_Y$ denote the set of observed infected nodes. We define the observed infection eccentricity $\tilde e(v, I_Y)$ of node $v$ to be the maximum distance between $v$ and any observed infected node, where the distance between two nodes is the length of the shortest path connecting them. The Jordan infection centers of the partial snapshot are then defined to be the nodes with the minimum observed infection eccentricity. The following theorem states that on an infinite tree, the sample path-based estimator is a Jordan infection center of the partial snapshot.

Theorem 1.

Consider an infinite tree and assume that the partial snapshot $Y$ contains at least one infected node. The sample path-based estimator, denoted by $\hat v$, is a Jordan infection center, i.e.,

$$\hat v \in \arg\min_{v\in V}\tilde e(v, I_Y).$$

(2)

The proof of this theorem consists of the following key steps.

1. In the first step, we focus on the sample paths originating from node $v$ (i.e., we assume node $v$ is the source). We consider two groups of sample paths, $\mathcal{X}_v(t)$ and $\mathcal{X}_v(t+1)$, where $\mathcal{X}_v(t)$ is the set of sample paths that originate from $v$, have time duration $t$, and are consistent with the partial snapshot, i.e., $F(X(t))=Y$ for any $X[0,t]\in\mathcal{X}_v(t)$. The set $\mathcal{X}_v(t+1)$ is similarly defined. We show that for any $t\ge\tilde e(v, I_Y)$, the most likely sample path in $\mathcal{X}_v(t)$ occurs with a higher probability than the most likely sample path in $\mathcal{X}_v(t+1)$. In other words,

$$\max_{X[0,t]\in\mathcal{X}_v(t)}\Pr(X[0,t]) > \max_{X[0,t+1]\in\mathcal{X}_v(t+1)}\Pr(X[0,t+1]).$$

As a consequence of this result, we conclude that the sample path with the highest probability among those originating from node $v$ has a duration of $\tilde e(v, I_Y)$ (the observed infection eccentricity of node $v$). This result will be proved in Lemma 1 in the ‘Proofs’ section.

2. In the second step, we consider two neighboring nodes, say nodes $u$ and $v$, and assume node $v$ has a smaller observed infection eccentricity than node $u$. Based on Lemma 1, we prove that the optimal sample path associated with node $v$ occurs with a higher probability than that associated with node $u$. The key idea is to construct a sample path originating from node $v$ based on the optimal sample path originating from node $u$ and show that it occurs with a higher probability. This result will be proved in Lemma 2 in the ‘Proofs’ section.

3. We finally prove that starting from any node, there exists a path from that node to a Jordan infection center such that the observed infection eccentricity strictly decreases along the path. Consider the example in Figure 2. Nodes b and f are the two observed infected nodes, so node a is a Jordan infection center with observed infection eccentricity 1. The path from node e to node a is

$$e\to d\to c\to b\to a,$$

along which the observed infection eccentricity decreases as

$$5\to 4\to 3\to 2\to 1.$$
Figure 2. The key intuition behind Theorem 1.

By repeatedly using Lemma 2, it can be shown that the optimal sample path originating from a Jordan infection center occurs with a higher probability than the optimal sample path originating from any node that is not a Jordan infection center, which implies that the sample path-based estimator must be a Jordan infection center.

2.3.2 Main result 2: an O(1) bound on the distance between a Jordan infection center and the actual information source

Unlike the maximum likelihood estimator, the sample path estimator is not guaranteed to be the node that most likely leads to the observation. It has been shown in [6] that on tree networks and under the homogeneous SIR model, the distance between the estimator and the actual source is bounded by a constant with a high probability. It is easy to see that with a partial observation, the distance between the estimator and the actual source cannot be bounded if the observed infected nodes are chosen arbitrarily. In this paper, we consider a class of fairly general sampling algorithms that generate the (possibly sparse) partial observation. The sampling algorithms have the following property: for any set of M infected nodes, the probability that at least one node in the set is reported approaches 1 as M goes to infinity. We call such a sampling algorithm unbiased; in other words, any sufficiently large subset of infected nodes is likely to contain an observed infected node. Note that if each infected node is reported with probability at least δ for some δ>0, independently of other nodes, then the sampling algorithm satisfies this property. Our second main result is that the sample path estimator is within a constant distance of the actual source, independent of the size of the infected subnetwork, provided the sampling algorithm is unbiased. We also emphasize that the observation generated by an unbiased sampling algorithm can be very sparse since we only require that, among M infected nodes, at least one is reported with a high probability when M is sufficiently large.

Theorem 2.

Consider an infinite tree. Let $g_{\min}$ be the lower bound on the number of children of each node and $q_{\min}>0$ be the lower bound on the infection probabilities. Assume $g_{\min}>1$, $g_{\min}q_{\min}>1$, and that the observed infection topology $Y$ contains at least one infected node and is generated by an unbiased sampling algorithm. Then, given $\varepsilon>0$, the distance between the sample path estimator and the actual source is at most $d_\varepsilon$ with probability $1-\varepsilon$, where $d_\varepsilon$ is independent of the size of the infected subnetwork. In other words, the distance is O(1) with a high probability.

The idea of the proof is illustrated using Figure 3, which consists of the following key steps:

Figure 3. The key intuition behind Theorem 2.

1. We first define a one-time-slot infection subtree to be a subtree of the infected subnetwork such that each node on the subtree, except the root of the subtree, is infected in the time slot immediately after its parent is infected. Note that the depth of a one-time-slot infection subtree grows by 1 deterministically in each time slot until the subtree terminates. We further say a node survives at time $t$ if it is the root of a one-time-slot infection subtree that has not terminated by time $t$.

2. In the first step, we prove that there exist at least two surviving nodes within a distance $L$ from the information source. In Figure 3, node a is the information source, and nodes b and c are two surviving nodes.

3. In the second step, we show that, with a high probability, at least one infected node at the bottom of each one-time-slot infection subtree that has not terminated is observed under an unbiased sampling algorithm. In Figure 3, nodes d and f are two sampled nodes corresponding to the two one-time-slot infection subtrees starting from nodes b and c, respectively.

4. Since a one-time-slot infection subtree grows by 1 deterministically at each time slot, the depth of a one-time-slot infection subtree at time $t$ is $t-t_k^I$, where $k$ is the root node of the subtree and $t_k^I$ is its infection time. Recall that the Jordan infection centers minimize the maximum distance to observed infected nodes, so a Jordan infection center must be within an O(1) distance from the two surviving nodes (nodes b and c). Considering Figure 3, we know that the actual source (node a) has an infection eccentricity of at most $t$ since the information can propagate at most $t$ hops in $t$ time slots, so the infection eccentricity of the Jordan infection centers is no more than $t$ by definition. Assume node e in Figure 3 is a Jordan infection center; then it is within a distance of $t$ from nodes d and f, and hence within a distance of O(1) from nodes b and c. Since nodes b and c are no more than $L$ hops from the actual source a, we conclude that the distance between the actual source a and the estimator e is O(1).

2.3.3 Reverse infection algorithm

The Jordan infection centers for general graphs can be identified by the reverse infection algorithm proposed in [6]. In the algorithm, each observed infected node broadcasts its identity (ID) to its neighbors. All nodes in the network record the distinct IDs they receive. When a node receives a new distinct ID, it records it and then broadcasts it to its neighbors. This process stops when some node has received the IDs of all observed infected nodes. It is easy to verify that the set of nodes that first receive all the infected IDs is the set of Jordan infection centers. When there are multiple Jordan infection centers in the graph, we select the one with the maximum infection closeness centrality as the estimate of the information source. The infection closeness centrality of a node is defined as the inverse of the sum of the distances from the node to all observed infected nodes.

We explain the reverse infection algorithm using the example in Figure 4. The red nodes are the observed infected nodes, and the black nodes are the unobserved nodes. The array next to each node records the IDs that the node has received; an ID is colored red once it has been received. For example, in iteration 1, node 7 has received the ID of node 2 (colored red) but has not received the IDs of nodes 1 and 9 (in black). At each iteration, each node broadcasts its newly received IDs to its neighbors. For example, node 4 receives the ID of node 1 in iteration 1, so it broadcasts node 1's ID to its neighbors in iteration 2. The algorithm terminates when some node receives the IDs of all observed infected nodes, and that node is a Jordan infection center. In iteration 3, node 5 receives all the IDs, so node 5 is the Jordan infection center in this example.

Figure 4. An example of the reverse infection algorithm.
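The reverse infection procedure can be prototyped with one breadth-first search per observed infected node, which plays the role of broadcasting that node's ID. The sketch below is our own illustration on an adjacency-list graph (the function and variable names are not from the paper); it returns a node with the minimum observed infection eccentricity and breaks ties by the infection closeness centrality, which is equivalent to the iterative ID-broadcast description above.

```python
from collections import deque

def bfs_distances(neighbors, source):
    """Hop distances from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def reverse_infection(neighbors, observed_infected):
    """Return a Jordan infection center of the observed infected set.

    Assumes observed_infected is non-empty. A node's observed infection
    eccentricity is its largest hop distance to any observed infected node;
    ties are broken by the smallest distance sum (maximum infection closeness).
    """
    dists = {w: bfs_distances(neighbors, w) for w in observed_infected}
    best, best_key = None, None
    for v in neighbors:
        if any(v not in d for d in dists.values()):
            continue  # v cannot receive every ID, so it cannot be the center
        eccentricity = max(d[v] for d in dists.values())
        distance_sum = sum(d[v] for d in dists.values())
        key = (eccentricity, distance_sum)
        if best_key is None or key < best_key:
            best, best_key = v, key
    return best
```

Running one BFS per observed infected node costs O(|I_Y|(|V|+|E|)), so the estimator can be computed quickly even on networks of the size used in the simulations.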

2.3.4 Discussion: robustness

According to the two main results above, the sample path-based estimator remains a Jordan infection center. This is a somewhat surprising result since the locations of the Jordan infection centers are determined by the topology of the network and are independent of the parameters of the heterogeneous SIR model. In other words, the locations of the Jordan infection centers remain the same for different SIR processes as long as the set of observed infected nodes is the same. This property suggests that the sample path-based estimator is robust and can be used when the parameters of the SIR model are unknown, which is very desirable since knowing these parameters is difficult in practice.

In the simulations, we also consider a weighted graph with the link weights chosen according to the SIR parameters and use the weighted Jordan infection centers as the estimator. Interestingly, we will see that its performance is worse than that of the unweighted Jordan infection centers, which again demonstrates the robustness of the sample path-based estimator.

Furthermore, the main results hold as long as the sampling algorithm is unbiased and are independent of the number of samples. So the results are valid for sparse observations and are robust to the number of observations.

3 Results and discussion

3.1 Simulations

In this section, we evaluate the performance of the reverse infection algorithm for the heterogeneous SIR model on different networks including tree networks and real-world networks.

We first describe the heterogeneous SIR model used in the simulations. Each edge $e$ is assigned a weight $q_e$ drawn uniformly from (0,1), and the infection time over edge $e$ is geometrically distributed with mean $1/q_e$. Similarly, each node $v$ is assigned a weight $p_v$ drawn uniformly from (0,1), and its recovery time is geometrically distributed with mean $1/p_v$. The information source is randomly selected. The total number of infected and recovered nodes in each infection graph is within the range [100, 300]. Each infected node in the infection graph is reported independently with probability σ. The snapshots used in the simulations contain at least one infected node. We varied σ and evaluated the performance on different networks.
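The snippet below sketches how such a heterogeneous instance and a sparse observation could be generated: uniform edge and node weights, one simulated outbreak (using the simulate_sir sketch from Section 2.1), and independent reporting of infected nodes with probability σ. The names are ours, and the filtering of outbreaks to between 100 and 300 infected and recovered nodes is omitted for brevity.

```python
import random

rng = random.Random(42)

def make_instance_and_observation(neighbors, sigma, source=None):
    """Draw SIR parameters, run one outbreak, and sample the observed infected set."""
    # Node IDs are assumed comparable (e.g., integers) so each edge is listed once.
    edges = {(u, v) for u in neighbors for v in neighbors[u] if u < v}
    q = {e: rng.uniform(0.0, 1.0) for e in edges}      # per-slot success prob.; mean time 1/q_e
    p = {v: rng.uniform(0.0, 1.0) for v in neighbors}  # per-slot recovery prob.; mean time 1/p_v
    if source is None:
        source = rng.choice(sorted(neighbors))
    state = simulate_sir(neighbors, q, p, source, max_slots=50, rng=rng)
    infected = [v for v in neighbors if state[v] == 'I']
    observed = [v for v in infected if rng.random() < sigma]  # each infected node reports independently
    return source, q, p, observed
```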

We briefly introduce the three baseline algorithms that were compared with the reverse infection algorithm (RI).

1. Closeness centrality algorithm (CC): The closeness centrality algorithm selects the node with the maximum infection closeness as the information source.

2. Weighted reverse infection algorithm (wRI): The weighted reverse infection algorithm selects the node with the minimum weighted infection eccentricity as the information source. The weighted infection eccentricity is defined in the same way as the infection eccentricity except that the length of a path is the sum of the link weights rather than the number of hops, where the weight of edge $e$ is the average time it takes the information to spread over the link, i.e., $1/q_e$ (see the sketch after this list).

3. Weighted closeness centrality algorithm (wCC): The weighted closeness centrality algorithm selects the node with the maximum weighted infection closeness as the information source.
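As a rough illustration of the weighted variants, the sketch below computes weighted shortest-path distances with Dijkstra's algorithm using link weights $1/q_e$ and then picks the wRI estimator (minimum weighted eccentricity) and the wCC estimator (maximum weighted closeness). It is our own sketch of the definitions above, not the authors' code.

```python
import heapq

def dijkstra(neighbors, weight, source):
    """Weighted shortest-path distances from source; weight[(u, v)] = 1 / q_uv."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v in neighbors[u]:
            w = weight.get((u, v), weight.get((v, u)))
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def weighted_estimators(neighbors, weight, observed_infected):
    """Return (wRI, wCC): the nodes minimizing the weighted eccentricity and
    the sum of weighted distances to the observed infected nodes."""
    dists = {o: dijkstra(neighbors, weight, o) for o in observed_infected}
    candidates = [v for v in neighbors if all(v in d for d in dists.values())]
    wri = min(candidates, key=lambda v: max(d[v] for d in dists.values()))
    wcc = min(candidates, key=lambda v: sum(d[v] for d in dists.values()))
    return wri, wcc
```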

3.1.1 Tree networks

We first evaluated the performance of the RI algorithm on tree networks.

Regular trees

A g-regular tree is a tree where each node has g neighbors. We set the degree g=5 in our simulations.

We varied the sample probability σ from 0.01 to 0.1. The simulation results are summarized in Figure 5a, which shows the average distance between the estimator and the actual information source versus the sampling probability. As the sample probability increases, the performance of all algorithms improves. When the sample probability is larger than 6%, the average distance becomes stable, which means that a small number of observed infected nodes is enough to obtain a good estimator. We also notice that the average distance of RI is smaller than that of all other algorithms and is less than one hop when σ≥0.04. wRI performs similarly to RI when the sample probability is small (0.01) but becomes much worse as the sample probability increases.

Figure 5. The performance of RI, CC, wRI, and wCC on different graphs: (a) regular tree; (b) binomial tree; (c) the power grid network; (d) the Internet autonomous systems network.

Binomial trees

We further evaluated the performance of RI and other algorithms on binomial trees T(ξ,β) where the number of children of each node follows a binomial distribution such that ξ is the number of trials and β is the success probability of each trial. In the simulations, we selected ξ=10 and β=0.4. Again, we varied σ from 0.01 to 0.1. The results are shown in Figure 5b. Similar to the regular trees, the performance of RI dominates CC, wRI, and wCC, and the difference in terms of the average number of hops is approximately 1 when σ≥0.03.
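A binomial tree of this kind can be grown level by level, as in the short sketch below (a hypothetical helper written only for illustration).

```python
import random

def binomial_tree(xi, beta, depth, rng=random.Random(1)):
    """Grow a tree where each node has Binomial(xi, beta) children, down to `depth` levels.

    Returns an adjacency list {node: [neighbors]} with integer node IDs; node 0 is the root.
    """
    neighbors = {0: []}
    frontier = [0]
    for _ in range(depth):
        next_frontier = []
        for u in frontier:
            n_children = sum(rng.random() < beta for _ in range(xi))  # one Binomial(xi, beta) draw
            for _ in range(n_children):
                child = len(neighbors)
                neighbors[child] = [u]
                neighbors[u].append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return neighbors
```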

3.1.2 Real-world networks

In this section, we conducted experiments on two real-world networks: the Internet autonomous systems (IAS) network which is available at http://snap.stanford.edu/data/index.html and the power grid (PG) network which is available at http://www-personal.umich.edu/~mejn/netdata/.
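For reference, both datasets can be loaded with networkx once downloaded; the file names below are placeholders for whatever the downloaded files are called, and the exact formats should be checked against the respective websites.

```python
import networkx as nx

# Power grid network from the second URL (GML format); the file name is a placeholder.
power_grid = nx.read_gml("power.gml", label="id")

# Internet autonomous systems snapshot from SNAP (whitespace-separated edge list,
# lines starting with '#' are comments); the file name is a placeholder.
ias = nx.read_edgelist("as20010331.txt", comments="#", nodetype=int)

print(power_grid.number_of_nodes(), power_grid.number_of_edges())
print(ias.number_of_nodes(), ias.number_of_edges())
```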

The power grid network

The power grid network has 4,941 nodes and 6,594 edges, i.e., about 1.33 edges per node, so the power grid network is sparse. The simulation results are shown in Figure 5c. In the power grid network, we can see that RI and wRI have similar performance, and both outperform CC and wCC by at least one hop when σ≥0.04.

The internet autonomous systems network

The Internet autonomous systems network snapshot was collected on 31 March 2001. There are 10,670 nodes and 22,002 edges in the network. The simulation results are shown in Figure 5d. wRI and wCC always perform worse than RI. Although RI and CC have similar performance when the sample probability is large, RI outperforms CC when σ≤0.03.

3.1.3 RI versus DMP

We finally compared the performance of RI and DMP. We conducted the simulation on the power grid network and fixed the sample probability to be 10%. Under this setting, the complexity of DMP is very high since the DMP computation needs to be repeated for every node in the network. Since nodes far away from the observed infected nodes are not likely to be the information source, we ran DMP over a small subset of nodes close to the Jordan infection centers (roughly 10%) to reduce the complexity of the algorithm.

We tested the speed of RI and DMP on a machine with 1.8 GB of memory, a 4-core 2.4 GHz Intel i5 CPU, and Ubuntu 12.10. The algorithms were implemented in Python 2.7. On average, RI took 0.57 s to locate the estimator for one snapshot while DMP took 229.12 s, so RI is much faster than DMP.

Figure 6 shows the cumulative distribution function (CDF) of the distance from the estimator to the actual source under DMP and RI. We can see that RI dominates DMP; in particular, 71% of the estimators under RI are no more than seven hops from the actual source compared to 57% under DMP. Therefore, RI outperforms DMP in terms of both speed and accuracy. We remark that we did not compare the performance of RI and DMP on the IAS network because the complexity of running DMP on a large-sized network like the IAS network is prohibitively high.

Figure 6. The CDF of RI and DMP on the power grid network.

3.2 Proofs

In this section, we present the proofs of the main results.

3.2.1 Proof of Theorem 1

Denote by $I_Y=\{v\mid Y_v=1\}$ the set of observed infected nodes and by $\{v\mid Y_v=0\}$ the set of unobserved nodes. Given a node $v$, define the optimal time $t_v^*$ to be

$$t_v^* \in \arg\max_{t}\max_{X[0,t]\in\mathcal{X}(t)}\Pr\big(X[0,t]\mid v \text{ is the information source}\big),$$

i.e., it is the duration of the optimal sample path with node $v$ as the information source.

Lemma 1 (Time Inequality).

Consider an infinite tree rooted at $v_r$. Assume that $v_r$ is the information source and the observed snapshot $Y$ contains at least one infected node. If $\tilde e(v_r, I_Y)\le t_1<t_2$, the following inequality holds:

$$\max_{X[0,t_1]\in\tilde{\mathcal{X}}(t_1)}\Pr(X[0,t_1]) > \max_{X[0,t_2]\in\tilde{\mathcal{X}}(t_2)}\Pr(X[0,t_2]),$$

where $\tilde{\mathcal{X}}(t)=\{X[0,t]\mid Y=F(X(t))\}$. In addition,

$$t_{v_r}^* = \tilde e(v_r, I_Y) = \max_{u\in I_Y} d(v_r,u),$$

i.e., $t_{v_r}^*$ is equal to the observed infection eccentricity of $v_r$ with respect to $I_Y$.

Proof.

We adopt the notations defined in [6], which are listed below:

$C(v)$ is the set of children of node $v$.

$\phi(v)$ is the parent of node $v$.

$\mathcal{Y}_k$ is the set of infection topologies in which the maximum distance from $v_r$ to an observed infected node is $k$. All possible infection topologies are thus partitioned into the countable subsets $\{\mathcal{Y}_k\}$.

$T_v$ is the tree rooted at $v$.

$T_v^u$ is the tree rooted at $v$ without the branch from its neighbor $u$.

$X([0,t], T_v^u)$ is the sample path restricted to the topology $T_v^u$.

$t_v^I$ and $t_v^R$ are the infection time and the recovery time of node $v$.

Considering the case where the time difference between the two sample paths is 1, we will show that

$$\max_{X[0,t]\in\tilde{\mathcal{X}}(t)}\Pr(X[0,t]) > \max_{X[0,t+1]\in\tilde{\mathcal{X}}(t+1)}\Pr(X[0,t+1])$$

for all $t\ge\tilde e(v_r, I_Y)$; the general case $t_1<t_2$ then follows by applying this inequality repeatedly.

Next, we use induction over $\mathcal{Y}_k$.

Step 1 ($k=0$). In this case, $v_r$ is the only observed infected node. Given a sample path $X[0,t+1]\in\tilde{\mathcal{X}}(t+1)$, the probability of the sample path can be written as

$$\Pr(X[0,t+1]) = \Pr(X[0,t])\Pr\big(X(t+1)\mid X[0,t]\big).$$

Since $v_r$ is the only observed infected node and the states of all other nodes are unknown, we can choose $X'[0,t]\in\tilde{\mathcal{X}}(t)$ to be the same as the first $t$ time slots of $X[0,t+1]$, i.e., $X'[0,t]=X[0,t]$. Hence, we obtain

$$\Pr(X'[0,t]) = \Pr(X[0,t]) > \Pr(X[0,t+1]).$$

Therefore, the case $k=0$ is proved.

Step 2. Assume the inequality holds for $k\le n$ and consider $k=n+1$, i.e., $Y\in\mathcal{Y}_{n+1}$. Clearly, $t\ge n+1\ge 1$ for each $X[0,t]$. Furthermore, the set of subtrees $\mathcal{T}=\{T_u^{v_r}\mid u\in C(v_r)\}$ is divided into two subsets:

$$\mathcal{T}_h = \{T_u^{v_r}\mid u\in C(v_r),\; T_u^{v_r}\cap I_Y=\emptyset\}$$

and

$$\mathcal{T}_i = \mathcal{T}\setminus\mathcal{T}_h.$$

Given $t_{v_r}^R$, the infection processes on the subtrees are mutually independent.

We construct a sample path $X'[0,t]$ that is more likely to occur than $X^*[0,t+1]$ according to the following steps, where $X^*[0,t+1]=\arg\max_{X[0,t+1]\in\tilde{\mathcal{X}}(t+1)}\Pr(X[0,t+1])$.

Part 1 ($\mathcal{T}_i$). For a subtree in $\mathcal{T}_i$, the proof follows Steps 2.b and 2.c of Lemma 1 in [6]. The intuition is as follows: consider a subtree $T_u^{v_r}$ and a sample path on it with duration $t+1$. If $u$ is not infected in the first time slot, we can construct a sample path with duration $t$ by moving all events one time slot earlier; the new sample path (with duration $t$) has a higher probability of occurring than the original one. If $u$ is infected in the first time slot, we can invoke the induction assumption on the subtree rooted at $u$, whose observed infection topology belongs to $\mathcal{Y}_k$ for some $k\le n$.

Part 2 ($v_r$). In this part, we have the freedom to assign the state of the unobserved source. In Part 1, the infection time of the root $u$ of each subtree in $\mathcal{T}_i$ under $X'[0,t]$ is either the same as or one time slot earlier than its infection time under $X^*[0,t+1]$. Therefore, if $t_{v_r}^R\le t$, the recovery time of the source $v_r$ in $X'[0,t]$ can be assigned to be the same as that in $X^*[0,t+1]$.

If $t_{v_r}^R=t+1$, the source $v_r$ recovers at time slot $t+1$, which means $v_r$ is not observed since the observation set only contains infected nodes. Therefore, in $X'[0,t]$ we assign the source to be in state I at time $t$, which is the same as the state of $v_r$ at time $t$ in $X^*[0,t+1]$.

If $t_{v_r}^R>t+1$, $v_r$ remains infected in the sample path $X^*[0,t+1]$, and we assign the source to be in state I in $X'[0,t]$.

In summary, under the assignment above, the states of the source $v_r$ in $X'[0,t]$ are the same as those in the first $t$ time slots of $X^*[0,t+1]$.

Part 3 ($\mathcal{T}_h$). Based on the conclusion of Part 2, the subtrees belonging to $\mathcal{T}_h$ in $X'[0,t]$ mimic the behavior of the first $t$ time slots of $X^*[0,t+1]$.

Since $X^*[0,t+1]$ has one extra time slot during which some extra events occur, $X'[0,t]$ occurs with a higher probability on the subtrees in $\mathcal{T}_h$.

According to the discussion above, we conclude that the time inequality holds for $k=n+1$ and hence for any $k$ by the principle of induction. Therefore, the lemma holds. □

Lemma 2 (Adjacent nodes inequality).

Consider an infinite tree with a partial observation $Y$ that contains at least one infected node. For $u,v\in V$ such that $(u,v)\in E$, if $t_u^*>t_v^*$, then

$$\Pr\big(X_u^*[0,t_u^*]\big) < \Pr\big(X_v^*[0,t_v^*]\big),$$

where $X_u^*[0,t_u^*]$ is the optimal sample path associated with root $u$.

Proof.

The proof of the lemma follows the proof of Lemma 2 in [6]. The key idea is to construct a sample path rooted at $v$ which has a higher probability than the optimal sample path rooted at $u$. It is not hard to see that $t_u^*=t_v^*+1$ based on the definition of the infection eccentricity. The graph is partitioned into $T_v^u$ and $T_u^v$, which are mutually independent after the infections of $v$ and $u$. With this observation, we construct $\tilde X_v[0,t_v^*]$ which infects $u$ in the first time slot. Then $\tilde X_v([0,t_v^*], T_v^u)$ mimics the behavior of $X_u^*([0,t_u^*], T_v^u)$, and $\tilde X_v([0,t_v^*-1], T_u^v)$ has a higher probability than $X_u^*([0,t_u^*], T_u^v)$ based on Lemma 1. □

The adjacent nodes inequality results in partial orders in the tree and makes it possible to compare the likelihood of optimal sample paths associated with adjacent nodes without knowing the actual probability of the optimal sample path. Following the proof of Theorem 4 in [6], it can be shown that in tree networks, from any node, there exists a path from the node to a Jordan infection center such that the observed infection eccentricity strictly decreases along the path. By repeatedly using Lemma 2, we can then prove that the source of the optimal sample path must be a Jordan infection center.

3.2.2 Proof of Theorem 2

In this subsection, we prove that the sample path estimator is within a constant distance of the actual source, independent of the size of the infected subnetwork. Given a tree rooted at the source $v^*$, where the information starts from $v^*$ and spreads following the heterogeneous SIR model, we define the following three branching processes:

1. $Z_l(T_{v^*})$ denotes the set of nodes at level $l$ of the tree $T_{v^*}$ that are in the infected or recovered state, and $|Z_l(T_{v^*})|$ denotes its cardinality. Note that $Z_0(T_{v^*})=\{v^*\}$. We call this process the original infection process.

2. $Z_l^\tau(T_{v^*})$ denotes the set of infected and recovered nodes at level $l$ whose parents are in the set $Z_{l-1}^\tau(T_{v^*})$ and who were infected within $\tau$ time slots after their parents were infected. This process adds a deadline $\tau$ on infection: if a node is not infected within $\tau$ time slots after its parent is infected, it is not included in this branching process. This process is called the $\tau$-deadline infection process. From the definition, if $u,v\in Z_l^\tau(T_{v^*})$, then

$$|t_u^I - t_v^I| \le l(\tau-1).$$

For $\tau=1$, we call $Z_l^1(T_{v^*})$ the one-time-slot infection process. The extinction probability of a branching process is the probability that there is no offspring at some level, i.e., $|Z_l^1(T_{v^*})|=0$ for some $l$. Denote by $\rho_v$ the extinction probability of $Z_l^1(T_v^{\phi(v)})$.

3. We define the binomial branching process to be a branching process whose offspring distribution follows the binomial distribution $B(g,\varphi)$, where $g$ is the number of trials and $\varphi$ is the success probability of each trial. Denote by $\rho$ the extinction probability of the binomial branching process.
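For reference, a standard fact about branching processes (not stated explicitly in the paper) connects the binomial branching process to the assumptions of Theorem 2: its extinction probability $\rho$ is the smallest solution in $[0,1]$ of the fixed-point equation of its offspring probability generating function,

$$\rho = \big(1-\varphi+\varphi\rho\big)^{g},$$

and $\rho<1$ exactly when the mean offspring number $g\varphi$ exceeds 1. This is why the condition $g_{\min}q_{\min}>1$ in Theorem 2 ensures that the comparison process $B(g_{\min},q_{\min})$ survives with a positive probability.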

The following notations will be used in later analysis:

$\hat v$ denotes the optimal sample path estimator.

$g_{\min}$ is the lower bound on the number of children, i.e.,

$$\min_{v\in V} |C(v)| \ge g_{\min}.$$

$q_{\min}$ is the lower bound on the infection probabilities, i.e.,

$$q_{\min} = \min_{e\in E} q_e.$$

$\sigma_v^\tau$ is the probability that node $v$ infects at least one of its children within $\tau$ time slots after $v$ is infected.

Given $n_0>0$ and $\tau>0$, define $l^*=\min\{l: |Z_l^\tau(T_{v^*})|>n_0\}$, i.e., $l^*$ is the first level at which the $\tau$-deadline infection process has more than $n_0$ offspring.

Given τ and level L≥2, we consider the following two events:

Event 1: $|Z_L(T_{v^*})|=0$.

Event 2: $l^*\le L$ and at least two one-time-slot infection processes starting from level $l^*$ survive, i.e., there exist $u,v\in Z_{l^*}^\tau(T_{v^*})$ such that, for all $l$, $|Z_l^1(T_u^{\phi(u)})|\ne 0$ and $|Z_l^1(T_v^{\phi(v)})|\ne 0$. In addition, at least one infected node at the bottom of each surviving one-time-slot infection process is observed.

For event 1, no node at level $L$ gets infected and the infection process terminates at or before level $L-1$. So the infection eccentricity of $v^*$ is at most $L-1$, and the minimum infection eccentricity over the network is at most $L-1$. Therefore, the distance between $v^*$ and $\hat v$ is no more than $2(L-1)$.

Considering event 2, we assume that the information propagates for $t$ time slots and denote by $u_1$ and $u_2$ the two surviving nodes at level $l^*$. The deadline property of the $\tau$-deadline infection process implies $t_{u_1}^I\le \tau l^*$ and $t_{u_2}^I\le \tau l^*$. Given a node $\tilde v$ at level $(\tau+1)l^*-1$ with $\tilde v\in T_{u_2}^{\phi(u_2)}$, and a node $v'\in T_{u_1}^{\phi(u_1)}$ which is an observed infected node at the bottom of the infection tree, from Figure 7 we obtain

$$d(\tilde v, v') \ge t - t_{u_1}^I + \tau l^* + 1 \ge t+1.$$
Figure 7. A pictorial description of the distance relations in Theorem 2.

Note that for all $u\in I_Y$,

$$d(v^*, u)\le t < d(\tilde v, v').$$

Since $l^*\le L$, any node at or below level $(\tau+1)L-1$ has an infection eccentricity larger than that of $v^*$. Hence, $\hat v$ cannot be at or below level $(\tau+1)L-1$. Therefore,

$$d(v^*, \hat v) < (\tau+1)L - 1.$$

Next, we prove that the probability that either event 1 or event 2 happens goes asymptotically to 1. Denote by $K_{l^*}$ the number of one-time-slot infection processes which start from level $l^*$ and survive. Denote by $E$ the event that a surviving one-time-slot infection process has at least one observed infected node at its lowest level.

According to the discussion above, the probability that the distance between the estimator and the actual source is no more than (τ+1)L−1 is at least

$$
\begin{aligned}
&\Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\big(K_{l^*}\ge 2,\, l^*\le L\big)\Pr(E)^2\\
&\quad= \Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\big(l^*\le L\big)\Pr\big(K_{l^*}\ge 2 \mid l^*\le L\big)\Pr(E)^2\\
&\quad= \Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\Big(\bigcup_{i=1}^{L}\big\{|Z_i^\tau(T_{v^*})|>n_0\big\}\Big)\Pr\big(K_{l^*}\ge 2 \mid l^*\le L\big)\Pr(E)^2\\
&\quad\ge \Pr\big(|Z_L(T_{v^*})|=0\big) + \bigg(1-\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big) - \Pr\Big(\bigcup_{i=1}^{L}\big\{|Z_i^\tau(T_{v^*})|=0\big\}\Big)\bigg)\\
&\qquad\quad\times\Pr\big(K_{l^*}\ge 2 \mid l^*\le L\big)\Pr(E)^2.
\end{aligned}
$$

In addition, we have

$$\Pr\big(K_{l^*}\ge 2\mid l^*\le L\big) = \sum_{l=1}^{L}\Pr\big(K_{l^*}\ge 2,\, l^*=l\mid l^*\le L\big)$$

(3)

$$= \sum_{l=1}^{L}\Pr\big(K_{l^*}\ge 2\mid l^*=l\big)\Pr\big(l^*=l\mid l^*\le L\big).$$

(4)

In Lemma 3, we prove that the extinction probability of each one-time-slot infection process starting from level $l^*$ is upper bounded by the extinction probability $\rho$ of the binomial branching process $B(g_{\min},q_{\min})$. Therefore, at level $l^*$, we have more than $n_0$ independent one-time-slot infection processes whose extinction probabilities are upper bounded by $\rho$. The probability that at least two of them survive goes asymptotically to 1 as $n_0$ increases. Therefore, for any $\varepsilon_1>0$, there exists a sufficiently large $n_0$ such that

$$\Pr\big(K_{l^*}\ge 2\mid l^*=l\big)\ge 1-\varepsilon_1.$$

Therefore, Equation 4 becomes

$$\Pr\big(K_{l^*}\ge 2\mid l^*\le L\big)\ge (1-\varepsilon_1)\sum_{l=1}^{L}\Pr\big(l^*=l\mid l^*\le L\big) = 1-\varepsilon_1.$$

We show in Lemma 5 that, for any given $\varepsilon_2>0$, $\Pr(E)\ge 1-\varepsilon_2$ when $t$ is sufficiently large. Hence, if $n_0$ and $t$ are sufficiently large, we have

$$\Pr\big(K_{l^*}\ge 2\mid l^*\le L\big)\Pr(E)^2 \ge (1-\varepsilon_1)(1-\varepsilon_2)^2.$$

Therefore,

$$
\begin{aligned}
&\Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\big(K_{l^*}\ge 2,\, l^*\le L\big)\Pr(E)^2\\
&\quad\ge \bigg(1-\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big) - \Pr\Big(\bigcup_{i=1}^{L}\big\{|Z_i^\tau(T_{v^*})|=0\big\}\Big)\bigg)(1-\varepsilon_1)(1-\varepsilon_2)^2 + \Pr\big(|Z_L(T_{v^*})|=0\big)\\
&\quad\ge \underbrace{\bigg(1-\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big)\bigg)}_{\text{Part 1}}(1-\varepsilon_1)(1-\varepsilon_2)^2 + \underbrace{\Big(\Pr\big(|Z_L(T_{v^*})|=0\big) - \Pr\big(|Z_L^\tau(T_{v^*})|=0\big)\Big)}_{\text{Part 2}},
\end{aligned}
$$

(5)

where the last inequality of Equation 5 holds since $|Z_i^\tau(T_{v^*})|=0$ implies $|Z_l^\tau(T_{v^*})|=0$ for all $l\ge i$, so that $\bigcup_{i=1}^{L}\{|Z_i^\tau(T_{v^*})|=0\}=\{|Z_L^\tau(T_{v^*})|=0\}$, and since $(1-\varepsilon_1)(1-\varepsilon_2)^2\le 1$.

For Part 1 in Equation 5, we prove in Lemma 4 that, given $\varepsilon_3>0$, when $\tau$ and $L$ are sufficiently large,

$$1-\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big) > 1-\varepsilon_3.$$

For Part 2 in Equation 5, we have

$$\lim_{\tau\to\infty}\Pr\big(|Z_L^\tau(T_{v^*})|=0\big) = \Pr\big(|Z_L(T_{v^*})|=0\big).$$

Therefore, given ε4>0, when τ is sufficiently large,

$$\Pr\big(|Z_L(T_{v^*})|=0\big) - \Pr\big(|Z_L^\tau(T_{v^*})|=0\big) \ge -\varepsilon_4.$$

Hence, we have

$$\Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\big(K_{l^*}\ge 2,\, l^*\le L\big)\Pr(E)^2 \ge (1-\varepsilon_1)(1-\varepsilon_2)^2(1-\varepsilon_3) - \varepsilon_4.$$

Now, choosing $\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon_4=\varepsilon_5/5$ for some $\varepsilon_5>0$, we have

$$\Pr\big(|Z_L(T_{v^*})|=0\big) + \Pr\big(K_{l^*}\ge 2,\, l^*\le L\big)\Pr(E)^2 \ge 1-\varepsilon_5.$$

Now let $|Y|$ denote the number of infected nodes in the observation $Y$. Define the events $E_1=\{|Z_L(T_{v^*})|=0\}$ and $E_2=\{K_{l^*}\ge 2 \text{ for some } l^*\le L\}$, and let $E_3$ be the event that two of the surviving one-time-slot infection processes each have at least one observed infected node at their bottom. We have

$$\Pr\big(E_1\mid |Y|\ge 1\big) + \Pr\big(E_2\cap E_3\mid |Y|\ge 1\big) = \frac{1}{\Pr(|Y|\ge 1)}\Big(\Pr\big(E_1\cap\{|Y|\ge 1\}\big) + \Pr\big(E_2\cap E_3\cap\{|Y|\ge 1\}\big)\Big).$$

Since $E_2\cap E_3$ implies that $|Y|\ge 1$, we have

$$
\begin{aligned}
\Pr\big(E_1\mid |Y|\ge 1\big) + \Pr\big(E_2\cap E_3\mid |Y|\ge 1\big) &= \frac{1}{\Pr(|Y|\ge 1)}\Big(\Pr\big(E_1\cap\{|Y|\ge 1\}\big) + \Pr\big(E_2\cap E_3\big)\Big)\\
&= \frac{1}{\Pr(|Y|\ge 1)}\Big(\Pr(E_1) - \Pr\big(E_1\cap\{|Y|=0\}\big) + \Pr\big(E_2\cap E_3\big)\Big)\\
&\ge \frac{1}{\Pr(|Y|\ge 1)}\Big(\Pr(E_1) - \Pr\big(|Y|=0\big) + \Pr\big(E_2\cap E_3\big)\Big)\\
&\ge \frac{1}{\Pr(|Y|\ge 1)}\Big(\Pr\big(|Y|\ge 1\big) - \varepsilon_5\Big)\\
&= 1 - \frac{\varepsilon_5}{\Pr(|Y|\ge 1)}.
\end{aligned}
$$

(6)

Note that $\Pr(|Y|\ge 1)$ is a positive constant since the one-time-slot infection process starting from the information source survives with a non-zero probability. The theorem holds by choosing $\varepsilon_5=\varepsilon\Pr(|Y|\ge 1)$. □

Lemma 3.

The extinction probability of a one-time-slot infection process is smaller than the extinction probability of the binomial branching process $B(g_{\min},q_{\min})$, i.e., for all $v\in V$,

$$\rho_v < \rho.$$
Proof.

As shown in Figure 8, we construct a virtual source process $Z_l^{(\mathrm{vs})}(T_v^{\phi(v)})$ and a min-infection process $Z_l^{(\mathrm{mi})}(T_v^{\phi(v)})$ as auxiliary processes over the same tree topology, where $Y_u^{(\mathrm{vs})}$ and $Y_u^{(\mathrm{mi})}$ are binary indicators of whether node $u$ has been infected in the respective process. Denote by $\rho_v^{(\mathrm{vs})}$ and $\rho_v^{(\mathrm{mi})}$ the corresponding extinction probabilities.

Figure 8. A pictorial description of the two auxiliary processes in Lemma 3.

In the min-infection process, the infection spreads over each edge with probability $q_{\min}$. In the virtual source process, the probability that a node $u\in C(v)$ gets infected is

$$\Pr\big(Y_u^{(\mathrm{vs})}=1\big) = \Pr\big(Y_u^{(\mathrm{mi})}=1\big) + \Pr\big(Y_u^{(\mathrm{mi})}=0\big)\cdot\frac{q_{vu}-q_{\min}}{1-q_{\min}} = q_{vu},$$

i.e., for each node $u\in C(v)$, $v$ first tries to infect $u$ with probability $q_{\min}$; if $v$ fails to infect $u$, a virtual source $v'$ tries to infect $u$ with probability $\frac{q_{vu}-q_{\min}}{1-q_{\min}}$. Therefore, the virtual source process has the same distribution as the one-time-slot infection process.

We now couple the min-infection process and the virtual source process as follows:

If $Y_u^{(\mathrm{mi})}=1$, then $Y_u^{(\mathrm{vs})}=1$.

If $Y_u^{(\mathrm{mi})}=0$, then $Y_u^{(\mathrm{vs})}=1$ with probability $\frac{q_{vu}-q_{\min}}{1-q_{\min}}$.

Since a node is more likely to get infected in the virtual source process, we obtain

$$\rho_v^{(\mathrm{vs})} \le \rho_v^{(\mathrm{mi})}.$$

Recalling that the one-time-slot infection process has the same distribution as the virtual source branching process, we obtain $\rho_v\le\rho_v^{(\mathrm{mi})}$ for all $v$.

In addition, the min-infection process has at least as many children per node as the binomial branching process $B(g_{\min},q_{\min})$, with the same infection probability $q_{\min}$ for each child, so the binomial branching process is more likely to die out, i.e., $\rho_v^{(\mathrm{mi})} < \rho$.

In summary, we have shown that

$$\rho_v < \rho.$$

Lemma 4.

Assume there exists $\xi>0$ such that $\sigma_v^\tau < 1-\xi$ for all $v\in V$. Given any $\varepsilon>0$, there exists a constant $L^*$ such that, for any $L\ge L^*$,

$$\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big) \le \varepsilon.$$
Proof.

The proof follows the same argument as that of Lemma 7 in [6]. By choosing

$$L^* = \frac{\log\varepsilon}{\log\big(1-\xi^{n_0}\big)},$$

we obtain, for any $L\ge L^*$ and $\varepsilon>0$,

$$\Pr\Big(\bigcap_{i=1}^{L}\big\{0<|Z_i^\tau(T_{v^*})|\le n_0\big\}\Big) \le \varepsilon.$$

Lemma 5.

For any ε>0, there exists a sufficiently large t such that

$$\Pr(E)\ge 1-\varepsilon.$$
Proof.

Note that the binomial branching process $B(g_{\min},q_{\min})$ is a Galton-Watson (GW) process [12], in which each node has an i.i.d. offspring distribution. The classical result on the instability of the GW process (Theorem 6.2 in [12]) shows that the GW process either dies out or grows to infinity; if it survives, the number of offspring goes to infinity as the level increases. Therefore, after a sufficiently long time, a surviving binomial branching process has a large number of offspring at its lowest level. Since the one-time-slot infection process always has at least as many children as the binomial branching process, a surviving one-time-slot infection process will have a large number of infected nodes at its lowest level as time increases. By the unbiased property of the partial observation, after a sufficiently long time, the probability that at least one infected node at the lowest level is observed goes to 1 asymptotically, i.e.,

$$\Pr(E)\ge 1-\varepsilon.$$

4 Conclusions

In this paper, we studied the problem of detecting the information source under a heterogeneous SIR model with sparse observations. We proved that the optimal sample path estimator on an infinite tree is a node with the minimum observed infection eccentricity with respect to the partial observation. Under a fairly general condition, we proved that the estimator is within a constant distance of the actual information source with a high probability, even with a sparse observation. Extensive simulation results showed that our estimator outperforms other algorithms significantly.

Authors’ information

KZ received his B.E. degree in Electronics Engineering from Tsinghua University, Beijing, China, in 2010. He is currently working towards a Ph.D. degree at the School of Electrical, Computer and Energy Engineering at Arizona State University. His research interest is in social networks.

LY received his B.E. degree from Tsinghua University, Beijing, in 2001 and his M.S. and Ph.D in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2003 and 2007, respectively. During Fall 2007, he worked as a postdoctoral fellow in the University of Texas at Austin. He was an assistant professor at the Department of Electrical and Computer Engineering at Iowa State University from January 2008 to August 2012. He currently is an associate professor at the School of Electrical, Computer and Energy Engineering at Arizona State University and an associate editor of the IEEE/ACM Transactions on Networking. His research interest is broadly in the area of stochastic networks, including big data and cloud computing, cyber security, P2P networks, social networks, and wireless networks. He won the Young Investigator Award from the Defense Threat Reduction Agency (DTRA) in 2009 and NSF CAREER Award in 2010. He was the Northrop Grumman Assistant Professor (formerly the Litton Industries Assistant Professor) in the Department of Electrical and Computer Engineering at Iowa State University from 2010 to 2012.

Abbreviations

CC:

closeness centrality

DMP:

dynamic message passing

RI:

reverse infection

SI:

susceptible-infected

SIR:

susceptible-infected-recovered

wCC:

weighted closeness centrality

wRI:

weighted reverse infection

References

  1. Shah, D, Zaman, T: Detecting sources of computer viruses in networks: theory and experiment. In: Proc. Ann. ACM SIGMETRICS Conf., pp. 203–214. ACM, New York, NY (2010).


  2. Shah D, Zaman T: Rumors in a network: who’s the culprit? IEEE Trans. Inf. Theory 2011, 57: 5163–5181. 10.1109/TIT.2011.2158885


  3. Shah, D, Zaman, T: Rumor centrality: a universal source detector. In: Proc. Ann. ACM SIGMETRICS Conf., pp. 199–210. ACM, London, England, UK (2012).


  4. Luo, W, Tay, WP, Leng, M: Identifying infection sources and regions in large networks. Arxiv preprint arXiv:1204.0354 (2012).


  5. Nguyen, DT, Nguyen, NP, Thai, MT: Sources of misinformation in online social networks: who to suspect? In: Military Communications Conference, 2012-MILCOM 2012, Orlando, FL, USA, 29 Oct 2012,pp. 1–6. IEEE (2012).

  6. Zhu, K, Ying, L: Information source detection in the SIR model: a sample path based approach. Arxiv preprint arXiv:1206.5421 (2012).


  7. Subramanian, VG, Berry, R: Spotting trendsetters: inference for network games. In: Proc. Annu. Allerton Conf. Communication, Control and Computing, Monticello, IL, USA, 1 Oct 2012 (2012).

  8. Milling, C, Caramanis, C, Mannor, S, Shakkottai, S: Network forensics: random infection vs spreading epidemic. In: Proc. Ann. ACM SIGMETRICS Conf., London, England, UK, 11 Jun 2012,pp. 223–234. (2012).

  9. Shakarian P, Subrahmanian VS, Sapino ML: GAPs: geospatial abduction problems. ACM Trans. Intell. Syst. Technol. 2011,3(1):1–27. 10.1145/2036264.2036271


  10. Shakarian P, Subrahmanian VS: Geospatial Abduction: Principles and Practice. Springer, New York; 2011.


  11. Lokhov, AY, Mezard, M, Ohta, H, Zdeborova, L: Inferring the origin of an epidemy with dynamic message-passing algorithm. arXiv preprint arXiv:1303.5315 (2013).

  12. Harris TE: The Theory of Branching Processes. Dover Pubns, New York; 1963.



Acknowledgements

This research was supported in part by ARO grant W911NF-13-1-0279.

Author information

Corresponding author

Correspondence to Kai Zhu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

KZ and LY contributed equally to this work. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhu, K., Ying, L. A robust information source estimator with sparse observations. Computational Social Networks 1, 3 (2014). https://doi.org/10.1186/s40649-014-0003-2
