I am quite intrigued by randomized algorithms. I hope you are as well. If not, why would you be here?
To begin with, imagine an undirected, connected, unweighted graph G. A cut in G is simply a set of edges whose removal breaks the graph into two separate pieces, and a min-cut is the smallest such set of edges. And to pump you with some more info: we already have deterministic algorithms to compute a min-cut. So why a randomized algorithm, you say? Well, because it's beautiful and simple, and you must have heard the saying: 'Simple is beautiful'.
The Algorithm
The algorithm is quite simple actually. You pick an edge (u, v) in G uniformly at random, shrink that edge, and combine u and v into a new vertex which we shall call {u, v}. Any edges between u and v become self-loops and are thrown away; edges to other vertices are kept, even if that creates parallel edges. Clever readers will note that each such step leaves the graph G with one less vertex than before. (One step, one less vertex. Way to go!)
Now what you do is keep repeating this process of randomly picking an edge and collapsing its two endpoints into one, until you have just two vertices left. And then what, you say? Simple. Just return the set of edges between the last two vertices as your answer for the min-cut.
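To make this concrete, here is a minimal sketch of one run in Python. Everything here (the function name, the edge-list representation) is my own choice for illustration; it assumes the input is a connected graph given as a list of edge pairs with no self-loops.

import random

def karger_min_cut(edges):
    # label[v] = the super-vertex that v currently belongs to.
    label = {v: v for e in edges for v in e}
    graph = list(edges)                # a multigraph: parallel edges are kept
    vertices = len(label)
    while vertices > 2:
        u, v = random.choice(graph)    # pick uniformly among remaining edges
        lu, lv = label[u], label[v]
        # Contract the edge: fold v's super-vertex into u's.
        for w in label:
            if label[w] == lv:
                label[w] = lu
        # Discard self-loops created by the contraction.
        graph = [(a, b) for (a, b) in graph if label[a] != label[b]]
        vertices -= 1
    return graph                       # the edges across the last two vertices

For example, on a square with one diagonal, [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)], the min-cut has size 2, and a lucky run returns something like [(1, 2), (2, 3)].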
The Why?
Why does this even give a cut, forget the min-cut? Well, think about it this way. At the end of the algorithm, you are left with only two vertices. Each of the two is actually a set of original vertices, formed as we shrank edges while executing the algorithm. Now here is the interesting and obvious thing to note; but first, label the last two vertices left as sets S1 and S2.
All edges that run between any vertex in S1 and any vertex in S2 are untouched when the algorithm finishes (otherwise their endpoints would have been combined into one set). And so, if you remove the set of edges between S1 and S2, you separate these two sets of vertices, thus forming a cut in G.
The Analysis
The really interesting question is: what is the probability of finding a min-cut via the above algorithm? Here is the analysis.
Let's say that the real min-cut has size k. Now, if we are lucky and never choose any of those k edges to shrink while executing the algorithm, voila! The cut we return is exactly that min-cut. And what is that probability?
Well, let's think. At each step we need to choose an edge other than those k edges out of the total edges left. Total edges left, you say, and despair? How do I know that, since the total number of edges changes at each step of the algorithm? Patience, my friend. Imagine that s vertices are left at the end of a shrinking step. Now every vertex in the graph at that point must have degree at least k; otherwise you could get a cut smaller than k by simply cutting that vertex away from the rest of the vertices, contradicting that the min-cut has size k. Since the degrees sum to twice the number of edges, the total number of edges in the graph is at least s·k/2.
Awesome, and then what? Well, the probability that you pick one of the k edges out of the total edges left when we have s vertices is p <= k / (s·k/2) = 2/s. (The inequality kicks in because the total number of edges is at least s·k/2.)
And the probability that you do not pick any of the k edges is q = 1 - p >= (s - 2)/s.
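A quick sanity check with made-up numbers (mine, not the post's): if k = 3 and s = 6 vertices remain, there are at least 6·3/2 = 9 edges, so p <= 3/9 = 2/6 = 1/3, and q >= (6 - 2)/6 = 2/3.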
Now, you run the algorithm for (n - 2) steps if there are n vertices in G initially. (Remember: one step, one vertex gone; after n - 2 steps, n - 2 vertices are gone and only 2 are left.) So the probability that we find the min-cut is at least the probability that we don't pick any of the k edges in shrinking step 1, and also in shrinking step 2, and also in shrinking step 3, ... and also in shrinking step (n - 2). I think you get the idea. (Strictly, each factor is a probability conditioned on the earlier steps having succeeded, but the bound q >= (s - 2)/s holds at every step regardless.) So all you do is take the product of (s - 2)/s as s runs over the vertex counts n, n - 1, ..., 3. So what is that?
It seems to be:

(n-2)/n · (n-3)/(n-1) · (n-4)/(n-2) · ... · (3/5) · (2/4) · (1/3)

Everything telescopes: each numerator (s - 2) cancels a denominator two factors down the line, leaving only 2 · 1 on top and n · (n - 1) on the bottom. So the product is

2 / (n(n-1)), i.e., 1 / (n(n-1)/2).
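If you don't quite trust the telescoping, here is a quick exact-arithmetic check in Python (the function name is mine):

from fractions import Fraction

def survival_probability(n):
    # Product of (s - 2)/s for s = 3..n: the lower bound on the chance
    # that a single run never contracts a min-cut edge.
    p = Fraction(1)
    for s in range(3, n + 1):
        p *= Fraction(s - 2, s)
    return p

for n in range(3, 12):
    assert survival_probability(n) == Fraction(2, n * (n - 1))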
Conclusion
So the probability of finding the min-cut in a single run is at least 2/(n(n-1)), i.e., 1/O(n^2). Well, you know that you can run the experiment a large number of times, keep the smallest cut you see, and get to the min-cut with high probability. How many times, you ask? I'd say n(n-1)/2 times, which is O(n^2). That makes the number of repetitions quadratic in n (each run takes time of its own, so the total running time is more than quadratic). But it also gives a very good chance of success (~63%): the failure probability is at most (1 - 2/(n(n-1)))^(n(n-1)/2) <= 1/e, so we succeed with probability at least 1 - 1/e, which is about 0.63.
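And for completeness, a minimal amplification wrapper reusing the karger_min_cut sketch from above (again, the name and the default run count are my choices):

def repeated_min_cut(edges, runs=None):
    # Keep the smallest cut over many independent runs. With n(n-1)/2
    # runs, the failure probability drops to at most 1/e, i.e. the
    # success probability is at least about 63%.
    n = len({v for e in edges for v in e})
    if runs is None:
        runs = n * (n - 1) // 2
    best = None
    for _ in range(runs):
        cut = karger_min_cut(edges)
        if best is None or len(cut) < len(best):
            best = cut
    return best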
But ain't the algorithm beautiful and simple? Ah, right! :)