Metric k-center
In graph theory, the metric k-center problem or metric facility location problem is a combinatorial optimization problem studied in theoretical computer science. Given n cities with specified distances, one wants to build k warehouses in different cities and minimize the maximum distance from a city to its nearest warehouse. In graph theory this means finding a set of k vertices for which the largest distance from any vertex to its closest vertex in the k-set is minimized. The vertices must lie in a metric space, providing a complete graph that satisfies the triangle inequality.
Formal definition
Let (X, d) be a metric space where X is a set and d is a metric.
A set V ⊆ X is provided together with a parameter k. The goal is to find a subset C ⊆ V with |C| = k such that the maximum distance of a point in V to the closest point in C is minimized. The problem can be formally defined as follows:
For a metric space (X, d),
- Input: a set V ⊆ X, and a parameter k.
- Output: a set C ⊆ V of k points.
- Goal: Minimize the cost r(C) = max{ d(v, C) : v ∈ V }, where d(v, C) = min{ d(v, c) : c ∈ C } is the distance from v to its closest point in C.
That is, every point in a cluster is at distance at most r(C) from its respective center, the clusters being formed by assigning each point of V to its closest center in C. [1]
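To make this cost concrete, here is a minimal Python sketch of the objective r(C); the function name kcenter_cost and the toy data are illustrative, not part of any standard library.

```python
def kcenter_cost(points, centers, d):
    """r(C): the largest distance from any point to its closest center."""
    return max(min(d(v, c) for c in centers) for v in points)

# Tiny example: points on a line with the absolute-difference metric and k = 2.
V = [0, 1, 2, 8, 9, 10]
C = [1, 9]
print(kcenter_cost(V, C, lambda a, b: abs(a - b)))  # -> 1
```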
The k-Center Clustering problem can also be defined on a complete undirected graph G = (V, E) as follows:
Given a complete undirected graph G = (V, E) with distances d(vi, vj) ∈ N satisfying the triangle inequality, find a subset C ⊆ V with |C| = k while minimizing the same cost r(C) = max{ d(v, C) : v ∈ V } as above.
Computational complexity
In a complete undirected graph G = (V, E), sort the edges in nondecreasing order of their distances, d(e1) ≤ d(e2) ≤ … ≤ d(em), and let Gi = (V, Ei), where Ei = {e1, e2, …, ei}. The k-center problem is then equivalent to finding the smallest index i such that Gi has a dominating set of size at most k. [2]
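This equivalence can be illustrated with a small brute-force sketch: it sorts the edges, grows Gi one edge at a time, and tests for a dominating set by exhaustive search, so it is exponential and only meant for tiny instances. The function name and input conventions are illustrative assumptions, not taken from the cited sources.

```python
from itertools import combinations

def smallest_bottleneck_index(n, edges, k):
    """edges: (u, v, dist) triples of a complete graph on vertices 0..n-1 whose
    distances satisfy the triangle inequality; assumes 1 <= k < n. Returns
    (i, d(ei)) for the smallest index i such that Gi = (V, {e1, ..., ei})
    has a dominating set of size at most k; d(ei) is then the optimal radius."""
    edges = sorted(edges, key=lambda e: e[2])       # d(e1) <= d(e2) <= ... <= d(em)
    closed_nbhd = {v: {v} for v in range(n)}        # closed neighbourhoods in Gi
    for i, (u, v, dist) in enumerate(edges, start=1):
        closed_nbhd[u].add(v)
        closed_nbhd[v].add(u)
        # Exhaustive test for a dominating set of size k (exponential time).
        for cand in combinations(range(n), k):
            if len(set().union(*(closed_nbhd[c] for c in cand))) == n:
                return i, dist
    return None
```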
Although Dominating Set is NP-complete, the k-center problem remains NP-hard. This is clear, since the optimality of a given feasible solution for the k-center problem can be determined through the Dominating Set reduction only if we know in the first place the size of the optimal solution (i.e. the smallest index i such that Gi has a dominating set of size at most k), which is precisely the difficult core of NP-hard problems.
Approximations
A simple greedy algorithm
A simple greedy approximation algorithm that achieves an approximation factor of 2 builds a set K of k centers using a farthest-first traversal in k iterations. This algorithm simply chooses the point farthest away from the current set of centers in each iteration as the new center. It can be described as follows (a code sketch is given after the list):
- Pick an arbitrary point c̄1 into K1.
- For every point v ∈ V compute the distance d(v, K1) from K1.
- Pick the point c̄2 with the highest distance from K1.
- Add it to the set of centers and denote this expanded set of centers as K2. Continue this until k centers are found, yielding K = Kk.
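A minimal Python sketch of this farthest-first traversal, assuming the input is a list of points together with a metric d (the function name greedy_k_center is illustrative):

```python
def greedy_k_center(points, k, d):
    """Farthest-first traversal: returns k centers whose radius is at most
    twice the optimal k-center radius of `points` under the metric d."""
    centers = [points[0]]                  # pick an arbitrary point as the first center
    # dist[i] = distance from points[i] to the current set of centers
    dist = [d(p, centers[0]) for p in points]
    while len(centers) < k:
        # the point farthest from the current centers becomes the next center
        farthest = max(range(len(points)), key=lambda i: dist[i])
        centers.append(points[farthest])
        # updating the distances costs O(n) per iteration, O(nk) overall
        dist = [min(dist[i], d(points[i], points[farthest])) for i in range(len(points))]
    return centers

# Example: points on a line with the absolute-difference metric.
print(greedy_k_center([0, 1, 2, 8, 9, 10], 2, lambda a, b: abs(a - b)))  # e.g. [0, 10]
```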
Running time
- The ith iteration of choosing the ith center takes O(n) time.
- There are k such iterations.
- Thus, overall the algorithm takes O(nk) time. [3]
Proving the approximation factor
The solution obtained using the simple greedy algorithm is a 2-approximation to the optimal solution. This section focuses on proving this approximation factor.
Given a set of n points V ⊆ X belonging to a metric space (X, d), the greedy k-center algorithm computes a set K of k centers, such that K is a 2-approximation to the optimal k-center clustering of V,
i.e. r(K) ≤ 2·r(Copt), where Copt denotes an optimal set of k centers. [1]
This theorem can be proven using two cases as follows.
Case 1: Every cluster of Copt contains exactly one point of K
- Consider a point v ∈ V.
- Let c̄ be the center it belongs to in Copt, so that d(v, c̄) ≤ r(Copt).
- Let k̄ be the center of K that is in the cluster of c̄.
- Similarly, d(k̄, c̄) ≤ r(Copt).
- By the triangle inequality: d(v, k̄) ≤ d(v, c̄) + d(c̄, k̄) ≤ 2·r(Copt), so every point of V is within 2·r(Copt) of some center of K, i.e. r(K) ≤ 2·r(Copt).
Case 2: There are two centers k̄ and k̄′ of K that are both in the cluster of some c̄ ∈ Copt (by the pigeonhole principle, this is the only other possibility)
- Assume, without loss of generality, that k̄′ was added later to the center set by the greedy algorithm, say in the ith iteration, so that k̄ already belongs to Ki−1 (the set of the first i−1 centers).
- But since the greedy algorithm always chooses the point farthest away from the current set of centers, we have that d(k̄′, Ki−1) = r(Ki−1), and, because adding centers never increases the cost, r(K) ≤ r(Ki−1) = d(k̄′, Ki−1) ≤ d(k̄′, k̄) ≤ d(k̄′, c̄) + d(c̄, k̄) ≤ 2·r(Copt).
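The bound can also be checked empirically on small instances. The snippet below reuses the illustrative kcenter_cost and greedy_k_center sketches from above and compares the greedy radius against the brute-force optimum for random points in the plane.

```python
import itertools, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
k = 3

greedy_radius = kcenter_cost(pts, greedy_k_center(pts, k, dist), dist)
optimal_radius = min(kcenter_cost(pts, list(C), dist)
                     for C in itertools.combinations(pts, k))
assert greedy_radius <= 2 * optimal_radius   # the theorem's guarantee
print(greedy_radius, optimal_radius)
```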
Another 2-factor approximation algorithm
Another algorithm with the same approximation factor takes advantage of the fact that the k-center problem is equivalent to finding the smallest index i such that Gi has a dominating set of size at most k: it computes a maximal independent set in the square of Gi, looking for the smallest index i for which this maximal independent set has size at most k. [4] It is not possible to find an approximation algorithm with an approximation factor of 2 − ε for any ε > 0, unless P = NP. [5] Furthermore, the distances of all edges in G must satisfy the triangle inequality if the k-center problem is to be approximated within any constant factor, unless P = NP. [6]
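A hedged sketch of this bottleneck approach, assuming the maximal independent set is built greedily in the square of Gi (two vertices adjacent when they are within two hops in Gi), as in the method of Hochbaum and Shmoys; the function name and input conventions are illustrative.

```python
def hochbaum_shmoys_k_center(n, edges, k):
    """edges: (u, v, dist) triples of a complete graph on vertices 0..n-1 whose
    distances satisfy the triangle inequality. Returns at most k centers whose
    radius is at most twice the optimal k-center radius."""
    edges = sorted(edges, key=lambda e: e[2])
    adj = {v: set() for v in range(n)}     # adjacency of the threshold graph Gi
    for u, v, _ in edges:
        adj[u].add(v)
        adj[v].add(u)
        # Greedily build a maximal independent set of Gi^2: once a vertex is
        # chosen, discard every vertex within two hops of it in Gi.
        remaining = set(range(n))
        centers = set()
        while remaining:
            c = remaining.pop()
            centers.add(c)
            two_hops = set(adj[c]).union(*(adj[w] for w in adj[c]))
            remaining -= two_hops
        if len(centers) <= k:
            return centers   # every vertex is within two Gi-hops, hence 2*d(ei), of a center
    return set(range(n))     # not reached for k >= 1 on a complete graph
```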
References
- Har-Peled, Sariel (2011), Geometric Approximation Algorithms, Boston, MA, USA: American Mathematical Society, ISBN 0-8218-4911-5
- Vazirani, Vijay V. (2003), Approximation Algorithms, Berlin: Springer, pp. 47–48, ISBN 3-540-65367-8
- Gonzalez, Teofilo F. (1985), "Clustering to minimize the maximum intercluster distance", Theoretical Computer Science, 38, Elsevier Science B.V., pp. 293–306, doi:10.1016/0304-3975(85)90224-5
- Hochbaum, Dorit S.; Shmoys, David B. (1986), "A unified approach to approximation algorithms for bottleneck problems", Journal of the ACM, 33, pp. 533–550, ISSN 0004-5411
- Hochbaum, Dorit S. (1997), Approximation Algorithms for NP-Hard problems, Boston: PWS Publishing Company, pp. 346–398, ISBN 0-534-94968-1
- Crescenzi, Pierluigi; Kann, Viggo; Halldórsson, Magnús; Karpinski, Marek; Woeginger, Gerhard (2000), "Minimum k-center", A Compendium of NP Optimization Problems
Further reading
- Hochbaum, Dorit S.; Shmoys, David B. (1985), "A Best Possible Heuristic for the k-Center Problem", Mathematics of Operations Research, 10, pp. 180–184