TY - GEN
T1 - Helly-type theorems in property testing
AU - Chakraborty, Sourav
AU - Pratap, Rameshwar
AU - Roy, Sasanka
AU - Saraf, Shubhangi
PY - 2014
Y1 - 2014
N2 - Helly's theorem is a fundamental result in discrete geometry, describing the ways in which convex sets intersect with each other. If S is a set of n points in R^d, we say that S is (k,G)-clusterable if it can be partitioned into k clusters (subsets) such that each cluster can be contained in a translated copy of a geometric object G. In this paper, as an application of Helly's theorem, by taking a constant-size sample from S, we present a testing algorithm for (k,G)-clustering, i.e., to distinguish between two cases: when S is (k,G)-clusterable, and when it is ε-far from being (k,G)-clusterable. A set S is ε-far (0 < ε) from being (k,G)-clusterable if at least εn points need to be removed from S to make it (k,G)-clusterable. We solve this problem for k=1 and when G is a symmetric convex object. For k > 1, we solve a weaker version of this problem. Finally, as an application of our testing result, in clustering with outliers, we show that one can find the approximate clusters by querying a constant-size sample, with high probability.
AB - Helly's theorem is a fundamental result in discrete geometry, describing the ways in which convex sets intersect with each other. If S is a set of n points in R^d, we say that S is (k,G)-clusterable if it can be partitioned into k clusters (subsets) such that each cluster can be contained in a translated copy of a geometric object G. In this paper, as an application of Helly's theorem, by taking a constant-size sample from S, we present a testing algorithm for (k,G)-clustering, i.e., to distinguish between two cases: when S is (k,G)-clusterable, and when it is ε-far from being (k,G)-clusterable. A set S is ε-far (0 < ε) from being (k,G)-clusterable if at least εn points need to be removed from S to make it (k,G)-clusterable. We solve this problem for k=1 and when G is a symmetric convex object. For k > 1, we solve a weaker version of this problem. Finally, as an application of our testing result, in clustering with outliers, we show that one can find the approximate clusters by querying a constant-size sample, with high probability.
UR - http://www.scopus.com/inward/record.url?scp=84899949778&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84899949778&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-54423-1_27
DO - 10.1007/978-3-642-54423-1_27
M3 - Conference contribution
AN - SCOPUS:84899949778
SN - 9783642544224
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 306
EP - 317
BT - LATIN 2014: Theoretical Informatics
PB - Springer-Verlag
T2 - 11th Latin American Theoretical Informatics Symposium, LATIN 2014
Y2 - 31 March 2014 through 4 April 2014
ER -