### Abstract

Given a data set consisting of n-dimensional binary vectors of positive and negative examples, a subset S of the attributes is called a support set if the positive and negative examples can be distinguished by using only the attributes in S. In this paper we consider several selection criteria for evaluating the “separation power” of support sets, and formulate combinatorial optimization problems for finding the “best and smallest” support sets with respect to such criteria. We provide efficient heuristics, some with a guaranteed performance rate, for the solution of these problems, analyze the distribution of small support sets in random examples, and present the results of some computational experiments with the proposed algorithms.
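The support-set problem described above reduces naturally to set cover: every (positive, negative) pair of examples must be separated by at least one chosen attribute on which the two vectors differ. The sketch below illustrates this with a standard greedy set-cover heuristic; it is an assumption-laden illustration of the problem setting, not necessarily the algorithm proposed in the paper.

```python
from itertools import product

def greedy_support_set(positives, negatives):
    """Greedy heuristic for a small support set (a sketch, not the
    paper's exact method): repeatedly pick the attribute that separates
    the most still-unseparated (positive, negative) pairs."""
    n = len(positives[0])
    # For each (positive, negative) pair, the attributes that separate it.
    pairs = [frozenset(i for i in range(n) if p[i] != q[i])
             for p, q in product(positives, negatives)]
    if any(not d for d in pairs):
        raise ValueError("a positive and a negative example coincide")
    uncovered = list(pairs)
    support = []
    while uncovered:
        # Attribute covering the most uncovered pairs (greedy choice).
        best = max(range(n), key=lambda i: sum(i in d for d in uncovered))
        support.append(best)
        uncovered = [d for d in uncovered if best not in d]
    return sorted(support)

pos = [(1, 0, 1, 0), (1, 1, 1, 0)]
neg = [(0, 0, 1, 1), (0, 1, 0, 0)]
print(greedy_support_set(pos, neg))  # → [0]: attribute 0 alone separates them
```

The greedy rule inherits the usual logarithmic approximation guarantee of set cover, which is the flavor of "guaranteed performance rate" the abstract alludes to.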

Original language | English (US)
---|---
Title of host publication | Intelligent Data Engineering and Automated Learning - IDEAL 2000
Subtitle of host publication | Data Mining, Financial Engineering, and Intelligent Agents - 2nd International Conference, Proceedings
Editors | Helen Meng, Kwong Sak Leung, Lai-Wan Chan
Publisher | Springer Verlag
Pages | 133-138
Number of pages | 6
ISBN (Print) | 3540414509, 9783540414506
State | Published - Jan 1 2000
Event | 2nd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2000 - Shatin, N.T., Hong Kong (Duration: Dec 13 2000 → Dec 15 2000)

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 1983
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349

### Other

Other | 2nd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2000
---|---
Country | Hong Kong
City | Shatin, N.T.
Period | 12/13/00 → 12/15/00


### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Computer Science (all)

### Cite this

*Intelligent Data Engineering and Automated Learning - IDEAL 2000: Data Mining, Financial Engineering, and Intelligent Agents - 2nd International Conference, Proceedings* (pp. 133-138). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1983). Springer Verlag.


*Intelligent Data Engineering and Automated Learning - IDEAL 2000: Data Mining, Financial Engineering, and Intelligent Agents - 2nd International Conference, Proceedings.* Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 1983, Springer Verlag, pp. 133-138, 2nd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2000, Shatin, N.T., Hong Kong, 12/13/00.

**Finding essential attributes in binary data** / Boros, Endre; Horiyama, Takashi; Ibaraki, Toshihide; Makino, Kazuhisa; Yagiura, Mutsunori.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Finding essential attributes in binary data

AU - Boros, Endre

AU - Horiyama, Takashi

AU - Ibaraki, Toshihide

AU - Makino, Kazuhisa

AU - Yagiura, Mutsunori

PY - 2000/1/1

Y1 - 2000/1/1

N2 - Given a data set consisting of n-dimensional binary vectors of positive and negative examples, a subset S of the attributes is called a support set if the positive and negative examples can be distinguished by using only the attributes in S. In this paper we consider several selection criteria for evaluating the “separation power” of support sets, and formulate combinatorial optimization problems for finding the “best and smallest” support sets with respect to such criteria. We provide efficient heuristics, some with a guaranteed performance rate, for the solution of these problems, analyze the distribution of small support sets in random examples, and present the results of some computational experiments with the proposed algorithms.

AB - Given a data set consisting of n-dimensional binary vectors of positive and negative examples, a subset S of the attributes is called a support set if the positive and negative examples can be distinguished by using only the attributes in S. In this paper we consider several selection criteria for evaluating the “separation power” of support sets, and formulate combinatorial optimization problems for finding the “best and smallest” support sets with respect to such criteria. We provide efficient heuristics, some with a guaranteed performance rate, for the solution of these problems, analyze the distribution of small support sets in random examples, and present the results of some computational experiments with the proposed algorithms.

UR - http://www.scopus.com/inward/record.url?scp=84944104290&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84944104290&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84944104290

SN - 3540414509

SN - 9783540414506

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 133

EP - 138

BT - Intelligent Data Engineering and Automated Learning - IDEAL 2000

A2 - Meng, Helen

A2 - Leung, Kwong Sak

A2 - Chan, Lai-Wan

PB - Springer Verlag

ER -