A randomness-based analysis on the data size needed for removing deceptive patterns

Kazuya Haraguchi, Mutsunori Yagiura, Endre Boros, Toshihide Ibaraki

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a data set in which each example is an n-dimensional Boolean vector labeled as true or false. A pattern is a co-occurrence of a particular value combination of a given subset of the variables. If a pattern appears frequently in the true examples and infrequently in the false examples, we call it a good pattern. In this paper, we discuss the problem of determining the data size needed to remove "deceptive" good patterns; in a small data set, many good patterns may appear by chance alone, independently of the underlying structure. Our hypothesis is that, to remove such deceptive good patterns, the data set should contain more examples than the size at which a random data set contains few good patterns. We support this hypothesis with computational studies, and we derive a theoretical upper bound on the needed data size based on our hypothesis.
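To make the definitions above concrete, the following is a minimal sketch in Python of the "good pattern" test described in the abstract. The threshold names sigma_true and sigma_false and their values are illustrative assumptions, not quantities from the paper.

```python
# Sketch of the "good pattern" notion from the abstract.
# Thresholds and all names below are illustrative assumptions.
from typing import Dict, List, Tuple

Pattern = Dict[int, bool]  # variable index -> required value

def matches(pattern: Pattern, x: Tuple[bool, ...]) -> bool:
    """A pattern is a co-occurrence of fixed values on a subset of variables."""
    return all(x[i] == v for i, v in pattern.items())

def is_good_pattern(pattern: Pattern,
                    data: List[Tuple[Tuple[bool, ...], bool]],
                    sigma_true: float = 0.8,    # hypothetical frequency threshold
                    sigma_false: float = 0.2) -> bool:  # hypothetical infrequency threshold
    """A pattern is 'good' if it is frequent among true examples
    and infrequent among false examples."""
    true_ex = [x for x, label in data if label]
    false_ex = [x for x, label in data if not label]
    freq_true = sum(matches(pattern, x) for x in true_ex) / max(len(true_ex), 1)
    freq_false = sum(matches(pattern, x) for x in false_ex) / max(len(false_ex), 1)
    return freq_true >= sigma_true and freq_false <= sigma_false

# Example: the pattern (x0 = True, x2 = False) on 3-dimensional Boolean data.
data = [((True, False, False), True),
        ((True, True, False), True),
        ((False, True, True), False),
        ((True, False, True), False)]
print(is_good_pattern({0: True, 2: False}, data))  # True
```

One way to read the paper's hypothesis in these terms: a good pattern found in a small data set is suspect whenever a random data set of the same size and dimension would also yield good patterns under the same test, so the data size should exceed the point at which such random data sets contain few good patterns.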

Original language: English (US)
Pages (from-to): 781-788
Number of pages: 8
Journal: IEICE Transactions on Information and Systems
Volume: E91-D
Issue number: 3
DOIs
State: Published - Mar 2008

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence

Keywords

  • Association rules
  • Frequent/infrequent item sets
  • Knowledge discovery
  • Probabilistic analysis
