Abstract
This paper considers a modification of a PAC learning theory problem in which each training instance is supplemented with side information: the image of the instance under a side-information map is also classified. The learning algorithm, however, need only classify a new instance itself, not both the instance and its image under the side-information map. Side information can improve general learning rates, but not always. This paper shows that side information improves the standard PAC learning rate bounds, under restrictions on the probable overlap between concepts and their images under the side-information map.
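The setting in the abstract can be illustrated with a toy sketch (not the paper's construction): learning a threshold concept on [0,1], where a hypothetical side-information map sigma(x) = 1 - x supplies a second labeled point per draw, so the learner effectively sees a larger labeled sample.

```python
import random

# Toy illustration, assuming a threshold concept c_t(x) = 1 iff x >= t
# on [0,1] and a hypothetical side-information map sigma(x) = 1 - x.
# Each training draw x then also yields the labeled point sigma(x).

def learn_threshold(samples, target_t):
    """Consistent learner: predict threshold at the smallest positive example seen."""
    positives = [x for x in samples if x >= target_t]
    return min(positives) if positives else 1.0

def generalization_error(t_hat, target_t):
    # Under the uniform distribution, the error region is [target_t, t_hat).
    return abs(t_hat - target_t)

random.seed(0)
target_t, m = 0.3, 50
base = [random.random() for _ in range(m)]
# With side information, each draw x also contributes the labeled point 1 - x.
augmented = base + [1.0 - x for x in base]

err_plain = generalization_error(learn_threshold(base, target_t), target_t)
err_side = generalization_error(learn_threshold(augmented, target_t), target_t)
print(err_side <= err_plain)  # → True: a superset of labeled points never hurts this learner
```

Since the augmented sample is a superset of the base sample, the learned threshold can only move closer to the target, mirroring (in a much simpler setting) the abstract's claim that side information can improve, but never worsen, the achievable rate for this kind of consistent learner.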
Original language | English (US) |
---|---|
Pages (from-to) | 521-545 |
Number of pages | 25 |
Journal | Journal of Computer and System Sciences |
Volume | 68 |
Issue number | 3 |
DOIs | |
State | Published - May 2004 |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Networks and Communications
- Computational Theory and Mathematics
- Applied Mathematics
Keywords
- Dependent data
- Learning theory
- Probably Approximately Correct learning
- Uniform convergence of empirical means