Abstract
Consideration is given to the behavior of the least-squares problem that arises when one attempts to train a feedforward net with no hidden neurons. It is assumed that the net has monotonic nonlinear output units. Under the assumption that the training set is separable, that is, that there is a set of achievable outputs for which the error is zero, the authors show that there are no nonglobal minima. More precisely, they assume that the error is of a threshold least-mean-square (LMS) type, in that the error is zero for outputs beyond the target value. The authors' proof yields, in addition, the following stronger result: under the continuous gradient adjustment procedure, a separating set of weights is obtained in finite time from any initial weight configuration. Thus they obtain a precise analog of the perceptron learning theorem. The authors contrast these results with the more classical pattern recognition problem of threshold LMS with linear output units.
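The flavor of the result can be conveyed with a small numerical sketch (every name, target value, and step size below is an illustrative choice, not taken from the paper). Assuming a single sigmoid output unit and targets t_pos > t_neg, a threshold-LMS error charges a positive example only while its output is still below t_pos and a negative example only while its output is still above t_neg, so the error is exactly zero once every output is past its target. Plain discrete-step gradient descent is used here as a stand-in for the continuous gradient adjustment procedure the authors analyze:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def threshold_lms(w, X, y, t_pos=0.8, t_neg=0.2):
    """Threshold-LMS error and gradient for a single sigmoid unit.

    A positive example (y = +1) contributes error only while
    sigmoid(w . x) < t_pos; a negative one (y = -1) only while
    sigmoid(w . x) > t_neg. Beyond the target the error is zero.
    """
    s = sigmoid(X @ w)
    r = np.where(y > 0, np.maximum(0.0, t_pos - s),
                        np.maximum(0.0, s - t_neg))
    sign = np.where(y > 0, -1.0, 1.0)          # d(residual)/ds for each class
    grad = X.T @ (sign * r * s * (1.0 - s))    # chain rule through the sigmoid
    return 0.5 * np.sum(r ** 2), grad

# Separable toy data: two Gaussian blobs, with a constant bias feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = np.zeros(3)
for step in range(10_000):
    err, grad = threshold_lms(w, X, y)
    if err == 0.0:   # every output is past its target: w separates the data
        print(f"separating weights found at step {step}: {w}")
        break
    w -= 0.5 * grad  # discrete stand-in for the continuous gradient flow
```

On this separable toy set the error reaches exactly zero after finitely many steps, at which point the weights separate the data, mirroring (in discrete time) the finite-time separation result stated above.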
Original language | English (US) |
---|---|
Pages | 639-642 |
Number of pages | 4 |
State | Published - 1989 |
Event | IJCNN International Joint Conference on Neural Networks, Washington, DC, USA; Jun 18, 1989 → Jun 22, 1989 |
All Science Journal Classification (ASJC) codes
- Engineering (all)