
dc.contributor.author: Kapur, Shyam
dc.date.accessioned: 2007-04-23T17:55:19Z
dc.date.available: 2007-04-23T17:55:19Z
dc.date.issued: 1991-09
dc.identifier.citation: http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR91-1234
dc.identifier.uri: https://hdl.handle.net/1813/7074
dc.description.abstract: This thesis focuses on the Gold model of inductive inference from positive data. There are several aspects in which the model appears unsatisfactory for language learning: the class of families of learnable languages is highly restricted; even if a family is learnable, there exists no uniform method to obtain a learner for it; and the learner itself is complex. In this thesis, several of these criticisms are addressed. It is shown that no automatic synthesis of a learner from the description of a learnable family is possible. Nevertheless, in some special cases this synthesis can be achieved, and a general result is developed. In order to make the learner simpler, it is stipulated that the learner can change its guess only when the guess is inconsistent with the input evidence. Such a conservative learner never overgeneralizes. Exactly learnable families are characterized for prudent learners with the following types of constraints: (0) conservative, (1) conservative and consistent, (2) conservative and responsive, and (3) conservative, consistent and responsive. It is also shown that, when exactness is not required, prudence, consistency and responsiveness, even together, do not restrict the power of conservative learners. Conservative learners are simple in only one respect: even though it is easy to determine when to make a new guess, it is still hard to determine what that guess should be. Finally, a learner that exploits patterns evident in the input is developed. Absence of a particular string over a suitable interval of the input can be viewed as a kind of "indirect negative evidence". The learning criterion then needs to be weakened to allow limited failure. It is shown that any family of languages can be learned with probability 1 from stochastic input, provided something is known about the probability distribution according to which the input is presented. Given the family, the learner is uniformly constructible. Further, the behavior of the learner is simpler in many respects. It is expected that a variety of other natural constraints can be imposed on this learner without additional cost.
dc.format.extent: 5354726 bytes
dc.format.extent: 1030217 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: application/postscript
dc.language.iso: en_US
dc.publisher: Cornell University
dc.subject: computer science
dc.subject: technical report
dc.title: Computational Learning of Languages
dc.type: technical report
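The abstract's notion of a conservative learner — one that revises its hypothesis only when the current guess is inconsistent with the positive data seen so far, and so never overgeneralizes — can be illustrated with a toy sketch. This example is not from the thesis: the family `L_n = {0, 1, ..., n}` and the function names are assumptions chosen purely for illustration.

```python
# Toy sketch (not from the thesis) of a conservative learner in the Gold
# model, over the hypothetical family L_n = {0, 1, ..., n} of languages
# presented via positive data only. The learner keeps its current guess
# unless some datum falls outside the guessed language; on inconsistency
# it switches to the least index consistent with all data seen, so it
# never overgeneralizes.

def conservative_learner(stream):
    """Yield a hypothesis index n (meaning L_n) after each positive example."""
    guess = None      # no hypothesis yet
    seen = set()
    for x in stream:
        seen.add(x)
        # Revise only when inconsistent: some datum d lies outside L_guess.
        if guess is None or any(d > guess for d in seen):
            guess = max(seen)   # least index whose language covers the data
        yield guess

# A positive presentation of L_3 = {0, 1, 2, 3}:
hypotheses = list(conservative_learner([1, 0, 3, 2, 3, 1]))
# → [1, 1, 3, 3, 3, 3]: the guess changes only on inconsistency and
#   converges to the correct index 3.
```

Note that this family is especially easy: the minimal consistent hypothesis is computable as a maximum. The abstract's point is that in general, while a conservative learner knows *when* to revise, computing *what* the new guess should be remains hard.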

