Supervised Learning With Implicit Preferences And Constraints
In classical combinatorial optimization, a well-defined objective function is optimized subject to a set of deterministic constraints. Approximation algorithms and heuristic methods are often applied to problems proven to be difficult, NP-complete or beyond. In many real-world problem domains, however, the objective (or utility, or preference) function an individual is trying to optimize is not explicitly known. Furthermore, preferences can take many different forms, and it is difficult to pre-define the correct format of the true utility function being optimized. To circumvent these limitations, we model such problems as machine learning tasks with implicit preferences that can be inferred from observations of the choices the individual made in the past. This approach contrasts with the traditional one, which learns the parameters of a utility or preference function whose functional form is explicitly defined a priori. We study a set of learning problem domains in which the preferences, or utility functions, are not explicitly defined, including structural learning, document citation ranking, and resource capacity constraint satisfaction. Our goal is to make accurate predictions for future instances, assuming the same underlying preferences as those expressed in past observations, without explicitly modeling them. The new algorithms and techniques we propose to optimize our learning formulations are shown to be very effective. For situations where both prediction accuracy and the explicit form of the preferences are important, we provide an Inductive Logic Programming (ILP) based algorithm to extract the preferences from a "black-box" machine learning model for intuitive human interpretation.
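As a concrete illustration of the core idea of inferring implicit preferences from past choices, the following is a minimal sketch, not the thesis's actual method: it assumes a linear utility over item features purely for demonstration (the thesis deliberately avoids fixing a functional form a priori), and the function names are our own. Each observation is a (chosen, rejected) pair of feature vectors, and a perceptron-style update learns a weight vector that ranks chosen items above rejected ones.

```python
import numpy as np

def learn_implicit_preferences(choices, n_features, epochs=100, lr=0.1):
    """Infer a preference weight vector from observed pairwise choices.

    choices: iterable of (chosen, rejected) feature-vector pairs.
    Assumes (for illustration only) a linear utility w . x; updates w
    whenever an observed choice is not ranked above its alternative.
    """
    w = np.zeros(n_features)
    for _ in range(epochs):
        for chosen, rejected in choices:
            chosen, rejected = np.asarray(chosen), np.asarray(rejected)
            if w @ chosen <= w @ rejected:  # observed preference violated
                w += lr * (chosen - rejected)
    return w

def predict_choice(w, candidates):
    """Predict the choice on a future instance: the candidate with the
    highest inferred utility under the learned weights."""
    scores = [w @ np.asarray(c) for c in candidates]
    return int(np.argmax(scores))
```

For example, if past choices consistently favored items with more of the first feature, the learned weights will rank a new candidate strong in that feature above the alternatives, without the true utility ever having been written down.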