Ordinal regression is a common supervised learning problem sharing properties with both regression and classification. Many of the ordinal regression algorithms that have been proposed can be viewed as methods that minimize a convex surrogate of the zero-one, absolute, or squared errors. We extend the notion of consistency, which has been studied for classification, ranking, and some ordinal regression models, to the general setting of ordinal regression. We study a rich family of these surrogate loss functions and assess their consistency with both positive and negative results. For arbitrary loss functions that are admissible in the context of ordinal regression, we develop an approach that yields consistent surrogate loss functions. Finally, we illustrate our findings on real-world datasets.
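To make the idea of a convex surrogate concrete, here is a minimal NumPy sketch of one common family the abstract alludes to: an "all-threshold" logistic surrogate of the absolute error, in which a scalar score is compared against K-1 ordered thresholds. The function name, threshold values, and the choice of the logistic loss are illustrative assumptions, not the paper's specific construction.

```python
import numpy as np

def all_threshold_loss(score, y, thresholds):
    """Illustrative convex surrogate of the absolute error for ordinal regression.

    For a sample with latent score `score` and ordinal label `y` in
    {0, ..., K-1}, sum a logistic loss over the K-1 thresholds: each
    threshold the score should lie above contributes log(1 + exp(-(score - t))),
    and each it should lie below contributes log(1 + exp(score - t)).
    The sum of convex terms is convex in `score`.
    """
    total = 0.0
    for k, t in enumerate(thresholds):
        # sign is +1 when the score should exceed threshold k, else -1
        sign = 1.0 if y > k else -1.0
        total += np.log1p(np.exp(-sign * (score - t)))
    return total
```

With thresholds `[-1.0, 1.0]` (three ordinal classes), a score that lands in the correct interval incurs a small loss, and the loss grows the further the score strays past thresholds it should respect, mirroring the absolute error.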
from HAL : Dernières publications http://ift.tt/12YLAWB