We show that the usual score function for conditional Markov networks can be written as an expectation over the scores of their spanning trees. We also show that a small random sample of these output trees can attain a significant fraction of the margin obtained by the complete graph, and we provide conditions under which inference remains tractable. Experimental results confirm that this approach makes learning scalable to realistic datasets.
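As a toy illustration of the expectation identity (a sketch, not the paper's estimator): for the complete graph K_n with edge-decomposable scores, each edge lies in a uniform spanning tree with probability 2/n, so (n/2) times the expected tree score recovers the full-graph score. The sketch below samples uniform spanning trees of K_n via random Prüfer sequences; the edge scores are hypothetical, made up for the example.

```python
import heapq
import itertools
import random

def prufer_decode(seq, n):
    """Decode a Prüfer sequence into the edge list of a labeled tree on n nodes."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    # repeatedly attach the smallest current leaf to the next sequence element
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    # two leaves remain; they form the last edge
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def sample_spanning_tree(n, rng):
    """Uniform spanning tree of K_n via a uniformly random Prüfer sequence."""
    seq = [rng.randrange(n) for _ in range(n - 2)]
    return prufer_decode(seq, n)

# hypothetical per-edge scores on K_8 (illustration only)
n = 8
rng = random.Random(0)
score = {frozenset(e): rng.uniform(0, 1)
         for e in itertools.combinations(range(n), 2)}

full = sum(score.values())  # score of the complete graph
samples = 4000
est = 0.0
for _ in range(samples):
    tree = sample_spanning_tree(n, rng)
    est += sum(score[frozenset(e)] for e in tree)
est /= samples
# each edge appears in a uniform spanning tree with probability 2/n,
# so the rescaled Monte Carlo average approximates the full score
print(full, (n / 2) * est)
```

The margin claim in the abstract is the analogous statement for the structured hinge loss: a few sampled trees already capture a sizable share of the full graph's margin.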
from HAL: Latest publications http://ift.tt/12YLAWB