http://bobehobi.livejournal.com/ ([identity profile] bobehobi.livejournal.com) wrote in [personal profile] anhinga_anhinga 2012-09-29 04:42 am (UTC)

>claimed that by introducing a second kernel to be used on the training data only (e.g. some creative and not necessarily well formalizable annotations by human annotators, ranging from mundane things to assigning poetic qualities to training samples) one can make the error inversely proportional to the number of training samples even when classes overlap with respect to the main

Isn't this essentially what regularization theory is about (see Tikhonov A.N., Arsenin V.Ya., *Solutions of Ill-Posed Problems*, 1974)?
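As a point of reference for the question above, the classic Tikhonov-regularization recipe stabilizes an ill-posed least-squares problem by penalizing the solution norm. The sketch below is only an illustration of that idea (plain ridge regression on synthetic near-collinear data, with all names and parameters made up for the example), not the scheme claimed in the quoted post:

```python
import numpy as np

def tikhonov_solve(X, y, lam):
    """Tikhonov-regularized least squares (ridge regression):
    minimize ||X w - y||^2 + lam * ||w||^2, whose closed-form
    solution is w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic ill-posed example: two almost-collinear columns make the
# unregularized normal equations nearly singular, so noise gets
# amplified into huge coefficients; regularization suppresses that.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
X = np.hstack([x, x + 1e-6 * rng.normal(size=(50, 1))])
y = X @ np.array([1.0, 1.0]) + 0.01 * rng.normal(size=50)

w_unreg = tikhonov_solve(X, y, lam=0.0)   # unstable
w_reg = tikhonov_solve(X, y, lam=1e-3)    # stabilized
```

The regularized weights stay small while still fitting the data; the unregularized solution blows up along the near-null direction of `X`.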
