On Wed, Jul 23, 2003 at 05:52:17PM +0100, Theodore Hong wrote:
> Could we perhaps encode all of the samples as binary {0,1} data, but
> train the classifier to return a continuous result?  That is, use
> samples like:
> 
> ((time=17411,key=0x3a4b,htl=12), target=1)   // success
> ((time=17480,key=0x3a4b,htl=15), target=1)   // success
> ((time=17485,key=0x3a4c,htl=8), target=1)    // success
> ((time=17487,key=0x3a4b,htl=9), target=0)    // fail
> 
> with the regression mode.  The predictor will then return values
> between 0 and 1 which we can interpret as a probability of success.

Perhaps, although it is starting to look really convoluted :-/  I would 
be interested to see a critique of what we have now and in what ways 
this approach is likely to be better.  The main issue I see with the 
existing technique is that the "forgetfulness" of the continuous 
averaging algorithms must be chosen manually and arbitrarily; if I 
understand this correctly, I don't think this approach addresses that 
issue - or does it?
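For reference, a minimal sketch of the kind of continuous averaging I mean, assuming it is a simple exponentially decaying average -- ALPHA is the hand-picked "forgetfulness" constant, and nothing in the data tells us what value it should be:

```python
ALPHA = 0.05  # decay constant, chosen manually and arbitrarily

def update(estimate, outcome):
    # Exponentially decaying average: recent outcomes count more, and
    # ALPHA alone decides how quickly old ones are forgotten.
    return (1 - ALPHA) * estimate + ALPHA * outcome

p = 0.5  # initial success estimate
for outcome in [1, 1, 1, 0]:  # the four samples quoted above
    p = update(p, outcome)
print(round(p, 3))  # ≈ 0.543
```

Pick ALPHA = 0.5 instead and the same four outcomes give a very different estimate, which is exactly the arbitrariness at issue.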

Ian.

-- 
Ian Clarke                                                  [EMAIL PROTECTED]
Coordinator, The Freenet Project              http://freenetproject.org/
Founder, Locutus                                        http://locut.us/
Personal Homepage                                   http://locut.us/ian/
