Fabled indeed. The much storied and soon to graduate from fabled to
fabulous.
On Tue, Jan 19, 2010 at 7:34 PM, Jake Mannix jake.man...@gmail.com wrote:
Fabled indeed, harumph!
--
Ted Dunning, CTO
DeepDyve
Hello,
I am currently testing the MAHOUT-228-3.patch applied to the current
trunk. The merge went mostly well except for a couple of duplicated
chunks in the patch (probably already applied to the trunk in another
form) and a duplicated wordlist.
However, to make the tests pass I had to reduce the precision of
That looks like a bug, to me... not sure where it is though...
-jake
On Mon, Jan 18, 2010 at 6:03 AM, Olivier Grisel olivier.gri...@ensta.org wrote:
Hello,
I am currently testing the MAHOUT-228-3.patch applied to the current
trunk. The merge went mostly well except for a couple of duplicated
chunks in the patch and a duplicated wordlist.
These bounds were too tight in any case. I had to loosen other bounds
during development and should have loosened these as well.
Your change is a good one.
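To make that concrete: the bounds in question are typically epsilon tolerances on a stochastic result in a unit test. A hypothetical illustration of why a too-tight epsilon fails on noisy output (numbers and names invented, not the actual MAHOUT-228 test):

```java
// Hypothetical sketch of loosening a tolerance on a stochastic test result.
public class ToleranceExample {
  static boolean closeEnough(double expected, double actual, double eps) {
    return Math.abs(expected - actual) <= eps;
  }

  public static void main(String[] args) {
    double expected = 0.85;
    double actual = 0.79;  // noisy result from a stochastic learner
    System.out.println(closeEnough(expected, actual, 0.01));  // too tight: fails
    System.out.println(closeEnough(expected, actual, 0.10));  // loosened: passes
  }
}
```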
On Mon, Jan 18, 2010 at 6:03 AM, Olivier Grisel olivier.gri...@ensta.org wrote:
Is this a consequence of the recent
2010/1/18 Ted Dunning ted.dunn...@gmail.com:
These bounds were too tight in any case. I had to loosen other bounds
during development and should have loosened these as well.
Your change is a good one.
Great! So here is the sequel:
I have written a real training convergence test and
THANK YOU.
I have been very grumpy that I couldn't get to doing this yet.
I will coordinate closely with you. I haven't used git yet in anger so it
will be a learning experience. Don't expect me to have time, though. (I
will try ... but expect not to find a hole.)
On Mon, Jan 18, 2010 at
On Mon, Jan 18, 2010 at 3:20 PM, Olivier Grisel olivier.gri...@ensta.org wrote:
In the meantime, could you please give me a hint on how to choose the
number of probes of the binary randomizer w.r.t. the window size?
The basic trade-off is the standard hashed learning trade-off between number
of training
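To illustrate the trade-off in question: in a hashed ("hashing trick") encoding, each feature sets bits at several probe locations in a fixed-size window; more probes make any single collision less damaging, but fill the window faster. A minimal sketch with invented names, not necessarily the patch's actual API:

```java
/** Toy multi-probe binary feature hasher (illustrative only). */
public class BinaryRandomizerSketch {
  private final int windowSize;  // dimensionality of the hashed vector
  private final int probes;      // hash locations set per feature

  public BinaryRandomizerSketch(int windowSize, int probes) {
    this.windowSize = windowSize;
    this.probes = probes;
  }

  /** Encode string features into a binary vector of length windowSize. */
  public boolean[] encode(String[] features) {
    boolean[] v = new boolean[windowSize];
    for (String f : features) {
      for (int p = 0; p < probes; p++) {
        // Salt the hash with the probe index to get distinct locations.
        int h = Math.floorMod((f + "#" + p).hashCode(), windowSize);
        v[h] = true;
      }
    }
    return v;
  }

  public static void main(String[] args) {
    BinaryRandomizerSketch r = new BinaryRandomizerSketch(64, 2);
    boolean[] v = r.encode(new String[] {"token:foo", "token:bar"});
    int set = 0;
    for (boolean b : v) if (b) set++;
    // At most probes * nFeatures bits are set (fewer if probes collide).
    System.out.println(set <= 4);
  }
}
```

With a window that is small relative to probes * nFeatures, collisions dominate; with a larger window, memory grows but each feature's probes are more likely to land on distinct bits.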