Wojciech, I've opened a ticket you can watch:

 https://issues.apache.org/jira/browse/MAHOUT-716

I should have the in-core code ready in ~3 days. The gradient portion is
easily parallelizable if you want to implement it as MapReduce.
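To illustrate the map/reduce split mentioned above, here is a minimal, hypothetical sketch (not the MAHOUT-716 code): each "mapper" computes the partial gradient of a squared-error loss for a linear model over its shard of the data, and a "reducer" sums the partials. All names are illustrative, not Mahout APIs.

```java
import java.util.Arrays;

public class PartialGradient {

    // Mapper side (sketch): gradient of 0.5 * (w.x - y)^2 w.r.t. w,
    // summed over the examples in one data shard.
    static double[] mapShard(double[][] xs, double[] ys, double[] w) {
        double[] grad = new double[w.length];
        for (int i = 0; i < xs.length; i++) {
            double pred = 0.0;
            for (int j = 0; j < w.length; j++) pred += w[j] * xs[i][j];
            double residual = pred - ys[i];
            for (int j = 0; j < w.length; j++) grad[j] += residual * xs[i][j];
        }
        return grad;
    }

    // Reducer side (sketch): element-wise sum of the partial gradients.
    static double[] reduce(double[][] partials) {
        double[] total = new double[partials[0].length];
        for (double[] p : partials)
            for (int j = 0; j < p.length; j++) total[j] += p[j];
        return total;
    }

    public static void main(String[] args) {
        double[] w = {1.0, -1.0};
        // Two shards, as if the data were split across two mappers.
        double[] g1 = mapShard(new double[][]{{1, 0}, {0, 1}},
                               new double[]{2, 0}, w);
        double[] g2 = mapShard(new double[][]{{1, 1}},
                               new double[]{1}, w);
        System.out.println(Arrays.toString(reduce(new double[][]{g1, g2})));
    }
}
```

Because the gradient is a plain sum over examples, the reduce step is exact regardless of how the data is sharded, which is what makes this portion embarrassingly parallel.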

On Tue, May 24, 2011 at 1:57 PM, Wojciech Indyk <[email protected]> wrote:

> Hi!
> I want to implement AdaBoost in Mahout. Would it be useful for Mahout? I
> think so, because it's a strong and very powerful algorithm, but Mahout
> is specific, so who knows :)
> I thought about the training data, and I know that I must parallelize by
> data rather than by algorithm, so it will not be easy - I must run all
> the mappers of the chosen algorithms inside my training mapper. But I have
> no idea how I could pass the chosen algorithms to AdaBoost as a parameter
> (architecturally).
>
> Regards
>
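One way to read the "algorithm as a parameter" question above: AdaBoost only needs a trainer that returns a classifier for a given weight distribution, so the base algorithm can be plugged in through an interface. Below is a hypothetical serial sketch (names are illustrative, not Mahout APIs) with a decision stump as the example weak learner; the weighted-error pass over the data is the part a training mapper would compute per shard.

```java
import java.util.Arrays;

public class AdaBoostSketch {

    interface Classifier {
        int predict(double x);  // returns +1 or -1
    }

    // The pluggable base algorithm: trained against a weight distribution.
    interface WeakLearner {
        Classifier train(double[] xs, int[] ys, double[] weights);
    }

    // Example weak learner: a one-threshold decision stump on 1-D data.
    static final WeakLearner STUMP = (xs, ys, w) -> {
        double bestThr = 0; int bestSign = 1; double bestErr = Double.MAX_VALUE;
        for (double thr : xs) {
            for (int sign : new int[]{1, -1}) {
                double err = 0;
                for (int i = 0; i < xs.length; i++) {
                    int pred = xs[i] >= thr ? sign : -sign;
                    if (pred != ys[i]) err += w[i];
                }
                if (err < bestErr) { bestErr = err; bestThr = thr; bestSign = sign; }
            }
        }
        final double t = bestThr; final int s = bestSign;
        return x -> x >= t ? s : -s;
    };

    // AdaBoost.M1 over any weak learner passed in as a parameter.
    static Classifier boost(double[] xs, int[] ys, WeakLearner learner, int rounds) {
        int n = xs.length;
        double[] w = new double[n];
        Arrays.fill(w, 1.0 / n);
        Classifier[] models = new Classifier[rounds];
        double[] alphas = new double[rounds];
        for (int r = 0; r < rounds; r++) {
            Classifier h = learner.train(xs, ys, w);
            // Weighted-error pass over the data: this is the per-example
            // work a training mapper could do on its shard.
            double err = 0;
            for (int i = 0; i < n; i++) if (h.predict(xs[i]) != ys[i]) err += w[i];
            err = Math.max(err, 1e-10);  // guard against a perfect round
            double alpha = 0.5 * Math.log((1 - err) / err);
            models[r] = h; alphas[r] = alpha;
            double z = 0;
            for (int i = 0; i < n; i++) {
                w[i] *= Math.exp(-alpha * ys[i] * h.predict(xs[i]));
                z += w[i];
            }
            for (int i = 0; i < n; i++) w[i] /= z;  // renormalize weights
        }
        final Classifier[] ms = models; final double[] as = alphas;
        return x -> {
            double sum = 0;
            for (int r = 0; r < ms.length; r++) sum += as[r] * ms[r].predict(x);
            return sum >= 0 ? 1 : -1;
        };
    }

    public static void main(String[] args) {
        double[] xs = {1, 2, 3, 4, 5, 6};
        int[] ys = {-1, -1, -1, 1, 1, 1};
        Classifier c = boost(xs, ys, STUMP, 5);
        System.out.println(c.predict(2) + " " + c.predict(5));
    }
}
```

The interface keeps AdaBoost itself agnostic to the base algorithm, so choosing which algorithm to boost reduces to passing a different `WeakLearner` implementation.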



-- 
Yee Yang Li Hector
http://hectorgon.blogspot.com/ (tech + travel)
http://hectorgon.com (book reviews)
