Hi Zoltan,

I am saying that hadoop2-stable and hadoop1 are binary compatible. I don't know 
which version of hadoop cdh4-mr2 uses, but I'd guess it's a hadoop2 alpha, since 
Bigtop was at hadoop 2.0.6-alpha last time I checked (which was last week).

Just try it and let us know if you experience a problem.
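For what it's worth, "pointing them to an Apache Hadoop 2.x cluster via 
configuration" usually just means switching the client-side MapReduce framework 
to YARN, roughly like this (the property names are the standard Hadoop 2.x ones; 
the ResourceManager host below is only a placeholder, adjust for your cluster):

```xml
<!-- mapred-site.xml: submit MR jobs to YARN instead of the MRv1 JobTracker -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: where the ResourceManager listens (placeholder host;
     8032 is the default client port) -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value>
</property>
```

With that in place on the client, the same MRv1-built job jars should submit to 
the 2.x cluster without recompilation.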

PS: We're currently working on building Mahout against hadoop2.

Best

> On Dec 11, 2013, at 8:53 PM, Zoltan Prekopcsak <preko1...@gmail.com> wrote:
> 
> Hi Gokhan,
> 
> Thank you for the clarification.
> Does this mean that Mahout uses the mapred API everywhere and there is no 
> mapreduce API left? As far as I know, code built against the mapreduce API 
> needs to be recompiled, and I remember needing to recompile Mahout for CDH4 
> when it first came out.
> 
> Thanks, Zoltan
> 
> 
>> On 12/10/13 10:02 PM, Gokhan Capan wrote:
>> I meant that you shouldn't need to modify Mahout's dependencies: just run 
>> mvn package and it should work against hadoop 2.2.0. (Yeah, 2.2.0 is not alpha.)
>> 
>> Quoting from 
>> http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html
>> "First, we ensure binary compatibility to the applications that use old 
>> mapred APIs. This means that applications which were built against MRv1 
>> mapred APIs can run directly on YARN without recompilation, merely by 
>> pointing them to an Apache Hadoop 2.x cluster via configuration."
>> 
>> If you encounter any problems, just let the list know.
>> 
>> Best
>> 
>> 
>>> On Dec 9, 2013, at 9:40 PM, Hi There <srudamas...@yahoo.com> wrote:
>>> 
>>> Hi Gokhan,
>>> 
>>> My project currently fetches every dependency through Maven--is there any 
>>> way I can grab the version you mentioned that way?
>>> 
>>> In that vein, I am using the following version of hadoop:
>>> <dependency>
>>>        <groupId>org.apache.hadoop</groupId>
>>>        <artifactId>hadoop-client</artifactId>
>>>        <version>2.2.0</version>
>>> </dependency>
>>> 
>>> 
>>> That's not alpha, right?
>>> 
>>> Thanks!
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Monday, December 9, 2013 10:05 AM, Gokhan Capan <gkhn...@gmail.com> 
>>> wrote:
>>> 
>>> Mahout actually should work with hadoop-2 stable without recompiling,
>>> though not with hadoop-2 alpha.
>>> 
>>> We're, by the way, currently in the process of adding support to build
>>> mahout with hadoop-2.
>>> 
>>> Please see MAHOUT-1354 for the relevant issue.
>>> 
>>> Sent from my iPhone
>>> 
>>> 
>>>> On Dec 9, 2013, at 19:54, Hi There <srudamas...@yahoo.com> wrote:
>>>> 
>>>> Is Dec 2013 still the intended release date of the next mahout release 
>>>> that will be compatible with Hadoop 2.2.0?
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Thursday, November 21, 2013 12:36 PM, Suneel Marthi 
>>>> <suneel_mar...@yahoo.com> wrote:
>>>> 
>>>> Targeted for Dec 2013.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Thursday, November 21, 2013 3:26 PM, Hi There <srudamas...@yahoo.com> 
>>>> wrote:
>>>> 
>>>> Thanks for the reply! Is there a timeline for when the next release will 
>>>> be?
>>>> 
>>>> 
>>>> Thanks,
>>>> Victor
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Tuesday, November 19, 2013 7:30 PM, Suneel Marthi 
>>>> <suneel_mar...@yahoo.com> wrote:
>>>> 
>>>> Hi Victor,
>>>> 
>>>> Future releases of Mahout will support Hadoop 2.x; the present codebase 
>>>> still supports only Hadoop 1.x.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Tuesday, November 19, 2013 1:42 PM, Hi There <srudamas...@yahoo.com> 
>>>> wrote:
>>>> 
>>>> 
>>>> 
>>>> Hello,
>>>> 
>>>> I recently upgraded to Hadoop's newest release, and it seems one of its
>>>> interfaces has changed; when I try to create sparse vectors from sequence
>>>> files, I get the following exception:
>>>> 
>>>> java.lang.IncompatibleClassChangeError: Found interface 
>>>> org.apache.hadoop.mapreduce.Counter, but class was expected
>>>> 
>>>> I can include more of the stack trace if necessary.
>>>> 
>>>> Are there any plans in the immediate future to upgrade mahout to be 
>>>> compatible with the newest hadoop release?
>>>> 
>>>> Thanks,
>>>> Victor
> 
