It depends on which group of APIs your application is using. Please refer
to this doc for details:
http://hadoop.apache.org/docs/r2.4.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html
On Thu, Jun 19, 2014 at 2:24 AM, Mohit Anchlia wrote:
> Does Hadoop MapReduce code compiled against 1.2 work with YARN?
>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-core</artifactId>
>   <version>1.2.1</version>
> </dependency>
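[As an aside, the practical meaning of "which group of APIs": per that doc, code written to the old org.apache.hadoop.mapred interfaces is binary-compatible with MRv2, so a driver in the shape of the sketch below should run on YARN without recompiling. Names are made up, and the mapper/reducer are left at their defaults so the job runs as an identity pass-through:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class LegacyPassThrough {
    public static void main(String[] args) throws Exception {
      // Old (MRv1) mapred API: JobConf + JobClient. Per the compatibility
      // doc, binaries built against it still run on a YARN cluster.
      JobConf conf = new JobConf(LegacyPassThrough.class);
      conf.setJobName("legacy-pass-through");
      // Defaults are IdentityMapper/IdentityReducer over TextInputFormat,
      // so keys are LongWritable offsets and values are Text lines.
      conf.setOutputKeyClass(LongWritable.class);
      conf.setOutputValueClass(Text.class);
      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
      JobClient.runJob(conf);
    }
  }]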
It seems you are using the local FS rather than HDFS. You need to make sure
your HDFS cluster is up and running.
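[A quick way to see which filesystem the client actually resolves (a minimal sketch; the class name is made up). With no core-site.xml on the classpath, fs.defaultFS falls back to file:///, i.e. the local FS:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class WhichFs {
    public static void main(String[] args) throws Exception {
      // Prints file:/// when no core-site.xml is on the classpath;
      // prints hdfs://<namenode>:<port> when the cluster config is found.
      Configuration conf = new Configuration();
      System.out.println(FileSystem.get(conf).getUri());
    }
  }]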
On Thu, Apr 17, 2014 at 6:42 PM, Shengjun Xin wrote:
> Did you start datanode service?
>
>
> On Thu, Apr 17, 2014 at 9:23 PM, Karim Awara wrote:
>
>> Hi,
>>
>> Whenever I start Hadoop on 24 machines, the following exception appears
>> in the jobtracker log file on the namenode:
This is because your HDFS has no space left. Please check that your datanodes
are all started. Also check dfs.datanode.du.reserved in hdfs-site.xml to make
sure you don't reserve too much capacity.
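[For reference, that property takes a value in bytes per volume; a sketch of the hdfs-site.xml stanza (the 10 GB figure below is purely illustrative, not a recommendation):

  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- Space in bytes per volume that HDFS leaves free for non-DFS use. -->
    <value>10737418240</value>
  </property>]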
On Fri, Apr 18, 2014 at 7:42 AM, Shengjun Xin wrote:
> Did you start datanode service?
Did you start datanode service?
On Thu, Apr 17, 2014 at 9:23 PM, Karim Awara wrote:
> Hi,
>
> Whenever I start Hadoop on 24 machines, the following exception appears
> in the jobtracker log file on the namenode. I would appreciate any help.
> Thank you.
>
> 2014-04-17 16:16:31,391 INFO
Hi,
Whenever I start Hadoop on 24 machines, the following exception appears in
the jobtracker log file on the namenode. I would appreciate any help.
Thank you.
2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.JobTracker: Setting
safe mode to false. Requested by : karim
2014-04-17 16:
>>>>>> <description>The host and port that the MapReduce job tracker runs
>>>>>> at. If "local", then jobs are run in-process as a single map
>>>>>> and reduce task.
>>>>>> </description>
>>>>> *hdfs-site.xml and*
>>>>>
>>>>> <property>
>>>>>   <name>dfs.replication</name>
>>>>>   <value>1</value>
>>>>>   <description>Default block replication.
>>>>>   The actual number of replications can be specified when the file is
>>>>>   created.
>>>>>   The default is used if replication is not specified in create time.
>>>>>   </description>
>>>>> </property>
>>> # Set Hadoop-specific environment variables here.
>>>
>>> # The only required environment variable is JAVA_HOME. All others are
>>> # optional. When running a distributed configuration it is best to
>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>> # remote nodes.
>>>
>>> # The java implementation to use. Required.
>> ...you should have the <configuration> tag, and inside a <property> tag
>> you can have <name>, <value> and <description> tags.
>>
>> Regards
>> Bejoy KS
>>
>> Sent from remote device, Please excuse typos
>> --
>> *From: * Ashish Umrani
>> *Date: *Tue, 23 Jul 2013 09:28:00 -0700
>> *ReplyTo: * user@hadoop.apache.org
>> *Subject: *Re: New hadoop 1.2 single node installation giving problems
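[To make the structure described above concrete, a minimal well-formed core-site.xml in that shape; the fs.default.name value here is the single-node tutorial's usual example, shown only as an illustration:

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system.</description>
    </property>
  </configuration>]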
>> # Extra Java CLASSPATH elements. Optional.
>> # export HADOOP_CLASSPATH=
>>
>> All other params in hadoop-env.sh are commented
>>
>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>> jeetuyadav200...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> You might have missed some configuration (XML tags), Please check all
To: user@hadoop.apache.org
Subject: Re: New hadoop 1.2 single node installation giving problems
Hey, thanks for the response. I have changed 4 files during installation:
core-site.xml
mapred-site.xml
hdfs-site.xml and
hadoop-env.sh
I could not find any issues except that all params in hadoop-env.sh are
commented.
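[One note on hadoop-env.sh, since it ships with everything commented out: the JAVA_HOME export is the single line that must be uncommented. For illustration only; the JDK path varies per machine:

  # The java implementation to use. Required.
  export JAVA_HOME=/usr/lib/jvm/java-6-sun]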
Hi There,
First of all, sorry if I am asking a stupid question. Being new to the
Hadoop environment, I am finding it a bit difficult to figure out why it's
failing.
I have installed Hadoop 1.2, based on instructions given in the following
link:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
I am running into some difficulties when trying to log in to a secure Hadoop
cluster from a ticket cache.
In the UserGroupInformation Java class, there is a method called
loginUserFromKeytab(); I can use this method to log in with keytab files,
and later make HDFS/HCatalog API calls.
But we don't know how to
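[The preview cuts off here, but for logging in from a ticket cache rather than a keytab, the method to look at is UserGroupInformation.getUGIFromTicketCache(). A hedged sketch; the cache path and principal are made-up placeholders, and kinit must have been run beforehand:

  import java.security.PrivilegedExceptionAction;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.security.UserGroupInformation;

  public class TicketCacheLogin {
    public static void main(String[] args) throws Exception {
      final Configuration conf = new Configuration();
      // Normally picked up from core-site.xml on a secure cluster.
      conf.set("hadoop.security.authentication", "kerberos");
      UserGroupInformation.setConfiguration(conf);
      // Log in from an existing ticket cache; path and principal
      // below are placeholders.
      UserGroupInformation ugi = UserGroupInformation.getUGIFromTicketCache(
          "/tmp/krb5cc_1000", "user@EXAMPLE.COM");
      // Perform HDFS calls as the logged-in user.
      ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
          FileSystem fs = FileSystem.get(conf);
          System.out.println(fs.exists(new Path("/")));
          return null;
        }
      });
    }
  }]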