An answer:
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201202.mbox/%3ccb4ecc21.33727%25ev...@yahoo-inc.com%3E
On Wed, Apr 16, 2014 at 12:04 AM, Radhe Radhe radhe.krishna.ra...@live.com wrote:
Hello All,
I have configured Apache Hadoop 1.2.0 and set the $HADOOP_HOME environment variable:
I keep getting: "Warning: $HADOOP_HOME is deprecated".
Solution (after googling): I replaced HADOOP_HOME with HADOOP_PREFIX and the
warning disappeared.
Does that mean HADOOP_HOME is replaced by HADOOP_PREFIX? If
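For anyone hitting the same warning, a minimal sketch of the change in a shell profile (the install path here is an assumption, not from the original mail):

```shell
# In ~/.bashrc (or wherever the variable was set); path is an example install.
# Old setting that triggers "Warning: $HADOOP_HOME is deprecated" on 1.x:
# export HADOOP_HOME=/usr/local/hadoop-1.2.0
# Replacement that silences the warning:
export HADOOP_PREFIX=/usr/local/hadoop-1.2.0
export PATH=$PATH:$HADOOP_PREFIX/bin
```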
Hello All,
For an Apache Hadoop 2.x (YARN) installation, which *environment variables* are
really needed?
Referring to various blogs I am getting a mix of:
HADOOP_COMMON_HOME
HADOOP_CONF_DIR
HADOOP_HDFS_HOME
HADOOP_HOME
HADOOP_MAPRED_HOME
HADOOP_PREFIX
YARN_HOME
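A sketch of one common single-node setup, assuming a stock 2.x tarball; the install path, and the choice to point every per-component home at the same tree, are assumptions rather than requirements:

```shell
# e.g. in ~/.bashrc or etc/hadoop/hadoop-env.sh; adjust the path to your install.
export HADOOP_PREFIX=/usr/local/hadoop-2.3.0
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
# In a stock tarball install all components live in one tree, so the
# per-component homes can simply point at the prefix:
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export YARN_HOME=$HADOOP_PREFIX
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
```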
On Mon, Apr 14, 2014 at 9:19 AM, Radhe Radhe radhe.krishna.ra...@live.com wrote:
Hello People,
As per the Apache site
http://hadoop.apache.org/docs/r2.3.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html
Binary Compatibility:
"... able to compile it against MRv2 mapreduce libs without code changes, and execute it."
- Zhijie
On Tue, Apr 15, 2014 at 12:44 PM, Radhe Radhe radhe.krishna.ra...@live.com wrote:
Thanks John for your comments.
I believe MRv2 has support for both the old *mapred* APIs and the new *mapreduce* APIs.
Binary Compatibility:
"First, we ensure binary compatibility to the applications that use old mapred ..."
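In practice, the binary-compatibility claim means a job jar built against the MRv1 (org.apache.hadoop.mapred) libraries can be submitted unchanged to a YARN cluster; a sketch, where the jar name, class name, and paths are hypothetical placeholders:

```shell
# Submit an MRv1-compiled job jar to a Hadoop 2.x / YARN cluster as-is,
# with no recompilation; all names below are placeholders.
hadoop jar my-mrv1-job.jar com.example.WordCount /user/radhe/input /user/radhe/output
```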
Hi All,
I'm trying to get some hands-on with MapReduce programming.
I downloaded the code examples from Hadoop: The Definitive Guide, 3rd edition,
and built them using Maven:
mvn package -DskipTests -Dhadoop.distro=apache-2
Next I imported the Maven projects into Eclipse. Using Eclipse now I
Hello All,
Can anyone please explain what we mean by streaming data access in HDFS?
Data is usually copied to HDFS, and in HDFS the data is split across
DataNodes in blocks.
Say, for example, I have an input file of 10240 MB (10 GB) in size and a block
size of 64 MB. Then there will be 160 blocks.
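The block count above is just ceiling division of file size by block size; a quick sketch using the figures from the question:

```shell
# Number of HDFS blocks = ceil(file size / block size), using the figures
# from the question: a 10240 MB file with a 64 MB block size.
FILE_MB=10240
BLOCK_MB=64
BLOCKS=$(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))  # ceiling division
echo "$BLOCKS"  # prints 160
```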
You have already read this:
http://hadoop.apache.org/docs/r0.18.1/streaming.html#Hadoop+Streaming
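For reference, the utility that link describes drives a job through external commands; a sketch, where the streaming jar location and the HDFS paths are assumptions that vary by release and install:

```shell
# Hadoop Streaming: the mapper and reducer are ordinary shell commands.
# Jar path and input/output paths below are placeholders for your install.
hadoop jar $HADOOP_PREFIX/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -input /user/radhe/input \
  -output /user/radhe/output \
  -mapper /bin/cat \
  -reducer "wc -l"
```

Note that this Streaming utility is a separate thing from the "streaming data access" design goal asked about above, which refers to HDFS being optimized for large sequential reads rather than random access.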
Warm Regards_∞_
Shashwat Shriparv
On Wed, Mar 5, 2014 at 1:38 PM, Radhe Radhe radhe.krishna.ra...@live.com wrote:
Date: 5 Mar 2014 14:17:24 +0530
Subject: Re: Streaming data access in HDFS: Design Feature
From: nitinpawar...@gmail.com
To: user@hadoop.apache.org
Are you asking why data reads/writes from/to HDFS blocks via the MapReduce
framework are done in a streaming manner?
On Wed, Mar 5, 2014 at 2:05 PM, Radhe Radhe