Hello!
I built a Hadoop cluster of 12 ARM-based (Cubietruck) nodes. I ran the simple
WordCount program to count the words in a small input, and it runs
perfectly. But when I run a heavier example like pi, like this:
./hadoop jar hadoop-example-0.21.0.jar pi 100 10
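For context, the pi example estimates π by Monte Carlo sampling, and the two arguments control the number of map tasks and the samples per map. A toy, sequential sketch of the idea (the function name and seed are illustrative, not from the Hadoop source):

```python
import random

def estimate_pi(num_maps, samples_per_map, seed=42):
    """Toy sequential version of the distributed 'pi' example:
    each simulated map task throws darts at the unit square and
    counts how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    total = num_maps * samples_per_map
    for _ in range(total):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / total

# With 100 "maps" of 10 samples each the estimate is quite rough;
# more samples per map tightens it.
print(estimate_pi(100, 10))
```

This also shows why the argument choice matters: total work scales as num_maps × samples_per_map.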
Hey,
I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)
I want to build Hadoop 1.2.1 on my Ubuntu VM.
I'm not able to find the .src.tar.gz file for 1.2.1.
Can anyone help me out?
Thanks,
Vikram Bajaj.
Did you get to look at this?
https://wiki.apache.org/hadoop/HowToContribute
and this
https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt
Question: What are you trying to do here? Are you trying to contribute or
are you trying to learn?
I'm actually using Ubuntu as a VM on Windows 8.1.
On 18 Jul 2015 4:01 pm, Nitin Pawar nitinpawar...@gmail.com wrote:
what operating system are you using ?
read about git and source code management
On Sat, Jul 18, 2015 at 3:57 PM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:
Okay. Thank you!
But, is there any direct file to be downloaded? Versions like 2.7.1 have
the .src.tar.gz file... Has it been removed for 1.2.1?
On 18 Jul 2015 4:17 pm, Nitin Pawar nitinpawar...@gmail.com wrote:
please read about how to clone git repository and switch branch.
It would help you
What version of Apache Hadoop are you running? Recent changes have made
YARN auto-compute this via hardware detection by default (rather than
the fixed default of 8).
On Fri, Jul 17, 2015 at 11:31 PM Shushant Arora shushantaror...@gmail.com
wrote:
In YARN there is a setting to specify the number of vcores
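For reference, the setting Shushant refers to is typically pinned in yarn-site.xml; a hedged fragment (the value 8 is illustrative, not from the thread):

```xml
<!-- yarn-site.xml: explicitly set the vcores a NodeManager offers,
     instead of relying on defaults or hardware detection.
     The value here is illustrative. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```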
try checking out git repo and switch to branch 1.2 ?
https://github.com/apache/hadoop/tree/branch-1.2
please read about how to clone git repository and switch branch.
It would help you to get the code
I doubt there would be any new 1.x hadoop release.
Please take a look at the links James provided.
See also:
http://hadoop.apache.org/docs/r2.7.1/
There's a link to the trunk HDFS Jenkins job:
https://builds.apache.org/job/Hadoop-hdfs-trunk/
where you can find the command for building.
Okay! Thanks :)
On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:
http://apache.arvixe.com/hadoop/common/stable1/
The .tar.gz has source code.
I was just confused because versions like 2.7.1 have both the .tar.gz file
as well as the .src.tar.gz file, while version 1.2.1 has only the .tar.gz
file.
I'm trying to contribute. But I'm not sure how to. I'm really a beginner
and I clearly need to learn first.
But I'd like to know how to continue what I started.
On 18 Jul 2015 7:06 pm, James Bond bond.b...@gmail.com wrote:
Apache Hadoop 0.20 and 0.21 are both very old and unmaintained releases at
this point, and may carry some issues unfixed via further releases. Please
consider using a newer release.
Is there a specific reason you intend to use 0.21.0, which came out of a
branch long since abandoned?
Sorry, forgot about that.
This is the whole datanode log since the last startup:
http://pastebin.com/DAN6tQJY
The Hadoop version is 2.6.0; I installed it via the tarball.
It is a two-node cluster, with one node being both master and slave and one
pure slave node. I already tested this with
Could you please explain that in layman's terms? I'm pretty new to all this
:)
So, now that I have the .tar.gz file, how do I build Hadoop?
P.S.: I don't know if this matters, but I've already installed Java and SSH,
and set up the NameNode. All the nodes (namenode, secondarynamenode,
datanode) start up successfully, and I also ran a MapReduce example that
worked.
It's Hadoop 2.5.0.
What's the logic behind defaulting to hardware detection? Say my node has 8
physical cores and 32 virtual cores; the RM UI shows 26 as the vcores
available on this node.
On Sat, Jul 18, 2015 at 7:22 PM, Harsh J ha...@cloudera.com wrote:
Okay :)
I'm focusing on HDFS, by the way. Forgot to mention that. My bad!
On 18 Jul 2015 7:43 pm, Ted Yu yuzhih...@gmail.com wrote:
Hi,
Also, what scheduler are you using?
DefaultResourceCalculator only considers memory.
Regards
Original message: From: Shushant Arora shushantaror...@gmail.com,
Date: 18/07/2015 4:18 PM (GMT+02:00), To: user@hadoop.apache.org,
Subject: Re: total
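For reference, making the CapacityScheduler account for CPU as well as memory is typically done by switching its resource calculator in capacity-scheduler.xml; a hedged fragment (property and class names as in Hadoop 2.x):

```xml
<!-- capacity-scheduler.xml: use dominant-resource fairness so vcores
     are considered alongside memory when sizing containers. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```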
Hi Vikram!
Please join the common-dev@hadoop mailing list for these kinds of questions.
Working directly with the source repository is meant for folks working on
contributions in the community, and we define that boundary by joining the
development lists rather than the user lists.
--
Sean
Sure.
On 18 Jul 2015 11:06 pm, Sean Busbey bus...@cloudera.com wrote:
Hi,
I'd say that no matter which version is running, the parameters don't seem
to fit the cluster: it can't manage 100 maps that each process a billion
samples, so it's hitting the MapReduce timeout of 600 seconds.
I'd try with something like 20 10.
Ulul
On 18/07/2015 12:17, Harsh J wrote:
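If the larger run is legitimate rather than mis-parameterized, the 600-second timeout Ulul mentions can also be raised; a hedged mapred-site.xml fragment (using the Hadoop 2.x property name; 1.x called it mapred.task.timeout, and the value below is illustrative):

```xml
<!-- mapred-site.xml: per-task progress timeout in milliseconds;
     the default is 600000 (600 s). -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>1800000</value>
</property>
```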