I've built pig-withouthadoop.jar and have copied it to my linux box.
Now how do I put hadoop-core-0.20.203.0.jar and pig-withouthadoop.jar
on the classpath? Is it by using the CLASSPATH variable?
On Thu, May 26, 2011 at 10:18 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
On Thu, May 26, 2011 at 10
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
On Thu, May 26, 2011 at 10:55 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
I've built pig-withouthadoop.jar and have copied it to my linux box.
Now how do I put hadoop-core-0.20.203.0.jar
I added all the jars in HADOOP_HOME/lib to the classpath and now I get
to the grunt prompt. Will try the tutorials and see how it behaves :)
Thanks for your help!
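For the archive, a minimal sketch of that setup; the install paths below are assumptions, not from the thread:

export HADOOP_HOME=/opt/hadoop-0.20.203.0            # assumed install path
# pig adds PIG_CLASSPATH to the JVM classpath, so hadoop-core and its lib/ deps go here
export PIG_CLASSPATH=$HADOOP_HOME/hadoop-core-0.20.203.0.jar:$HADOOP_HOME/lib/*
/opt/pig/bin/pig                                     # should land at the grunt> prompt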
On Thu, May 26, 2011 at 9:56 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
I sent this to pig apache user mailing list but have
How can I tell how the map and reduce tasks were spread across the
cluster? I looked at the jobtracker web page but can't find that info.
Also, can I specify how many map or reduce tasks I want to be launched?
From what I understand, it's based on the number of input files
passed to
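The jobtracker's per-job pages do break this down (each task attempt is listed with the host it ran on), and the same detail is available from the command line; a sketch, with the output dir as a placeholder:

bin/hadoop job -list                        # ids of running jobs
bin/hadoop job -history all <output-dir>    # per-task detail, incl. hosts, for a finished job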
...@yahoo.co.in wrote:
Hi Mohit,
No of Maps - it depends on Total File Size / Block Size (one map per input split).
No of Reducers - you can specify.
Regards,
Jagaran
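Concretely, on the 0.20 line the reducer count can be passed as a generic option, assuming the job goes through ToolRunner/GenericOptionsParser as the bundled examples do:

bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount \
  -D mapred.reduce.tasks=8 input output
# maps are not settable this way: one map per input split (roughly total size / block size)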
From: Mohit Anchlia mohitanch...@gmail.com
To: common-user@hadoop.apache.org
Sent: Thu, 26 May, 2011 2:48:20 PM
Mohit Anchlia wrote:
I ran a simple pig script on this file:
-rw-r--r-- 1 root root 208348 May 26 13:43 excite-small.log
that orders the contents by name. But it only created one mapper. How
can I change this to distribute across multiple machines?
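One note that may help the archive: a 208 KB file is far below one HDFS block (64 MB by default), so it yields a single input split and hence a single map; PARALLEL raises only the reduce-side parallelism. A sketch of such a script, with field names assumed from the Pig tutorial schema:

-- order.pig (schema is an assumption)
raw = LOAD 'excite-small.log' USING PigStorage('\t') AS (user, time, query);
ordered = ORDER raw BY user PARALLEL 4;   -- PARALLEL controls reducers, not maps
STORE ordered INTO 'ordered_out';

Run with: bin/pig order.pig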
On Thu, May 26, 2011 at 3:08 PM
to do it. So
far I have installed hadoop and gone through the basic hadoop
word count tutorial, but I still lack knowledge of some important features.
Regards,
Aleksandr
--- On Tue, 5/24/11, Mohit Anchlia mohitanch...@gmail.com wrote:
From: Mohit Anchlia mohitanch...@gmail.com
Subject: Re
I just started learning hadoop and finished the wordcount mapreduce
example. I also briefly looked at hadoop streaming.
Some questions:
1) What should be my first step now? Are there more examples
somewhere that I can try out?
2) The second question is around practical usability with xml files. Our
to know how many customers are above a certain age,
or of a certain age with a certain income, etc.
Sorry for all the questions. I am new and trying to get a grasp and
also learn how I would actually solve our use case.
Regards,
Aleksandr
--- On Tue, 5/24/11, Mohit Anchlia mohitanch...@gmail.com wrote:
From
Hadoop has built-in counters; did you look into the word count example from hadoop?
Regards,
Aleksandr
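Since hadoop streaming came up earlier in the thread, a sketch of the counter idea there: a streaming task can bump a counter by writing a reporter line to stderr. The record layout below is an assumption:

#!/bin/sh
# count_age.sh - hypothetical streaming mapper; assumes tab-separated input, age in field 2
while read line; do
  age=$(printf '%s\n' "$line" | cut -f2)
  case $age in
    *[!0-9]*|'') ;;                            # skip records with a non-numeric age
    *) [ "$age" -gt 40 ] && echo "reporter:counter:customers,over_40,1" >&2 ;;
  esac
done

It would be wired in with something like:
bin/hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
  -input customers -output age_counts -mapper count_age.sh -file count_age.sh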
--- On Tue, 5/24/11, Mohit Anchlia mohitanch...@gmail.com wrote:
From: Mohit Anchlia mohitanch...@gmail.com
Subject: Re: Processing xml files
To: common-user@hadoop.apache.org
Date: Tuesday
Is hadoop going to format the entire file system when I run bin/hadoop
namenode -format?
For eg: I have mountpoint /data and there are other files in /data,
but I created a directory hadoop inside /data. Now if I run namenode
-format, would it format everything inside /data, or can I tell it to
format
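For reference: namenode -format initializes only the storage directories the namenode is configured with, never the rest of the mount point, so keeping them under /data/hadoop confines it there. A sketch, with the exact path an assumption:

<!-- conf/hdfs-site.xml -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/hadoop/name</value>  <!-- the only directory namenode -format touches -->
</property>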
Is there any specific reason for this logic in the hadoop file under bin
where it checks for EUID? Whenever I run start-dfs.sh it goes into the -jvm
server if block, and my jvm doesn't support that option.
elif [ "$COMMAND" = "datanode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
  if [[ $EUID -eq 0 ]]; then
    HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
  fi
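For context: in the 0.20.20x security releases, a datanode started as root is assumed to be a secure datanode launched through jsvc, and jsvc understands the -jvm flag while a plain java binary does not, hence the failure under root. A sketch of the usual workaround, assuming no secure-datanode setup is needed (user name and paths are hypothetical):

useradd hduser                                  # hypothetical unprivileged user
chown -R hduser /data/hadoop                    # assumed storage dir
su - hduser -c "/opt/hadoop/bin/start-dfs.sh"   # EUID != 0 avoids the -jvm branch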
, Hemanth Yamijala yhema...@gmail.com wrote:
Hi,
Can you please confirm if you've set JAVA_HOME in
conf-dir/hadoop-env.sh on all the nodes?
Thanks
Hemanth
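For concreteness, that means setting the line below in conf/hadoop-env.sh on each node; the JDK path is the one from the java -version output further down:

# conf/hadoop-env.sh
export JAVA_HOME=/root/jdk1.6.0_21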
On Tue, Aug 31, 2010 at 6:21 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
Hi,
I am running some basic setup and tests to learn about hadoop
Hi,
I am running some basic setup and tests to learn about hadoop. When I
try to start the nodes I get this error. I am already using java 1.6. Can
someone please help?
# echo $JAVA_HOME
/root/jdk1.6.0_21/
# java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java