How do I list all the datanodes like you've shown in your example?
Thanks
Mithila
On Sun, Apr 26, 2009 at 6:54 PM, Usman Waheed usm...@opera.com wrote:
Hi,
One of my datanodes is practically underutilized in the cluster of 4
datanodes and one namenode.
I executed hadoop -setrep -w 2
to verify that you can connect from the
remote machine.
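A minimal sketch of the commands involved here, assuming a 0.18-era CLI and a placeholder path:
$ bin/hadoop dfsadmin -report            # lists every datanode with its configured, used and remaining capacity
$ bin/hadoop dfs -setrep -w 2 /user/data # set the replication factor to 2 and wait for it to take effect
$ bin/start-balancer.sh                  # move blocks around so usage evens out across the datanodes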
On Thu, Apr 16, 2009 at 9:18 PM, Mithila Nagendra mnage...@asu.edu
wrote:
Thanks! I'll see what I can find out.
On Fri, Apr 17, 2009 at 4:55 AM, jason hadoop jason.had...@gmail.com
wrote:
The firewall was run at system startup
right? It seems like the datanodes aren't getting an IP address
to use, and I'm not sure why.
jpe30 wrote:
That helps a lot actually. I will try setting up my hosts file tomorrow
and make the other changes you suggested.
Thanks!
Mithila Nagendra wrote:
Hi,
The replication
Hey Jason
The problem's fixed! :) My network admin had messed something up! Now it
works! Thanks for your help!
Mithila
On Thu, Apr 16, 2009 at 11:58 PM, Mithila Nagendra mnage...@asu.edu wrote:
Thanks Jason! This helps a lot. I'm planning to talk to my network admin
tomorrow. I'm hoping he'll
contain the same information on all nodes. Also
hadoop-site.xml files on all nodes should have master:portno for HDFS and
tasktracker.
Once you do this, restart Hadoop.
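A sketch of the hadoop-site.xml entries meant here, assuming the master is reachable by the hostname master; 54310 is the HDFS port used elsewhere in this thread, while 54311 for the jobtracker is only an assumption:
<property>
  <name>fs.default.name</name>
  <value>master:54310</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
</property>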
On Fri, Apr 17, 2009 at 10:04 AM, jpe30 jpotte...@gmail.com wrote:
Mithila Nagendra wrote:
You have to make sure that you
.
It turned out the kickstart script enabled the firewall with a rule
that blocked ports in the 50k range.
It took us a while to even think to check it, as that was not a part of our normal
machine configuration.
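A rough sketch for checking the same thing on a datanode, assuming a RHEL-style iptables firewall like the one the kickstart enabled:
$ /sbin/service iptables status          # shows whether the firewall is on and which rules are loaded
$ /sbin/iptables -L -n                   # look for rules covering the 50xxx ports Hadoop uses (50010, 50070, 54310, ...)
$ /sbin/service iptables stop            # temporarily disable it to rule the firewall out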
On Wed, Apr 15, 2009 at 11:04 AM, Mithila Nagendra mnage...@asu.edu
wrote:
Hi Aaron
.
On Thu, Apr 16, 2009 at 1:28 PM, Mithila Nagendra mnage...@asu.edu
wrote:
Jason: the kickstart script - was it something you wrote or is it run
when
the system turns on?
Mithila
On Thu, Apr 16, 2009 at 1:06 AM, Mithila Nagendra mnage...@asu.edu
wrote:
Thanks Jason
org.apache.hadoop.ipc.Client: Retrying connect
to server: node18/192.168.0.18:54310. Already tried 2 time(s).
Hmmm, I still can't figure it out..
Mithila
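One way to narrow a retry loop like that down, as a sketch using the host and port from the log line above:
$ jps                                    # on node18: is the NameNode process actually running?
$ netstat -tln | grep 54310              # on node18: is anything listening on port 54310?
$ telnet node18 54310                    # from the node logging the retries: can the port be reached at all?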
On Tue, Apr 14, 2009 at 10:22 PM, Mithila Nagendra mnage...@asu.edu wrote:
Also, would the way the port is accessed change if all these node
The log file runs into thousands of lines with the same message being
displayed every time.
On Wed, Apr 15, 2009 at 8:10 PM, Mithila Nagendra mnage...@asu.edu wrote:
The log file: hadoop-mithila-datanode-node19.log.2009-04-14 has the
following in it:
2009-04-14 10:08:11,499 INFO
,
--
Ravi
On 4/15/09 10:15 AM, Mithila Nagendra mnage...@asu.edu wrote:
The log file runs into thousands of lines with the same message being
displayed every time.
On Wed, Apr 15, 2009 at 8:10 PM, Mithila Nagendra mnage...@asu.edu
wrote:
The log file : hadoop-mithila-datanode-node19.log.2009
Hi,
The replication factor has to be set to 1. Also, for your DFS and job tracker
configuration you should insert the name of the node rather than the IP
address.
For instance:
<value>192.168.1.10:54310</value>
can be:
<value>master:54310</value>
The nodes can be renamed by renaming them in the hosts file.
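A sketch of the kind of /etc/hosts entries meant, with placeholder names and the example address from above; every node should carry the same lines:
192.168.1.10    master
192.168.1.11    slave1
192.168.1.12    slave2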
I've drawn a blank here! Can't figure out what's wrong with the ports. I can
ssh between the nodes but can't access the DFS from the slaves - it says Bad
connection to DFS. Master seems to be fine.
Mithila
On Tue, Apr 14, 2009 at 4:28 AM, Mithila Nagendra mnage...@asu.edu wrote:
Yes I can
wrote:
Are there any error messages in the log files on those nodes?
- Aaron
On Tue, Apr 14, 2009 at 9:03 AM, Mithila Nagendra mnage...@asu.edu
wrote:
I've drawn a blank here! Can't figure out what's wrong with the ports. I
can
ssh between the nodes but can't access the DFS from the slaves
Also, would the way the port is accessed change if all these nodes are
connected through a gateway? I mean in the hadoop-site.xml file? The Ubuntu
systems we worked with earlier didn't have a gateway.
Mithila
On Tue, Apr 14, 2009 at 9:48 PM, Mithila Nagendra mnage...@asu.edu wrote:
Aaron: Which
, 2009 at 6:54 PM, Mithila Nagendra mnage...@asu.edu
wrote:
Aaron
That could be the issue, my data is just 516MB - wouldn't this see a bit
of speedup?
Could you guide me to the example? I'll run my cluster on it and see what
I get. Also for my program I had a java
Yes I can..
On Mon, Apr 13, 2009 at 5:12 PM, Jim Twensky jim.twen...@gmail.com wrote:
Can you ssh between the nodes?
-jim
On Mon, Apr 13, 2009 at 6:49 PM, Mithila Nagendra mnage...@asu.edu
wrote:
Thanks Aaron.
Jim: The three clusters I set up had Ubuntu running on them and the dfs
You have to stop the cluster before you reformat. Also restarting the master
might help.
Mithila
2009/4/12 halilibrahimcakir halilibrahimca...@mynet.com
I typed:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Deleted this directory:
$ rm
, Mithila Nagendra mnage...@asu.edu
wrote:
Hey all
I recently set up a three-node Hadoop cluster and ran an example on it. It
was pretty fast, and all three nodes were being used (I checked the log
files to make sure that the slaves are utilized).
Now I've set up another cluster
To add to the question, how does one decide what is the optimal replication
factor for a cluster? For instance, what would be the appropriate replication
factor for a cluster consisting of 5 nodes?
Mithila
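For reference, a sketch of where the factor is set, assuming hadoop-site.xml and the shipped default of 3; changing it only affects files written afterwards, so existing files need setrep:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
$ bin/hadoop dfs -setrep -w 3 -R /user/hadoop/dir1   # re-replicate an existing directory to the new factor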
On Fri, Apr 10, 2009 at 8:20 AM, Alex Loddengaard a...@cloudera.com wrote:
Did you load
likely that there's another, more MapReduce-y way of looking at the job and
refactoring the code to make it work more cleanly with the intended
programming model.
- Aaron
On Mon, Apr 6, 2009 at 10:08 PM, Mithila Nagendra mnage...@asu.edu
wrote:
Thanks! I was looking at the link sent by Philip
before it is further processed.
Does this model make sense?
- Aaron
On Tue, Apr 7, 2009 at 1:06 AM, Mithila Nagendra mnage...@asu.edu wrote:
Aaron,
We hope to achieve a level of pipelining between two clusters - similar to
how pipelining is done in executing RDB queries. You can look
Hey all!
Is there a way to print out the execution time of a MapReduce task? An
inbuilt function or option to be used with bin/hadoop?
Thanks!
Mithila Nagendra
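Two low-tech options, sketched here since neither is a dedicated Hadoop flag: wrap the job submission in the shell's time builtin, or read the per-job start and finish times off the JobTracker web UI (port 50030 by default). The jar name below assumes the 0.18.3 examples jar:
$ time bin/hadoop jar hadoop-0.18.3-examples.jar wordcount input output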
Hey all
I'm trying to connect two separate Hadoop clusters. Is it possible to do so?
I need data to be shuttled back and forth between the two clusters. Any
suggestions?
Thank you!
Mithila Nagendra
Arizona State University
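One common approach, sketched with placeholder namenode addresses, is distcp, which runs a MapReduce job to copy files from one HDFS to the other:
$ bin/hadoop distcp hdfs://clusterA-nn:54310/path/src hdfs://clusterB-nn:54310/path/dst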
the data produced by the map/reduce at the lower level.
Mithila
On Tue, Apr 7, 2009 at 7:57 AM, Owen O'Malley omal...@apache.org wrote:
On Apr 6, 2009, at 9:49 PM, Mithila Nagendra wrote:
Hey all
I'm trying to connect two separate Hadoop clusters. Is it possible to do
so?
I need data
Hey all
I'm using Hadoop version 0.18.3, and was wondering if the reduce phase
starts only after the mapping is completed. Is it required that the Map
phase is 100% done, or can it be programmed in such a way that the reduce
starts earlier?
Thanks!
Mithila Nagendra
Arizona State University
Hey guys
I am currently working on a project where I need the input to be read by the
map/reduce word count program as and when it is generated - I don't want the
input to be stored in a text file. Is there a way Hadoop can read from a
stream? It's similar to the producer-consumer problem - word
Hey Sandy
I had a similar problem with Hadoop. All I did was stop all the daemons
using stop-all.sh, then format the namenode again using hadoop namenode
-format. After this I went on to restart everything by using start-all.sh.
I hope you don't have much data on the datanode,
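Roughly that sequence as a sketch, run from $HADOOP_HOME on the master; note that formatting wipes everything stored in HDFS:
$ bin/stop-all.sh
$ bin/hadoop namenode -format
$ bin/start-all.sh
$ jps          # on a single-node setup this should now list NameNode, DataNode, JobTracker and TaskTracker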
Hey all
I was trying to run the word count example on one of the Hadoop systems I
installed, but when I try to copy the text files from the local file system
to the DFS, it throws up the following exception:
[mith...@node02 hadoop]$ jps
8711 JobTracker
8805 TaskTracker
8901 Jps
8419 NameNode
8642
Hey all
When I try to copy a folder from the local file system into HDFS using
the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
which says Bad connection to FS. How do I get past this? The following is
the output at the time of execution:
:
Mithila Nagendra wrote:
Hey steve
The version is: Linux enpc3740.eas.asu.edu 2.6.9-67.0.20.EL #1 Wed Jun 18
12:23:46 EDT 2008 i686 i686 i386 GNU/Linux, this is what I got when I used
the command uname -a
On Tue, Nov 25, 2008 at 1:50 PM, Steve Loughran ste...@apache.org
wrote:
Mithila Nagendra
Hey steve
The version is: Linux enpc3740.eas.asu.edu 2.6.9-67.0.20.EL #1 Wed Jun 18
12:23:46 EDT 2008 i686 i686 i386 GNU/Linux, this is what I got when I used
the command uname -a
On Tue, Nov 25, 2008 at 1:50 PM, Steve Loughran [EMAIL PROTECTED] wrote:
Mithila Nagendra wrote:
Hey Steve
I
Thanks Steve! Will take a look at it..
Mithila
On Mon, Nov 24, 2008 at 6:32 PM, Steve Loughran [EMAIL PROTECTED] wrote:
Mithila Nagendra wrote:
I tried dropping the jar files into the lib. It still doesn't work.. The
following is how the lib looks after the new files were put in:
[EMAIL
] wrote:
Mithila Nagendra wrote:
I tried dropping the jar files into the lib. It still doesn't work.. The
following is how the lib looks after the new files were put in:
[EMAIL PROTECTED] hadoop-0.17.2.1]$ cd bin
[EMAIL PROTECTED] bin]$ ls
hadoop  hadoop-daemon.sh  rcc  start
Hey Steve
I deleted whatever I needed to.. still no luck..
You said that the classpath might be messed up.. Is there some way I can
reset it? For the root user? What path do I set it to?
Mithila
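A sketch of checking and clearing it for the current shell, assuming the suspect entries were set in root's profile; bin/hadoop builds its own classpath from $HADOOP_HOME and $HADOOP_HOME/lib, so an empty CLASSPATH is normally fine:
$ echo $CLASSPATH    # see what is currently set
$ unset CLASSPATH    # clear it for this shell
# then remove any stray export CLASSPATH=... lines from /root/.bashrc or /etc/profile and log in again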
On Mon, Nov 24, 2008 at 8:54 PM, Steve Loughran [EMAIL PROTECTED] wrote:
Mithila Nagendra wrote
On Fri, Nov 21, 2008 at 9:19 AM, Mithila Nagendra [EMAIL PROTECTED]
wrote:
Hey ALex
Which file do I download from the apache commons website?
Thanks
Mithila
On Fri, Nov 21, 2008 at 8:15 PM, Mithila Nagendra [EMAIL PROTECTED]
wrote:
I tried the 0.18.2 as well.. it gave me the same
/downloads/download_logging.cgi) and drop them
into $HADOOP_HOME/lib.
Just curious, if you're starting a new cluster, why have you chosen to use
0.17.* and not 0.18.2? It would be a good idea to use 0.18.2 if possible.
Alex
On Thu, Nov 20, 2008 at 4:36 PM, Mithila Nagendra [EMAIL PROTECTED
Hey ALex
Which file do I download from the apache commons website?
Thanks
Mithila
On Fri, Nov 21, 2008 at 8:15 PM, Mithila Nagendra [EMAIL PROTECTED] wrote:
I tried the 0.18.2 as well.. it gave me the same exception.. so tried the
lower version.. I should check if this works.. Thanks
On Fri, Nov 21, 2008 at 9:22 PM, Alex Loddengaard [EMAIL PROTECTED] wrote:
Download the 1.1.1.tar.gz binaries. This file will have a bunch of JAR
files; drop the JAR files into $HADOOP_HOME/lib and see what happens.
Alex
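A sketch of that step, assuming the commons-logging 1.1.1 binary tarball from the download page mentioned above (the exact file name and layout inside the archive may differ):
$ tar xzf commons-logging-1.1.1-bin.tar.gz
$ cp commons-logging-1.1.1/*.jar $HADOOP_HOME/lib/
# restart the Hadoop daemons afterwards so the new jars are picked up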
On Fri, Nov 21, 2008 at 9:19 AM, Mithila Nagendra [EMAIL PROTECTED
!
On Wed, Nov 19, 2008 at 6:38 PM, Tom Wheeler [EMAIL PROTECTED] wrote:
On Wed, Nov 19, 2008 at 5:31 PM, Mithila Nagendra [EMAIL PROTECTED]
wrote:
Oh, is that so. I'm not sure which UNIX it is since I'm working with a
cluster
that is remotely accessed.
If you can get a shell
of the exception if you wish. It would be
of immense help if you could provide answers for the above questions.
Thank you! Looking forward to your reply.
Best Regards
Mithila Nagendra
I've attached the screenshots of the exception and hadoop-site.xml.
Thanks!
On Wed, Nov 19, 2008 at 9:12 PM, Mithila Nagendra [EMAIL PROTECTED] wrote:
Hello
I'm currently a student at Arizona State University, Tempe, Arizona,
pursuing my master's in Computer Science. I'm currently involved
Hey
My Hadoop version is 0.17.0.. check out the screenshots I've put in..
Mithila
On Wed, Nov 19, 2008 at 9:29 PM, Sagar Naik [EMAIL PROTECTED] wrote:
Mithila Nagendra wrote:
Hello
I'm currently a student at Arizona State University, Tempe, Arizona,
pursuing my master's in Computer Science
think this list might strip
them. Can you copy-paste the error? Though I think the error won't be
useful. I'm pretty confident your issue is with Java. What UNIX are you
using?
Alex
On Wed, Nov 19, 2008 at 11:38 AM, Mithila Nagendra [EMAIL PROTECTED]
wrote:
Hey
My hadoop version