How to make the wordcount-nopipe example work?

2008-01-03 Thread David D.
I compiled it and put it into dfs, but always hit the error: Hadoop Pipes Exception: failed to open file: at test/wordcount-nopipe.cc:66 in WordCountReader::WordCountReader(HadoopPipes::MapContext). I used a command as follows: $ hadoop pipes -program exe/wordcount-nopipe -input input/ -output
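
For reference, a sketch of a fuller invocation, assuming the 0.15-era pipes Submitter options (-jobconf in particular); wordcount-nopipe supplies its own C++ RecordReader and RecordWriter, so the Java ones must be disabled:

    $ hadoop pipes \
        -jobconf hadoop.pipes.java.recordreader=false,hadoop.pipes.java.recordwriter=false \
        -program exe/wordcount-nopipe \
        -input input/ -output output/

The "failed to open file" assertion is raised when the C++ reader cannot fopen() the filename it deserialized from the input split, so the input must be a plain path that each task can open directly.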

Re: Not able to start Data Node

2008-01-03 Thread Dhaya007
Thanks Arun, I am able to run the datanode on the slave (as per the solution given by you (listening port)). But it still asks for the password while starting the dfs and mapreduce. First I generated a passwordless RSA key as follows: ssh-keygen -t rsa -P '' ; cat $HOME/.ssh/id_rsa.pub

Re: Not able to start Data Node

2008-01-03 Thread Miles Osborne
You need to make sure that each slave node has a copy of the authorised keys you generated on the master node. Miles On 03/01/2008, Dhaya007 [EMAIL PROTECTED] wrote: Thanks Arun, I am able to run the datanode on the slave (as per the solution given by you (listening port)). But still it asks

Re: Not able to start Data Node

2008-01-03 Thread Khalil Honsali
I also note that for non-root passwordless ssh, you must chmod the authorized_keys file to 655. On 03/01/2008, Miles Osborne [EMAIL PROTECTED] wrote: You need to make sure that each slave node has a copy of the authorised keys you generated on the master node. Miles On 03/01/2008, Dhaya007
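
Pulling the three replies together, a sketch of the whole setup (slave1 is a placeholder hostname; note that sshd is usually stricter than the 655 mentioned above, with 600 being the common choice):

    # on the master: generate a passphrase-less key
    $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    # copy the authorized keys to every slave (assumes ~/.ssh already exists there)
    $ scp ~/.ssh/authorized_keys slave1:~/.ssh/authorized_keys
    # tighten permissions so sshd will accept the file
    $ ssh slave1 'chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'
    # verify: should print the slave hostname without prompting for a password
    $ ssh slave1 hostname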

Re: Why is getTracker() method in JobTracker class no longer in 0.15.1 release?

2008-01-03 Thread Arun C Murthy
Taeho Kang wrote: Dear Hadoop Users and Developers, It looks like the getTracker() method in the JobTracker class (used to get hold of a running JobTracker instance) no longer exists in the 0.15.1 release. The reason I want an instance of JobTracker is to get some information about the current and old job
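
Since the question is how to get job information without a JobTracker instance, a hedged sketch of the usual substitutes, which talk to the JobTracker over its RPC and HTTP interfaces (the job id shown is hypothetical, and whether -list is present in 0.15.1 is worth verifying):

    $ hadoop job -status job_200801030000_0001   # progress and state of one job
    $ hadoop job -list                           # running jobs, if this release includes -list

The JobTracker web UI (default port 50030) exposes similar details for current and completed jobs.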

Damage Control

2008-01-03 Thread Jeff Eastman
I have a small cloud running with about 100 GB of data in the dfs. All appeared normal until yesterday, when Eclipse could not access the dfs. Investigating: 1. I logged onto the master machine and attempted to upload a local file. Got 6 errors like: 08/01/02 21:34:43 WARN fs.DFSClient:
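
Before resetting anything, the namenode can be asked for a health report; a sketch using two commands available in this era (exact output varies by release):

    $ hadoop dfsadmin -report        # capacity and liveness of each datanode
    $ hadoop fsck / -files -blocks   # missing, under-replicated, or corrupt blocks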

Re: Datanode Problem

2008-01-03 Thread Ted Dunning
Localhost should never appear in either of these files since they are read on many machines (and the meaning of localhost is different on each one). On 1/3/08 7:01 AM, Natarajan, Senthil [EMAIL PROTECTED] wrote: Thanks. After replacing localhost with the machine name in /conf/masters and
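
As an illustration of Ted's point, both files should list real hostnames that resolve the same way from every machine (the example.com names here are placeholders):

    $ cat conf/masters
    master.example.com
    $ cat conf/slaves
    slave1.example.com
    slave2.example.com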

Question for HBase users

2008-01-03 Thread Jim Kellerman
Do you have data stored in HBase that you cannot recreate? HADOOP-2478 will introduce an incompatible change in how HBase lays out files in HDFS so that should the root or meta tables be corrupted, it will be possible to reconstruct them from information in the file system alone. The problem is

RE: Damage Control

2008-01-03 Thread Jeff Eastman
I decided just to reset the dfs and it is up again. Any ideas on what might have happened? Jeff
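
For readers wondering what "reset the dfs" involves, a sketch of the usual sequence; note that it erases all DFS data, and the /tmp path is only the default hadoop.tmp.dir location (an assumption, since Jeff's configuration isn't shown):

    $ bin/stop-all.sh
    $ rm -rf /tmp/hadoop-${USER}     # default storage; adjust to your dfs.name.dir/dfs.data.dir
    $ bin/hadoop namenode -format
    $ bin/start-all.sh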

Using Nutch for crawling + storing RSS feeds.

2008-01-03 Thread Manoj Bist
Hi, I need to build a system that crawls a given set of RSS feed URLs periodically. For each RSS feed, the system needs to maintain a master RSS feed that contains all the items, i.e., even though old items get dropped from the RSS feed, the master RSS feed contains all the items. Does something

Re: Question for HBase users

2008-01-03 Thread Billy
Seeing that the working release only came out 3 months ago, I would hope no one has data stored in HBase at this time that is not backed up and/or stored somewhere else, so that puts me at a -1 on the migration utility. But I might be wrong; for the most part, anyone using HBase should be able to output