Running hadoop on multinode cluster

2007-12-18 Thread M.Shiva
Hi, We have followed the steps to run Hadoop on Linux implementing a multi-node cluster: http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster) We're running 2 nodes, one as master, the other as slave. We have started the namenode but the slave node fails. Since we

Re: Running hadoop on multinode cluster

2007-12-18 Thread Arun C Murthy
M.Shiva wrote: Hi, We have followed the steps to run Hadoop on Linux implementing a multi-node cluster: http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster) We're running 2 nodes, one as master, the other as slave. We have started the namenode but the slave node
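For readers hitting the same wall: in the tutorial's two-node layout, the master keeps two plain-text host lists that the start scripts read. A sketch of those files, assuming the hostnames "master" and "slave" from the tutorial (note that conf/masters controls where the secondary namenode runs, not which node is "the master" — a common misreading):

```
# conf/masters (on the master box) — host(s) for the secondary namenode:
master

# conf/slaves (on the master box) — one line per worker that should run
# a DataNode and TaskTracker; in the tutorial the master also works:
master
slave
```

If the slave daemons fail to start, the slave-side logs under logs/ usually name the cause (often passwordless SSH or a hostname mismatch).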

RE: Hbase tutorial?

2007-12-18 Thread edward yoon
Try the following:

  hql > create table webtable(
          contents MAX_VERSIONS=10 COMPRESSION=BLOCK,
          anchor MAX_LENGTH=256 BLOOMFILTER=COUNTING_BLOOMFILTER
            VECTOR_SIZE=1 NUM_HASH=4);

  * BLOOMFILTER=NONE|BLOOMFILTER|COUNTING_BLOOMFILTER|RETOUCHED_BLOOMFILTER

Thanks, Edward.
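To make the COUNTING_BLOOMFILTER option above less opaque: a counting Bloom filter keeps small counters instead of single bits, which is what lets it support removal (a plain Bloom filter cannot un-set a bit shared by other keys). A minimal illustrative sketch, not HBase's actual implementation; VECTOR_SIZE and NUM_HASH here mirror the HQL options:

```python
import hashlib

class CountingBloomFilter:
    """Toy counting Bloom filter. False positives are possible,
    false negatives are not (as long as adds/removes are balanced)."""

    def __init__(self, vector_size=1024, num_hash=4):
        self.counters = [0] * vector_size   # VECTOR_SIZE counters
        self.num_hash = num_hash            # NUM_HASH hash functions

    def _positions(self, key):
        # Derive num_hash positions by salting one hash function.
        for i in range(self.num_hash):
            h = hashlib.md5(("%d:%s" % (i, key)).encode()).hexdigest()
            yield int(h, 16) % len(self.counters)

    def add(self, key):
        for p in self._positions(key):
            self.counters[p] += 1

    def remove(self, key):
        # Deletion works by decrementing — the counting variant's advantage.
        for p in self._positions(key):
            if self.counters[p] > 0:
                self.counters[p] -= 1

    def might_contain(self, key):
        return all(self.counters[p] > 0 for p in self._positions(key))
```

The store consults the filter before seeking into a file: a negative answer skips the read entirely.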

Re: hbase error - Too many open files

2007-12-18 Thread stack
See if the last item in the FAQ fixes your issue, Billy: http://wiki.apache.org/lucene-hadoop/Hbase/FAQ St.Ack Billy wrote: I have tried to load hbase several times and it always keeps failing 2007-12-18 14:21:45,062 FATAL org.apache.hadoop.hbase.HRegionServer: Replay of hlog required. Forcing server
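The usual fix for "Too many open files" under DFS/HBase load is raising the per-user file-descriptor limit, since region servers and datanodes hold many files open at once. A sketch assuming Linux with pam_limits enabled and a daemon account named "hadoop" (both assumptions; adjust to your setup, and re-login for the change to take effect):

```
# /etc/security/limits.conf
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768

# Verify from a fresh shell as that user:
#   ulimit -n
```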

Re: Problem bringing up TaskTracker on slave nodes...

2007-12-18 Thread C G
Just to close the loop on this, and to make sure someone else doesn't hit the same problem: this turned out to be a case of cockpit error. I had misread the documentation concerning mapred.task.tracker.report.bindAddress and had set it to point to the master node. I should have left
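For anyone who made the same misreading: the report address is where a TaskTracker's own child tasks connect back to their local TaskTracker, so it must stay a local address on each slave rather than the master's hostname. A hadoop-site.xml sketch showing the value left at what I believe was the shipped default of that era (verify against your hadoop-default.xml):

```
<property>
  <name>mapred.task.tracker.report.bindAddress</name>
  <value>127.0.0.1</value>
  <!-- local to each TaskTracker; do NOT set this to the master -->
</property>
```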

Re: hbase error - Too many open files

2007-12-18 Thread Billy
Thanks for that. I had two blocks I had to delete with hadoop fsck / -delete because they were corrupted, but I am unsure if I lost data from HBase. Looks like I still have data, just not sure what files the corrupted blocks were in; if I did lose some info it was not much. I would think there would be a
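For anyone in the same spot: fsck can tell you which files the corrupt blocks belong to before you destroy anything, and offers a less destructive option than -delete. A command sketch (these require a running cluster; flag set as I recall it from this era — check bin/hadoop fsck output for your version):

```
# Read-only health report, no changes made:
bin/hadoop fsck /

# Show which files own which blocks, and where the replicas live:
bin/hadoop fsck / -files -blocks -locations

# Move corrupt files to /lost+found instead of deleting them:
bin/hadoop fsck / -move

# Delete corrupt files outright (what was run here; their data is gone):
bin/hadoop fsck / -delete
```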

Re: Nutch crawl problem

2007-12-18 Thread jibjoice
I can't solve it now, please help me. jibjoice wrote: I use nutch-0.9 and hadoop-0.12.2, and I use this command: bin/nutch crawl urls -dir crawled -depth 3. It gives this error: - crawl started in: crawled - rootUrlDir = input - threads = 10 - depth = 3 - Injector: starting - Injector: crawlDb:

about API of HBase

2007-12-18 Thread ma qiang
Hi colleagues, After reading the API docs about HBase, I don't know how to manipulate HBase using the Java API. Would you please send me some examples? Thank you! Ma Qiang Department of Computer Science and Engineering Fudan University Shanghai, P. R. China

Re: Nutch crawl problem

2007-12-18 Thread pvvpr
Basically your indexes are empty since no URLs were generated and fetched. See this:
- Generator: 0 records selected for fetching, exiting ...
- Stopping at depth=0 - no more URLs to fetch.
- No URLs to fetch - check your seed list and URL filters.
- crawl finished: crawled
when no
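The most common cause of "0 records selected" with the one-step crawl command in Nutch 0.9 is the stock URL filter, which ships with a placeholder that rejects every URL until edited. A sketch of the two files to check, using apache.org as a stand-in domain:

```
# urls/seed.txt — one full URL per line, protocol included:
http://lucene.apache.org/

# conf/crawl-urlfilter.txt — replace the shipped placeholder line
#   +^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/
# with the domain you actually want crawled, e.g.:
+^http://([a-z0-9]*\.)*apache.org/
```

If the seed URLs don't match any accept (+) pattern, the injector admits nothing and every depth generates zero records.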

Re: about API of HBase

2007-12-18 Thread Peter Boot
Look in src/contrib/hbase/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java On Dec 18, 2007 7:51 PM, ma qiang [EMAIL PROTECTED] wrote: Hi colleagues, After reading the API docs about HBase, I don't know how to manipulate HBase using the Java API. Would you please send me some
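That test class is the best reference for the real Java calls. As a complement, it helps to understand the data model those calls operate on: each table maps a row key to columns ("family:qualifier"), and each cell keeps multiple timestamped versions, trimmed to the schema's MAX_VERSIONS. A toy in-memory model of that structure (illustrative only, not the HBase API; class and names are made up):

```python
from collections import defaultdict

class ToyTable:
    """Toy model of an HBase table: row -> column -> versioned cell.
    Versions are kept newest-first and trimmed to max_versions,
    like MAX_VERSIONS=10 in the table schema."""

    def __init__(self, max_versions=10):
        self.max_versions = max_versions
        # row key -> 'family:qualifier' -> list of (timestamp, value)
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, timestamp, value):
        cell = self.rows[row][column]
        cell.append((timestamp, value))
        cell.sort(key=lambda tv: tv[0], reverse=True)  # newest first
        del cell[self.max_versions:]                   # drop oldest extras

    def get(self, row, column):
        # A plain get returns the newest version, as HBase does.
        cell = self.rows[row][column]
        return cell[0][1] if cell else None
```

In the real client the same shape shows up as an HTable keyed by row, with column names and timestamps on every read and write.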

RE: point in time snapshot

2007-12-18 Thread Jim Kellerman
Billy, Are you referring to snapshots of the entire DFS or of HBase? --- Jim Kellerman, Senior Engineer; Powerset -Original Message- From: news [mailto:[EMAIL PROTECTED] On Behalf Of Billy Sent: Tuesday, December 18, 2007 4:29 PM To: hadoop-user@lucene.apache.org Subject: point in

Re: Nutch crawl problem

2007-12-18 Thread jibjoice
Where should I fix this? Why did it generate 0 records? pvvpr wrote: basically your indexes are empty since no URLs were generated and fetched. See this, - Generator: 0 records selected for fetching, exiting ... - Stopping at depth=0 - no more URLs to fetch. - No URLs to fetch -

How can I get parameters from mappers?

2007-12-18 Thread kauu
How can I get parameters from mappers? I can't do it with JobConf; I've tried it. -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Chris Dyer Sent: November 7, 2007 6:04 To: hadoop-user@lucene.apache.org; [EMAIL PROTECTED] Subject: Re: configuration for mappers? Hi
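In the Java API the usual pattern is: the driver calls conf.set("my.param", value) on the JobConf it submits, and the Mapper reads it inside configure(JobConf job) via job.get("my.param") — a common pitfall is constructing a fresh JobConf inside the mapper instead of using the one passed in. With Hadoop Streaming the same idea is visible in a runnable form: jobconf values are exposed to tasks as environment variables with dots replaced by underscores. A hedged sketch ("my.param"/"my_param" is a made-up name; the dot-to-underscore mapping is streaming-era behavior worth verifying on your version):

```python
import os

def mapper(line, env_name="my_param"):
    """Streaming-style mapper: a parameter passed on the command line
    (e.g. -jobconf my.param=VALUE) arrives as the environment variable
    my_param. Prefix each key with it, tab-separating key and value."""
    prefix = os.environ.get(env_name, "default")
    key, _, value = line.rstrip("\n").partition("\t")
    return "%s:%s\t%s" % (prefix, key, value)
```

The same configure()/get() round trip is what Chris's reply in the quoted thread describes for the Java side.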