How do I recover the namenode?

2014-07-07 Thread cho ju il
My cluster is 2 namenodes (HA cluster), 3 journalnodes, and n datanodes. I regularly back up the metadata (fsimage) file ( http://[namenode address]:50070/imagetransfer?getimage=1&txid=latest ). How do I recover the namenode by using the metadata (fsimage)?
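For reference, a minimal sketch of the kind of backup fetch described above, assuming the NameNode HTTP address is namenode:50070 (hypothetical host) and using the imagetransfer URL from the message; it simply issues an HTTP GET and writes the response to a local file:

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class FsImageBackup {
        public static void main(String[] args) throws Exception {
            // Hypothetical NameNode web address; substitute your own.
            String url = "http://namenode:50070/imagetransfer?getimage=1&txid=latest";
            try (InputStream in = new URL(url).openStream()) {
                // Save the latest fsimage to a local backup location.
                Files.copy(in, Paths.get("fsimage.backup"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }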

Re: heartbeat timeout doesn't work

2014-07-07 Thread Akira AJISAKA
The timeout value is set by the following formula: heartbeatExpireInterval = 2 * (heartbeatRecheckInterval) + 10 * 1000 * (heartbeatIntervalSeconds); Note that heartbeatRecheckInterval is set by the dfs.namenode.heartbeat.recheck-interval property (5*60*1000 [msec] by
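With the stock defaults (dfs.namenode.heartbeat.recheck-interval = 5*60*1000 msec and dfs.heartbeat.interval = 3 seconds), the formula works out to 630000 msec, i.e. a DataNode is marked dead after about 10.5 minutes. A small sketch of the arithmetic, with the defaults hard-coded rather than read from your configuration:

    public class HeartbeatExpiry {
        public static void main(String[] args) {
            // Stock defaults: dfs.namenode.heartbeat.recheck-interval = 5 min,
            // dfs.heartbeat.interval = 3 s.
            long heartbeatRecheckInterval = 5 * 60 * 1000;   // msec
            long heartbeatIntervalSeconds = 3;               // sec

            long heartbeatExpireInterval =
                    2 * heartbeatRecheckInterval + 10 * 1000 * heartbeatIntervalSeconds;

            // Prints 630000 ms, i.e. 10.5 minutes before a DataNode is marked dead.
            System.out.println(heartbeatExpireInterval + " ms");
        }
    }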

Re: define node

2014-07-07 Thread Kilaru, Sambaiah
A server with more than one hard drive is still one node only. Sam On 7/7/14, 9:50 AM, Adaryl Bob Wakefield, MBA adaryl.wakefi...@hotmail.com wrote: If you have a server with more than one hard drive, is that one node or n nodes, where n = the number of hard drives? B.

Managed File Transfer

2014-07-07 Thread Mohan Radhakrishnan
Hi, We used a commercial FT and scheduler tool in clustered mode. This was a traditional active-active cluster that supported multiple protocols like FTPS etc. Now I am interested in evaluating a distributed way of crawling FTP sites and downloading files using Hadoop. I thought

Re: How do I recover the namenode?

2014-07-07 Thread Raj K Singh
Please follow these steps: • Shut down all Hadoop daemons on all servers in the cluster. • Copy the NameNode metadata (the entire directory tree) onto the secondary NameNode. • Modify the core-site.xml file (see the sketch below), making the secondary NameNode server the new NameNode
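The core-site.xml change in the last step boils down to pointing fs.defaultFS at the replacement host. A minimal sketch (hostnames are hypothetical) that loads the cluster configuration from the classpath and shows which NameNode clients will now contact:

    import org.apache.hadoop.conf.Configuration;

    public class CheckDefaultFs {
        public static void main(String[] args) {
            // Loads core-site.xml / hdfs-site.xml found on the classpath.
            Configuration conf = new Configuration();
            // After the switch this should print the replacement NameNode,
            // e.g. hdfs://new-namenode:8020 (hypothetical hostname).
            System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        }
    }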

Re: Significance of PID files

2014-07-07 Thread Suresh Srinivas
When a daemon process is started, the process ID of the process is captured in a pid file. It is used for the following purposes: - During daemon startup, the existence of the pid file is used to determine whether the process is already running. - When a daemon is stopped, the Hadoop scripts send kill TERM
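As an illustration of the mechanism (not the actual hadoop-daemon.sh logic), a sketch that mimics the two uses described above: refuse to start if an existing pid file names a live process, and otherwise record our own pid so a later stop can signal it. The pid file path is hypothetical and the liveness check via /proc is Linux-only:

    import java.lang.management.ManagementFactory;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PidFileDemo {
        public static void main(String[] args) throws Exception {
            Path pidFile = Paths.get("/tmp/hadoop-demo.pid"); // hypothetical location

            // 1) Startup check: an existing pid file pointing at a live process
            //    means the daemon is already running.
            if (Files.exists(pidFile)) {
                String pid = new String(Files.readAllBytes(pidFile)).trim();
                if (Files.exists(Paths.get("/proc/" + pid))) { // Linux-only liveness check
                    System.err.println("Daemon already running as pid " + pid);
                    return;
                }
            }

            // 2) Record our own pid so a later stop can send it a TERM signal,
            //    much like kill -TERM $(cat pidfile) in the Hadoop scripts.
            String myPid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
            Files.write(pidFile, myPid.getBytes());
            System.out.println("Started with pid " + myPid);
        }
    }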

Re: Huge text file for Hadoop Mapreduce

2014-07-07 Thread Adaryl Bob Wakefield, MBA
http://www.cs.cmu.edu/~./enron/ Not sure of the uncompressed size, but pretty sure it’s over a gig. B. From: navaz Sent: Monday, July 07, 2014 6:22 PM To: user@hadoop.apache.org Subject: Huge text file for Hadoop Mapreduce Hi I am running the basic word count Mapreduce code. I have downloaded a

Re: How do I recover the namenode?

2014-07-07 Thread cho ju il
Thank you for the answer. However, my Hadoop version is 2.4.1, and the cluster does not have a secondary namenode. How do I recover the namenode (Hadoop version 2.4.1) using the metadata (fsimage)? -Original Message- From: Raj K Singh <rajkrrsi...@gmail.com> To:

Copy hdfs block from one data node to another

2014-07-07 Thread Yehia Elshater
Hi All, How can I copy a certain HDFS block (given the file name, start and end bytes) from one node to another node? Thanks Yehia
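HDFS does not expose a public API for copying an individual block directly (the NameNode places and replicates blocks itself), but given a file name and a byte range you can at least see which blocks are involved and which DataNodes hold them. A minimal sketch, assuming the default FileSystem is HDFS; the path, start offset, and range length are placeholder values:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocator {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/path/to/file");   // placeholder path
            long start = 0L;                          // start byte of the range
            long len = 128L * 1024 * 1024;            // (end - start), e.g. one block

            FileStatus status = fs.getFileStatus(file);
            // Each BlockLocation covers one block overlapping the byte range
            // and lists the DataNodes currently holding a replica of it.
            for (BlockLocation loc : fs.getFileBlockLocations(status, start, len)) {
                System.out.println("offset=" + loc.getOffset()
                        + " length=" + loc.getLength()
                        + " hosts=" + String.join(",", loc.getHosts()));
            }
        }
    }

If the goal is simply to get an extra copy of the data onto other nodes, raising the file's replication factor with fs.setReplication(file, newFactor) and letting the NameNode do the copying is the supported route, rather than moving blocks by hand.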

Re: Copy hdfs block from one data node to another

2014-07-07 Thread Chris Mawata
Can you outline why one would want to do that? The blocks are disposable, so it is strange to manipulate them directly. On Jul 7, 2014 8:16 PM, Yehia Elshater y.z.elsha...@gmail.com wrote: Hi All, How can I copy a certain HDFS block (given the file name, start and end bytes) from one node to

can i monitor all hadoop component from one box?

2014-07-07 Thread ch huang
hi, maillist: I want to check whether every Hadoop cluster component process is alive or dead. I do not know if I can do this from one machine, the way I check ZooKeeper nodes. Thanks

Re: can i monitor all hadoop component from one box?

2014-07-07 Thread Nitin Pawar
Look at Nagios or Ganglia for monitoring. On Tue, Jul 8, 2014 at 8:16 AM, ch huang justlo...@gmail.com wrote: hi, maillist: I want to check whether every Hadoop cluster component process is alive or dead. I do not know if I can do this from one machine, the way I check ZooKeeper nodes. Thanks -- Nitin
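Besides Nagios/Ganglia, each Hadoop daemon also exposes its metrics over HTTP at /jmx on its web UI port, so a single box can poll every component with plain HTTP checks. A rough sketch; the host:port list is a placeholder for your own cluster (NameNode 50070, DataNode 50075, ResourceManager 8088 are the Hadoop 2.x defaults):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ClusterPing {
        public static void main(String[] args) {
            // Placeholder daemon web UI addresses; adjust to your cluster.
            String[] endpoints = {
                    "http://namenode1:50070/jmx",
                    "http://datanode1:50075/jmx",
                    "http://resourcemanager:8088/jmx"
            };
            for (String e : endpoints) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(e).openConnection();
                    conn.setConnectTimeout(3000);
                    conn.setReadTimeout(3000);
                    int code = conn.getResponseCode();
                    System.out.println(e + " -> HTTP " + code + (code == 200 ? " (alive)" : ""));
                    conn.disconnect();
                } catch (Exception ex) {
                    System.out.println(e + " -> unreachable (" + ex.getMessage() + ")");
                }
            }
        }
    }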

Re: Huge text file for Hadoop Mapreduce

2014-07-07 Thread Du Lam
Configuration conf = getConf(); conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 1000); // you can set this to some small value (in bytes) to ensure your file will split across multiple mappers, provided the format is not an unsplittable format like .snappy. On Tue, Jul 8, 2014 at 7:32