Doubt Regarding QJM protocol - example 2.10.6 of Quorum-Journal Design document

2014-09-28 Thread Giridhar Addepalli
Hi All, I am going through the Quorum Journal Design document. It is mentioned in Section 2.8, the Accept Recovery RPC section: if the current on-disk log is missing, or a *different length* than the proposed recovery, the JN downloads the log from the provided URI, replacing any current copy of the
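The rule quoted from Section 2.8 can be sketched as a small decision function. This is an illustrative sketch only, assuming the behavior described in the design document; the class and method names are hypothetical, not the actual HDFS JournalNode code.

```java
// Hypothetical sketch of the Accept Recovery decision quoted above.
// Names are illustrative, not the real HDFS JournalNode implementation.
public class AcceptRecoverySketch {
    // Decide whether the JN must replace its on-disk log segment with the
    // one fetched from the proposed recovery's URI.
    static boolean mustDownload(Long onDiskLength, long proposedLength) {
        // Download if the local segment is missing entirely...
        if (onDiskLength == null) return true;
        // ...or if its length differs from the proposed recovery's length.
        return onDiskLength != proposedLength;
    }

    public static void main(String[] args) {
        System.out.println(mustDownload(null, 100)); // log missing -> true
        System.out.println(mustDownload(100L, 150)); // different length -> true
        System.out.println(mustDownload(150L, 150)); // same length -> false
    }
}
```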

Re: Significance of PID files

2014-07-06 Thread Giridhar Addepalli
At the daemon level. Thanks, Giridhar. On Fri, Jul 4, 2014 at 11:03 AM, Vijaya Narayana Reddy Bhoomi Reddy vijay.bhoomire...@gmail.com wrote: Vikas, Its main use is to keep one process at a time... like one datanode on any host - Can you please elaborate in more detail? What is meant
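The "one process at a time" role of a PID file can be shown with a self-contained sketch. This is not Hadoop's actual daemon shell script logic, just an illustration of the check such scripts perform: before starting a daemon, read the recorded PID and see whether that process is still alive.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

// Illustrative sketch (not Hadoop's hadoop-daemon.sh logic) of a PID-file
// check: refuse to start a second daemon if the recorded PID is still live.
public class PidFileCheck {
    // Returns true if a daemon appears to be already running per the PID file.
    static boolean alreadyRunning(Path pidFile) throws Exception {
        if (!Files.exists(pidFile)) return false;          // no file -> not running
        long pid = Long.parseLong(Files.readString(pidFile).trim());
        Optional<ProcessHandle> h = ProcessHandle.of(pid); // look up the process
        return h.isPresent() && h.get().isAlive();
    }

    public static void main(String[] args) throws Exception {
        // Record this JVM's own PID: the check should then report "running".
        Path f = Files.createTempFile("datanode", ".pid");
        Files.writeString(f, Long.toString(ProcessHandle.current().pid()));
        System.out.println(alreadyRunning(f)); // true: this JVM is alive
        Files.delete(f);
        System.out.println(alreadyRunning(f)); // false: PID file is gone
    }
}
```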

Regarding Quorum Journal protocol used in HDFS

2014-06-18 Thread Giridhar Addepalli
Hi, We are trying to understand the Quorum Journal Protocol (HDFS-3077). We came across a scenario in which the active namenode was terminated and the standby namenode took over as the new active namenode. But we could not understand why the active namenode got terminated in the first place. Scenario: We have 3
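One common reason an active namenode terminates itself under HDFS-3077 is losing its write quorum: an edit is durable only once a strict majority of JournalNodes acknowledge it, and a writer that cannot reach a majority must abort. The sketch below illustrates only that majority rule, under that assumption; the names are hypothetical, not the real QuorumJournalManager code.

```java
// Hedged sketch of the quorum rule behind HDFS-3077 (illustrative names,
// not the real QuorumJournalManager): a write succeeds only if a strict
// majority of JournalNodes acknowledge it; otherwise the writer aborts,
// which is one way an active NameNode comes to terminate itself.
public class QuorumRule {
    static boolean hasQuorum(int acks, int journalNodes) {
        return acks > journalNodes / 2; // strict majority required
    }

    public static void main(String[] args) {
        System.out.println(hasQuorum(2, 3)); // true: 2 of 3 is a majority
        System.out.println(hasQuorum(1, 3)); // false: writer must abort
    }
}
```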

Re: Regarding Quorum Journal protocol used in HDFS

2014-06-18 Thread Giridhar Addepalli
On Wed, Jun 18, 2014 at 10:08 PM, Giridhar Addepalli giridhar1...@gmail.com wrote: Hi, We are trying to understand the Quorum Journal Protocol (HDFS-3077). We came across a scenario in which the active namenode was terminated and the standby namenode took over as the new active namenode. But we could

Moving Secondary NameNode

2011-05-13 Thread Giridhar Addepalli
Hi, As of now, the primary namenode and secondary namenode are running on the same machine in our configuration. As both are RAM-heavy processes, we want to move the secondary namenode to another machine in the cluster. What does this move involve? Please refer me to some article which
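At a high level, on the 0.20/1.x line the move amounts to telling the new machine where the namenode's HTTP interface is (so it can fetch the fsimage and edits for checkpointing) and listing the new host in conf/masters on the namenode machine. The fragment below is a hedged sketch using the 0.20-era property names; verify the names and the host/port values against your own version's defaults before relying on it.

```xml
<!-- Illustrative fragment for the new secondary-namenode machine
     (0.20-era property names; values are placeholders). -->
<property>
  <name>dfs.http.address</name>
  <value>namenode-host:50070</value>
  <!-- where the secondary fetches the fsimage and edit log from -->
</property>
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/hadoop/namesecondary</value>
  <!-- local directory for checkpoint storage on the new machine -->
</property>
```

After updating conf/masters on the namenode host, the secondary can be started on the new machine (e.g. via the standard daemon start scripts).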

generating crc for files on hdfs

2011-04-27 Thread Giridhar Addepalli
Hi, How do I generate CRCs for files on HDFS? I copied files from HDFS to a remote machine and want to verify the integrity of the files (using the copyToLocal command; I tried the -crc option too, but it looks like the .crc files do not exist on HDFS). How should I proceed? Thanks, Giridhar.
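HDFS keeps its block checksums internally rather than as visible .crc files, which is why -crc can find nothing to copy. One workaround (a sketch, not the official procedure) is to compute your own CRC32 on both sides of the transfer and compare the values; later Hadoop releases also offer a filesystem-level checksum command, which is worth checking in your version's docs.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

// Sketch: compute a CRC32 over a local file so the same value can be
// compared on both ends of a copyToLocal transfer.
public class FileCrc32 {
    static long crc32Of(Path file) throws IOException {
        CRC32 crc = new CRC32();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                crc.update(buf, 0, n); // fold each chunk into the checksum
            }
        }
        return crc.getValue();
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("crc", ".dat");
        Files.writeString(f, "hello");
        System.out.println(Long.toHexString(crc32Of(f)));
        Files.delete(f);
    }
}
```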

is configure( ) method of mapred api same as setup( ) method of mapreduce api

2011-02-02 Thread Giridhar Addepalli
Hi, configure( ), the method present in the mapred api, is called once at the start of each map/reduce task. Is it the same with the setup( ) method present in the mapreduce api? Thanks, Giridhar.
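The lifecycle both questions describe can be shown with a self-contained toy (these are not the real Hadoop Mapper classes, just an illustration of the contract): one initialization call per task, then one map call per input record.

```java
import java.util.List;

// Toy illustration (not org.apache.hadoop.mapred/mapreduce classes) of the
// task lifecycle: setup/configure runs once, map runs once per record.
abstract class TinyMapper<T> {
    private int setupCalls = 0;
    protected void setup() { setupCalls++; }  // stand-in for setup()/configure()
    protected abstract void map(T record);

    // Stand-in for the framework's task run loop.
    final int run(List<T> records) {
        setup();                              // called exactly once per task
        for (T r : records) map(r);           // called once per input record
        return setupCalls;
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        TinyMapper<String> m = new TinyMapper<>() {
            @Override protected void map(String record) {
                System.out.println("map: " + record);
            }
        };
        int calls = m.run(List.of("a", "b", "c"));
        System.out.println("setup calls: " + calls); // 1, however many records
    }
}
```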

does Oozie work with new mapreduce api ?

2011-01-16 Thread Giridhar Addepalli
Hi, I am using Hadoop 0.20.2 and want to use Oozie for managing workflows. All of my mapreduce programs use the 'mapreduce' api instead of the deprecated 'mapred' api. I downloaded oozie-2.2.1+78, and I see the examples in it use the 'mapred' api. Is there any version of Oozie which
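Later Oozie releases can run new-API jobs when two flags are set in the map-reduce action's configuration; whether the oozie-2.2.1+78 build honors them should be verified against its documentation, so treat the fragment below as a hedged sketch. The mapper class name is hypothetical.

```xml
<!-- Illustrative fragment of an Oozie workflow.xml map-reduce action
     configuration for the new ('mapreduce') API. -->
<configuration>
  <property>
    <name>mapred.mapper.new-api</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.reducer.new-api</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.class</name>
    <value>com.example.MyMapper</value> <!-- hypothetical class name -->
  </property>
</configuration>
```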

trying to write output to a file whose name i can have control over

2010-06-10 Thread Giridhar Addepalli
Hi, I am using Hadoop 0.20.2. The MapReduce framework by default writes output to files named part-r-00000, etc. I want to write to a file with a different name, so I am trying to override the getDefaultWorkFile method of the TextOutputFormat class. I am getting the following error:

Problem with DBOutputFormat

2010-06-08 Thread Giridhar Addepalli
Hi, I am trying to write output to a MySQL DB and I am getting the following error: java.io.IOException at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:180) at

RE: Problem with DBOutputFormat

2010-06-08 Thread Giridhar Addepalli
PM, Giridhar Addepalli giridhar.addepa...@komli.com wrote: Hi, I am trying to write output to a MySQL DB and I am getting the following error: java.io.IOException at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:180