Re: WebHdfs API

2015-06-02 Thread Manoj Babu
Hi - You can invoke the REST service with an HTTP request using any HTTP client. For example, if it is a Java web application you can use the Apache Commons HTTP client. Thanks. On Tuesday, June 2, 2015, Carmen Manzulli carmenmanzu...@gmail.com wrote: Hi, I would like to know how to use WebHDFS
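A minimal sketch of such a call with plain java.net and no extra client library; the NameNode host, port, user name and file path below are assumptions:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsOpen {
        public static void main(String[] args) throws Exception {
            // WebHDFS OPEN operation; host, port, user and path are hypothetical.
            URL url = new URL("http://namenode-host:50070/webhdfs/v1/user/manoj/input.txt"
                    + "?op=OPEN&user.name=manoj");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // The NameNode answers OPEN with a redirect to a DataNode, which
            // HttpURLConnection follows automatically.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
            conn.disconnect();
        }
    }

The same GET could of course be issued with Apache Commons HttpClient instead; the URL layout (/webhdfs/v1/<path>?op=...) is what matters.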

Re: Reg HttpFS

2014-05-16 Thread Manoj Babu
http://hadoop.apache.org/docs/r2.4.0/hadoop-hdfs-httpfs/httpfs-default.html; look at the 'httpfs.authentication.*' properties. Thanks. On Sun, May 4, 2014 at 5:27 AM, Manoj Babu manoj...@gmail.com wrote: Hi, How to access files in HDFS using HttpFS that is protected by Kerberos? Kerberos

Reg HttpFS

2014-05-04 Thread Manoj Babu
Hi, How to access files in HDFS protected by Kerberos using HttpFS? Kerberos authentication works only where it is configured, e.g. on the edge node. If I am triggering the request from another system, then how do I authenticate? Kindly advise. Cheers! Manoj.
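One sketch of authenticating from an arbitrary machine, assuming the client holds a keytab; the HttpFS endpoint, principal and keytab path are all hypothetical here:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HttpFsKerberosClient {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // A keytab login lets the request originate from any host,
            // not only the edge node where a ticket cache already exists.
            UserGroupInformation.loginUserFromKeytab(
                    "manoj@EXAMPLE.COM", "/etc/security/keytabs/manoj.keytab");
            // HttpFS speaks the webhdfs protocol, by default on port 14000.
            FileSystem fs = FileSystem.get(
                    URI.create("webhdfs://httpfs-host:14000"), conf);
            for (FileStatus status : fs.listStatus(new Path("/user/manoj"))) {
                System.out.println(status.getPath());
            }
        }
    }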

issue with DBInputFormat

2014-03-07 Thread Manoj Babu
Hi, When using DBInputFormat to unload data from a table to HDFS, I configured 6 map tasks to execute, but the 0th map task alone unloads the whole table's data while the remaining 5 tasks run and complete normally. Please find my observations from debugging. Chunk size=855565 Input Splits: For
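For reference, a sketch of the relevant job setup (new API; the driver, table and column names are hypothetical). One thing worth checking in this situation is that a real orderBy column is passed to setInput, since the per-split windows over the table are only reliable against a deterministic ordering:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class DbUnload {
        // Minimal record type; the columns are hypothetical.
        public static class MyRecord implements Writable, DBWritable {
            long id;
            String term;
            public void readFields(ResultSet rs) throws SQLException {
                id = rs.getLong(1); term = rs.getString(2);
            }
            public void write(PreparedStatement ps) throws SQLException {
                ps.setLong(1, id); ps.setString(2, term);
            }
            public void readFields(DataInput in) throws IOException {
                id = in.readLong(); term = in.readUTF();
            }
            public void write(DataOutput out) throws IOException {
                out.writeLong(id); out.writeUTF(term);
            }
        }

        public static void configure(Job job) throws IOException {
            DBConfiguration.configureDB(job.getConfiguration(),
                    "com.mysql.jdbc.Driver",              // hypothetical driver
                    "jdbc:mysql://db-host:3306/mydb", "user", "pass");
            // The orderBy column ("id" here) is what lets each map task
            // page through its own slice of the table.
            DBInputFormat.setInput(job, MyRecord.class,
                    "search_terms", null /* conditions */, "id" /* orderBy */,
                    "id", "term");
            job.getConfiguration().setInt("mapreduce.job.maps", 6);
        }
    }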

Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Manoj Babu
Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock acquired by nodename 7518@localhost.localdomain - stop all running instances and then do the steps. Cheers! Manoj. On Tue, Dec 24, 2013 at 8:48 PM, Sitaraman Vilayannur vrsitaramanietfli...@gmail.com wrote: I did press

Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Manoj Babu
Try removing the file /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock Cheers! Manoj. On Tue, Dec 24, 2013 at 9:03 PM, Manoj Babu manoj...@gmail.com wrote: Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock acquired by nodename 7518

Rewriting Ab-Initio scripts using Hadoop MapReduce

2013-12-23 Thread Manoj Babu
Hi All, Can anybody share their experience of rewriting Ab-Initio scripts using Hadoop MapReduce? Cheers! Manoj.

Re: Unsubscribe

2013-03-27 Thread Manoj Babu
Hi Naveen, you have to send a mail to user-unsubscr...@hadoop.apache.org. For more info: http://hadoop.apache.org/mailing_lists.html Cheers! Manoj. On Thu, Mar 28, 2013 at 9:54 AM, Naveen Mahale nav...@zinniasystems.com wrote: unsubscribe

Re: Too many open files error with YARN

2013-03-21 Thread Manoj Babu
In the meantime you can quickly compare the source of the class with the patch provided in the bug. Cheers! Manoj. On Thu, Mar 21, 2013 at 12:13 PM, Krishna Kishore Bonagiri write2kish...@gmail.com wrote: Hi Hemanth, Sandy, Thanks for your reply. Yes, that indicates it is in close wait

Re: reg memory allocation failed

2013-03-08 Thread Manoj Babu
Hi, I am using version 1.6. Cheers! Manoj. On Fri, Mar 8, 2013 at 7:32 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi Manoj, It's related to your JVM. Which version are you using? JM 2013/3/8 Manoj Babu manoj...@gmail.com: Team, I am getting this issue when reducer

Re: reg memory allocation failed

2013-03-08 Thread Manoj Babu
Thanks in advance. Cheers! Manoj. On Fri, Mar 8, 2013 at 7:48 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi Manoj, Oracle 1.6? OpenJDK 1.6? Which 1.6 release? The 24? What is java -version giving you? 2013/3/8 Manoj Babu manoj...@gmail.com: Hi, I am using version 1.6

Re: reg memory allocation failed

2013-03-08 Thread Manoj Babu
Hi Jean, I don't have those rights. Is there any other way to find out? On 8 Mar 2013 20:13, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi Manoj, Do you have the required rights to test with another JVM? Can you test the Oracle JVM Java SE 6 Update 43? JM 2013/3/8 Manoj Babu manoj

Re: Reg job tracker page

2013-02-23 Thread Manoj Babu
On Sat, Feb 23, 2013 at 4:20 PM, Manoj Babu manoj...@gmail.com wrote: Hi All, What does this identifier mean on the job tracker page? State: RUNNING Started: Thu Feb 21 12:22:03 CST 2013 Version: 0.20.2-cdh3u1, bdafb1dbffd0d5f2fbc6ee022e1c8df6500fd638 Compiled: Mon Jul 18 09:40

Re: Which class or method is called first when i run a command in hadoop

2013-02-19 Thread Manoj Babu
Hi Nikhil, Have a look inside the script file named hadoop in the Hadoop bin folder, for example: C:\cygwin\home\hadoop-0.20.2\bin. Sample code: elif [ $COMMAND = jar ] ; then CLASS=org.apache.hadoop.util.RunJar Cheers! Manoj. On Tue, Feb 19, 2013 at 4:53 PM, Agarwal, Nikhil

Re: Reg Too many fetch-failures Error

2013-02-01 Thread Manoj Babu
data? Did you have other jobs running on your cluster? Hope that helps. Regards, Vijay. From: Manoj Babu [mailto:manoj...@gmail.com] Sent: 01 February 2013 15:09 To: user@hadoop.apache.org Subject: Reg Too many fetch-failures Error

Re: reg max map task config

2013-01-27 Thread Manoj Babu
to a TaskTracker daemon and therefore needs to be applied to its local mapred-site.xml, and the daemon has to be restarted for new values to take effect. On Mon, Jan 28, 2013 at 10:36 AM, Manoj Babu manoj...@gmail.com wrote: Hi All, I am trying to override the value

Re: How to troubleshoot OutOfMemoryError

2012-12-22 Thread Manoj Babu
David, I faced the same issue due to excessive logging filling the task tracker log folder. Cheers! Manoj. On Sat, Dec 22, 2012 at 9:10 PM, Stephen Fritz steph...@cloudera.com wrote: Troubleshooting OOMs in the map/reduce tasks can be tricky, see page 118 of Hadoop

Re: How to submit Tool jobs programatically in parallel?

2012-12-13 Thread Manoj Babu
David, Try the below: instead of runJob() you can use submitJob(). JobClient jc = new JobClient(job); jc.submitJob(job); Cheers! Manoj. On Fri, Dec 14, 2012 at 10:09 AM, David Parks davidpark...@yahoo.com wrote: I'm submitting unrelated jobs programmatically (using AWS EMR) so they
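Expanded slightly, a sketch of what that non-blocking submission looks like in the old (mapred) API, assuming two JobConf objects built elsewhere:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class ParallelSubmit {
        public static void runBoth(JobConf jobA, JobConf jobB) throws Exception {
            JobClient client = new JobClient(jobA);
            // submitJob() returns a handle immediately, unlike the blocking
            // JobClient.runJob(), so both jobs run concurrently.
            RunningJob a = client.submitJob(jobA);
            RunningJob b = client.submitJob(jobB);
            while (!a.isComplete() || !b.isComplete()) {
                Thread.sleep(5000); // poll until both finish
            }
            System.out.println("A ok: " + a.isSuccessful()
                    + ", B ok: " + b.isSuccessful());
        }
    }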

Re: How to submit Tool jobs programatically in parallel?

2012-12-13 Thread Manoj Babu
use this approach. I use this for the jobs I control of course, but the problem is things like distcp where I don't control the configuration. Dave. From: Manoj Babu [mailto:manoj...@gmail.com] Sent: Friday, December 14, 2012 12:57 PM To: user

Reg: Map output copy failure

2012-12-10 Thread Manoj Babu
Hi All, I got the below exception. Is the issue related to https://issues.apache.org/jira/browse/MAPREDUCE-1182? I am using CDH3u1. 2012-12-10 06:22:39,688 FATAL org.apache.hadoop.mapred.Task: attempt_201211120903_9197_r_24_0 : Map output copy failure : java.lang.OutOfMemoryError: Java heap

Re: Reg: Map output copy failure

2012-12-10 Thread Manoj Babu
().maxMemory() * maxInMemCopyUse, Integer.MAX_VALUE); Does cdh3u1 not hold this fix? Kindly advise. Cheers! Manoj. On Mon, Dec 10, 2012 at 6:39 PM, Manoj Babu manoj...@gmail.com wrote: at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput
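Whether or not CDH3u1 carries the MAPREDUCE-1182 patch, a common workaround for this shuffle-phase OOM is to shrink the in-memory copy buffer or grow the reduce-task heap; a sketch with MR1-era property names (the 0.3 value is an assumption to tune):

    import org.apache.hadoop.mapred.JobConf;

    public class ShuffleTuning {
        public static void apply(JobConf conf) {
            // Fraction of the reducer heap used for in-memory map-output
            // copies (the maxInMemCopyUse factor above); the MR1 default is 0.70.
            conf.setFloat("mapred.job.shuffle.input.buffer.percent", 0.3f);
            // Alternatively, give the reduce tasks more heap.
            conf.set("mapred.child.java.opts", "-Xmx1024m");
        }
    }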

Re: Reg: No space left on device Exception

2012-12-06 Thread Manoj Babu
don't have enough space in your TaskTracker/DataNode node. Did you check the available space on the dedicated hard drives that host your data on your TT/DN machine? On Fri 07 Dec 2012 12:38:27 AM CST, Manoj Babu wrote: Hi All, I am getting the exception as below but the job continues running

Re: guessing number of reducers.

2012-11-21 Thread Manoj Babu
From: Manoj Babu manoj...@gmail.com Date: Wed, 21 Nov 2012 23:28:00 +0530 To: user@hadoop.apache.org Cc: bejoy.had...@gmail.com Subject: Re: guessing number of reducers. Hi, How to set the no of reducers in the job conf dynamically? For example
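A sketch of one way to pick the reducer count from the input size at submit time; the one-reducer-per-GB ratio is just an assumed heuristic to tune:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class ReducerSizing {
        public static void size(Job job, Path input) throws Exception {
            FileSystem fs = input.getFileSystem(job.getConfiguration());
            long inputBytes = fs.getContentSummary(input).getLength();
            // Roughly one reducer per GB of input, never less than one.
            int reducers = (int) Math.max(1, inputBytes / (1024L * 1024 * 1024));
            job.setNumReduceTasks(reducers);
        }
    }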

Re: Tools for extracting data from hadoop logs

2012-10-29 Thread Manoj Babu
A very useful one, thanks Binglin for sharing it! Cheers! Manoj. On Tue, Oct 30, 2012 at 8:54 AM, Binglin Chang decst...@gmail.com wrote: Hi, I think you want to analyze Hadoop job logs in the jobtracker history folder? These logs are in a centralized folder and don't need tools like flume or

Re: extracting lzo compressed files

2012-10-21 Thread Manoj Babu
fs -text fileName. Provided you have the LZO codec within the property 'io.compression.codecs' in core-site.xml. A 'hadoop fs -ls' command would itself display the file size. Regards, Bejoy KS. Sent from handheld, please excuse typos. From: Manoj Babu manoj

Re: extracting lzo compressed files

2012-10-21 Thread Manoj Babu
Hi Bejoy, I am sorry. I am able to see the file size of the compressed one, but I am trying to find what the size of the file would be if it were not compressed, without extracting the whole set of files. Cheers! Manoj. On Sun, Oct 21, 2012 at 3:28 PM, Manoj Babu manoj...@gmail.com wrote: Hi Bejoy
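A sketch of one way to get that number without materializing the output: stream each file through the registered codec and count the decompressed bytes. It still reads and decompresses everything once, and it assumes the LZO codec is registered in io.compression.codecs:

    import java.io.InputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class UncompressedSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path(args[0]); // e.g. /logs/part-00000.lzo
            FileSystem fs = path.getFileSystem(conf);
            // Pick the codec from the file extension.
            CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(path);
            long total = 0;
            byte[] buf = new byte[64 * 1024];
            try (InputStream in = codec.createInputStream(fs.open(path))) {
                int n;
                while ((n = in.read(buf)) > 0) {
                    total += n; // count decompressed bytes, discard the data
                }
            }
            System.out.println(path + " uncompressed: " + total + " bytes");
        }
    }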

Re: extracting lzo compressed files

2012-10-21 Thread Manoj Babu
. On Sun, Oct 21, 2012 at 6:59 PM, Manoj Babu manoj...@gmail.com wrote: Hi Bejoy, I am sorry. I am able to see the file size of the compressed one, but I am trying to find what the size of the file would be if it were not compressed, without extracting the whole set of files. Cheers! Manoj. On Sun

Re: Hadoop and CUDA

2012-10-16 Thread Manoj Babu
Hi, If it is a runnable jar you are creating from NetBeans, check that only the necessary dependencies are added. Cheers! Manoj. On Tue, Oct 16, 2012 at 11:38 AM, sudha sadhasivam sudhasadhasi...@yahoo.com wrote: Hello When we create a jar file for hadoop programs from the command prompt it runs

Re: How to not output the key

2012-09-11 Thread Manoj Babu
Hi, You have to specify the reducer output key type as NullWritable. Cheers! Manoj. On Wed, Sep 12, 2012 at 7:43 AM, Nataraj Rashmi - rnatar rashmi.nata...@acxiom.com wrote: Hello, I have a simple map/reduce program to merge input files into one big output file. My question is,
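A sketch of a value-only reducer along those lines (Text in/out is assumed for illustration):

    import java.io.IOException;

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // NullWritable as the output key suppresses the key column entirely.
    public class ValueOnlyReducer extends Reducer<Text, Text, NullWritable, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                context.write(NullWritable.get(), value);
            }
        }
    }
    // In the driver: job.setOutputKeyClass(NullWritable.class);
    //                job.setOutputValueClass(Text.class);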

Re: Reg: parsing all files file append

2012-09-10 Thread Manoj Babu
to authoritatively comment on 'the production readiness of append()'. :) Regards Bejoy KS On Mon, Sep 10, 2012 at 11:03 AM, Manoj Babu manoj...@gmail.com wrote: Thank you Bejoy. Is file append production stable? Cheers! Manoj. On Sun, Sep 9, 2012 at 10:19 PM, Bejoy KS

Reg: parsing all files file append

2012-09-09 Thread Manoj Babu
Hi All, I have two questions; providing info on them will be helpful. 1. I am using Hadoop to analyze logs and find top-n search term metrics. If any new log file is added to HDFS then we run the job again to find the metrics. Daily we will be getting log files and we are parsing

Re: Reg: parsing all files file append

2012-09-09 Thread Manoj Babu
etc. Every day do the processing, get the results and aggregate the same with the previously aggregated results till date. Regards Bejoy KS Sent from handheld, please excuse typos. From: Manoj Babu manoj...@gmail.com Date: Sun, 9 Sep 2012 21:28:54 +0530

Re: How to debug

2012-08-26 Thread Manoj Babu
,address=127.0.0.1:9987,suspend=y</value> </property> Now add a Remote Java Application run configuration in Eclipse. The Connection Type should be Standard (Socket Listen), and the port should be 9987. Happy debugging! Yaron On Sat, Aug 25, 2012 at 10:48 AM, Manoj Babu dmanojb...@gmail.com wrote

Re: How to debug

2012-08-26 Thread Manoj Babu
No problem with it Harsh, I didn't find it and so I asked. On 26 Aug 2012 17:39, Harsh J ha...@cloudera.com wrote:

How to debug

2012-08-25 Thread Manoj Babu
Hi All, how to debug MapReduce programs in pseudo-distributed mode? Thanks in advance.
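Besides the remote-debugging setup discussed in the replies above, one common approach is to force the job through the LocalJobRunner so everything runs in a single JVM and ordinary IDE breakpoints work; a sketch with MR1-era property names:

    import org.apache.hadoop.conf.Configuration;

    public class LocalDebugConf {
        public static Configuration localConf() {
            Configuration conf = new Configuration();
            // Run the whole job in-process instead of on the pseudo cluster.
            conf.set("mapred.job.tracker", "local");
            conf.set("fs.default.name", "file:///"); // local FS for input/output
            return conf;
        }
    }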

Reg: when failures on writing to DB from map\reduce

2012-08-23 Thread Manoj Babu
Hi All, In Sqoop: When exporting from HDFS to a DB, if an export map task fails due to these or other reasons, it will cause the export job to fail. The results of a failed export are undefined. Each export map task operates in a separate transaction. Furthermore, individual map tasks commit their

Re: doubt on Hadoop job submission process

2012-08-13 Thread Manoj Babu
On Mon, Aug 13, 2012 at 4:10 PM, Harsh J ha...@cloudera.com wrote: Hi Manoj, Reply inline. On Mon, Aug 13, 2012 at 3:42 PM, Manoj Babu manoj...@gmail.com wrote: Hi All, The normal Hadoop job submission process involves: Checking the input and output specifications of the job
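For reference, a minimal driver that exercises exactly those steps; the identity Mapper keeps the sketch self-contained, and the paths come from the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SubmitExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "submit-example");
            job.setJarByClass(SubmitExample.class);
            job.setMapperClass(Mapper.class); // identity mapper
            job.setNumReduceTasks(0);         // map-only, to stay minimal
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist
            // waitForCompletion() walks the steps above: validate the output
            // spec, compute input splits, stage the jar and configuration,
            // then submit and poll for progress.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }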

Compare Hadoop and Pig Map\Reduce

2012-07-31 Thread Manoj Babu
Hi, It would be great if any of you could compare Pig and Hadoop MapReduce. When should we go for Hadoop or Pig? I love to program using Java but people were arguing that it can be easily achieved in Pig with very few lines of code, even my boss too... I am a fresh developer for Hadoop. Could kindly

Re: Compare Hadoop and Pig Map\Reduce

2012-07-31 Thread Manoj Babu
. Hope it is fine. Thank you! With Regards, Abhishek S On Tue, Jul 31, 2012 at 10:37 PM, Manoj Babu manoj...@gmail.com wrote: Hi, It would be great if any of you could compare Pig and Hadoop MapReduce. When should we go for Hadoop or Pig? I love to program using Java but people were arguing

How to use CombineFileInputFormat in Hadoop?

2012-07-12 Thread Manoj Babu
Gentles, I want to use the CombineFileInputFormat of Hadoop 0.20.0 / 0.20.2 such that it processes 1 file per record and also doesn't compromise on data locality (which it normally takes care of). It is mentioned in Tom White's Hadoop Definitive Guide but he has not shown how to do it.
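In 0.20.x, CombineFileInputFormat is abstract and you must supply your own RecordReader for it; later Hadoop releases ship a ready-made CombineTextInputFormat, with which the setup reduces to a sketch like this (the 128 MB cap is an assumption to tune):

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

    public class CombineSetup {
        public static void apply(Job job) {
            job.setInputFormatClass(CombineTextInputFormat.class);
            // Pack many small files into splits of at most 128 MB; files are
            // grouped node-first, then rack-first, so locality is largely kept.
            CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
        }
    }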

Re: How to use CombineFileInputFormat in Hadoop?

2012-07-12 Thread Manoj Babu
Ya Harsh, it has been posted 2 months back with no response; if you google for CombineFileInputFormat you will surely see it. The Mapreduce user group rocks! Thanks for the response. On 12 Jul 2012 21:25, Harsh J ha...@cloudera.com wrote:

Re: Mapper basic question

2012-07-11 Thread Manoj Babu
at 6:06 PM, Arun C Murthy a...@hortonworks.com wrote: Take a look at CombineFileInputFormat - this will create 'meta splits' which include multiple small splits, thus reducing the #maps which are run. Arun On Jul 11, 2012, at 5:29 AM, Manoj Babu wrote: Hi, The no of mappers depends

Re: Mapper basic question

2012-07-11 Thread Manoj Babu
, please excuse typos. From: Manoj Babu manoj...@gmail.com Date: Wed, 11 Jul 2012 18:17:41 +0530 To: mapreduce-user@hadoop.apache.org ReplyTo: mapreduce-user@hadoop.apache.org Subject: Re: Mapper basic question Hi Tariq / Arun, The no of blocks (splits

Re: How to change name node storage directory?

2012-07-10 Thread Manoj Babu
one to preserve data, or if you wish to start again from scratch, you'll need to run a format of the NameNode and scrub your DataNode directories clean. On Tue, Jul 10, 2012 at 11:24 AM, Manoj Babu manoj...@gmail.com wrote: Hi, It would be great if you could provide an answer for the below

Re: issue with map running time

2012-07-10 Thread Manoj Babu
, 2012 at 10:57 AM, Manoj Babu manoj...@gmail.com wrote: Hi Bobby, I have faced a similar issue. In the job the block size is 64MB, the no of maps created is 656, the no of files uploaded to HDFS is 656, and each file's size is 11MB. I assume that if small files exist it will not be able

Re: Basic question on how reducer works

2012-07-09 Thread Manoj Babu
Hi, It would be more helpful if you could give more details on the doubts below. 1. How does the partitioner know which reducer needs to be called? 2. When we use more than one reducer, the output gets separated. For what scenario do we have to go for multiple reducers? Cheers! Manoj.
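On (1): the framework calls Partitioner.getPartition(key, value, numReduceTasks) for every map output record, and the returned index is the reducer that record goes to; the default HashPartitioner returns (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks. A sketch of a custom partitioner (the first-letter scheme is purely illustrative):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            if (key.getLength() == 0) {
                return 0;
            }
            // Route keys by first character so related keys land in the
            // same reducer and therefore the same output file.
            return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
        }
    }

On (2): multiple reducers are how the reduce phase scales out; each one receives its own partition of the keys and writes its own part-r-NNNNN output file.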

Re: issue with map running time

2012-07-09 Thread Manoj Babu
Hi Bobby, I have faced a similar issue. In the job the block size is 64MB, the no of maps created is 656, the no of files uploaded to HDFS is 656, and each file's size is 11MB. I assume that if small files exist it will not be able to group. Could you kindly clarify it? Cheers! Manoj. On

Re: Basic question on how reducer works

2012-07-09 Thread Manoj Babu
already explained earlier. For (2) - for what scenario do you _not_ want multiple reducers handling each partition uniquely, when it is possible to scale that way? On Mon, Jul 9, 2012 at 11:22 PM, Manoj Babu manoj...@gmail.com wrote: Hi, It would be more helpful if you could give more