Hi - You can invoke the REST service with an HTTP request using any HTTP
client. For example, if it is a Java web application you can use the Apache
Commons HttpClient.
Thanks.
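For illustration, a minimal sketch of such a call against the WebHDFS REST API using Apache HttpClient 4.x; the NameNode host, port, file path, and user name below are placeholders, not values from this thread.

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class WebHdfsOpenExample {
    public static void main(String[] args) throws Exception {
        // Placeholder NameNode host/port and file path; OPEN redirects to a DataNode,
        // and HttpClient follows that redirect automatically for GET requests.
        String url = "http://namenode.example.com:50070/webhdfs/v1/tmp/sample.txt"
                + "?op=OPEN&user.name=hdfs";
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(new HttpGet(url))) {
            System.out.println(response.getStatusLine());
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}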
On Tuesday, June 2, 2015, Carmen Manzulli carmenmanzu...@gmail.com wrote:
Hi,
I would like to know how to use WebHDFS
See http://hadoop.apache.org/docs/r2.4.0/hadoop-hdfs-httpfs/httpfs-default.html
and look at the 'httpfs.authentication.*' properties.
Thanks.
On Sun, May 4, 2014 at 5:27 AM, Manoj Babu manoj...@gmail.com wrote:
Hi,
How to access files in HDFS using HttpFS that is protected by Kerberos?
Kerberos
Hi,
How to access files in HDFS using HttpFS that is protected by Kerberos?
Kerberos authentication works only where it is configured, e.g. on the edge node.
If I am triggering the request from another system, then how do I authenticate?
Kindly advise.
Cheers!
Manoj.
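As one possible approach (a sketch only, not something confirmed in this thread): from a remote host you can log in from a keytab with Hadoop's UserGroupInformation and then read through the HttpFS endpoint via the webhdfs:// filesystem, which negotiates SPNEGO with those credentials. The principal, keytab path, host, and file paths below are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HttpFsKerberosRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab; the remote host must be able to reach the KDC.
        UserGroupInformation.loginUserFromKeytab("etl@EXAMPLE.COM",
                "/etc/security/keytabs/etl.keytab");

        // 14000 is the default HttpFS port; the paths are placeholders.
        FileSystem fs = FileSystem.get(
                URI.create("webhdfs://httpfs-host.example.com:14000"), conf);
        fs.copyToLocalFile(new Path("/data/sample.txt"), new Path("/tmp/sample.txt"));
        fs.close();
    }
}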
Hi,
When using DBInputFormat to unload data from a table to HDFS, I configured 6
map tasks to execute, but the 0th map task alone unloads the whole data from
the table while the remaining 5 tasks were running properly.
Please find my observations from debugging below.
Chunk size=855565
Input Splits:
For
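For reference, a minimal sketch of how DBInputFormat is usually wired up with the newer mapreduce API; the JDBC driver, connection URL, table, columns, and the MyRecord DBWritable class are placeholders, not details from this report. The requested map count is what DBInputFormat divides the row count by to size each chunk, so skewed splits are usually worth checking against this value and the ORDER BY column.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;

// Inside a job driver; mapper/output setup omitted for brevity.
Configuration conf = new Configuration();
// Placeholder JDBC connection details.
DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://dbhost:3306/mydb", "user", "pass");
Job job = Job.getInstance(conf, "db-unload");
job.setInputFormatClass(DBInputFormat.class);
// DBInputFormat chunks the table by (row count / requested maps);
// older releases read mapred.map.tasks instead of mapreduce.job.maps.
job.getConfiguration().setInt("mapreduce.job.maps", 6);
// MyRecord is a placeholder DBWritable; "id" is the ORDER BY / split column.
DBInputFormat.setInput(job, MyRecord.class, "my_table",
        null /* conditions */, "id" /* orderBy */, "id", "name");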
Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 7518@localhost.localdomain
Stop all running instances and then do the steps.
Cheers!
Manoj.
On Tue, Dec 24, 2013 at 8:48 PM, Sitaraman Vilayannur
vrsitaramanietfli...@gmail.com wrote:
I did press
Try removing the file /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
Cheers!
Manoj.
On Tue, Dec 24, 2013 at 9:03 PM, Manoj Babu manoj...@gmail.com wrote:
Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 7518
Hi All,
Can anybody share their experience on Rewriting Ab-Initio scripts using
Hadoop MapReduce?
Cheers!
Manoj.
Hi Naveen,
You have to send a mail to user-unsubscr...@hadoop.apache.org.
For more info: http://hadoop.apache.org/mailing_lists.html
Cheers!
Manoj.
On Thu, Mar 28, 2013 at 9:54 AM, Naveen Mahale nav...@zinniasystems.com wrote:
unsubscribe
In the meantime you can quickly compare the source of the class
with the patch provided in the bug.
Cheers!
Manoj.
On Thu, Mar 21, 2013 at 12:13 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi Hemanth, Sandy,
Thanks for your reply. Yes, that indicates it is in close wait
Hi,
I am using version 1.6.
Cheers!
Manoj.
On Fri, Mar 8, 2013 at 7:32 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi Manoj,
It's related to your JVM. Which version are you using?
JM
2013/3/8 Manoj Babu manoj...@gmail.com:
Team,
I am getting this issue when reducer
Thanks in advance.
Cheers!
Manoj.
On Fri, Mar 8, 2013 at 7:48 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi Manoj,
Oracle 1.6? OpenJDK 1.6?
Which 1.6 release? The 24?
What is java -version giving you?
2013/3/8 Manoj Babu manoj...@gmail.com:
Hi,
I am using version 1.6
Hi Jean,
I don't have those rights. Is there any way to find out?
On 8 Mar 2013 20:13, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote:
Hi Manoj,
Do you have the required rights to test with another JVM? Can you test
the Oracle JVM Java SE 6 Update 43?
JM
2013/3/8 Manoj Babu manoj
)
On Sat, Feb 23, 2013 at 4:20 PM, Manoj Babu manoj...@gmail.com wrote:
Hi All,
What does this identifier mean in the job tracker page?
State: RUNNING
Started: Thu Feb 21 12:22:03 CST 2013
Version: 0.20.2-cdh3u1, bdafb1dbffd0d5f2fbc6ee022e1c8df6500fd638
Compiled: Mon Jul 18 09:40
Hi Nikhil,
Have a look inside the script file named 'hadoop' inside the Hadoop bin folder.
for example:
C:\cygwin\home\hadoop-0.20.2\bin
sample code:
elif [ $COMMAND = jar ] ; then
CLASS=org.apache.hadoop.util.RunJar
Cheers!
Manoj.
On Tue, Feb 19, 2013 at 4:53 PM, Agarwal, Nikhil
data? Did you have other jobs running on your cluster?
Hope that helps
Regards
Vijay
From: Manoj Babu [mailto:manoj...@gmail.com]
Sent: 01 February 2013 15:09
To: user@hadoop.apache.org
Subject: Reg Too many fetch-failures Error
to a TaskTracker daemon and therefore needs to be applied to
its local mapred-site.xml, and the daemon has to be restarted
for the new values to take effect.
On Mon, Jan 28, 2013 at 10:36 AM, Manoj Babu manoj...@gmail.com wrote:
Hi All,
I am trying to override the value
David,
I faced the same issue due to too much logging filling up the task
tracker log folder.
Cheers!
Manoj.
On Sat, Dec 22, 2012 at 9:10 PM, Stephen Fritz steph...@cloudera.com wrote:
Troubleshooting OOMs in the map/reduce tasks can be tricky, see page 118
of Hadoop
David,
You can try submitJob() instead of runJob(), like below:
JobClient jc = new JobClient(job);
jc.submitJob(job);
Cheers!
Manoj.
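A slightly fuller sketch with the old mapred API (MyDriver is a placeholder class name); unlike JobClient.runJob(conf), submitJob() returns immediately with a RunningJob handle you can poll:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

JobConf conf = new JobConf(MyDriver.class);   // MyDriver is a placeholder driver class
JobClient jc = new JobClient(conf);
RunningJob running = jc.submitJob(conf);      // non-blocking, unlike JobClient.runJob(conf)
while (!running.isComplete()) {
    Thread.sleep(5000);                       // poll until the job finishes
}
System.out.println("Succeeded: " + running.isSuccessful());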
On Fri, Dec 14, 2012 at 10:09 AM, David Parks davidpark...@yahoo.com wrote:
I'm submitting unrelated jobs programmatically (using AWS EMR) so they
use
this approach. I use this for the jobs I control of course, but the problem
is things like distcp where I don’t control the configuration.
Dave
From: Manoj Babu [mailto:manoj...@gmail.com]
Sent: Friday, December 14, 2012 12:57 PM
To: user
Hi All,
I got the exception below. Is the issue related to
https://issues.apache.org/jira/browse/MAPREDUCE-1182 ?
I am using CDH3u1.
2012-12-10 06:22:39,688 FATAL org.apache.hadoop.mapred.Task:
attempt_201211120903_9197_r_24_0 : Map output copy failure :
java.lang.OutOfMemoryError: Java heap
().maxMemory() * maxInMemCopyUse,
Integer.MAX_VALUE);
Does CDH3u1 not contain this fix?
Kindly advise.
Cheers!
Manoj.
On Mon, Dec 10, 2012 at 6:39 PM, Manoj Babu manoj...@gmail.com wrote:
at
org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput
don't have enough space in your TaskTracker/DataNode
node.
Did you check available space on your dedicated hard drives to host your
data in your TT/DN machine?
On Fri 07 Dec 2012 12:38:27 AM CST, Manoj Babu wrote:
Hi All,
I am getting the exception below but the job continues running.
--
From: Manoj Babu manoj...@gmail.com
Date: Wed, 21 Nov 2012 23:28:00 +0530
To: user@hadoop.apache.org
Cc: bejoy.had...@gmail.com
Subject: Re: guessing number of reducers.
Hi,
How do I set the number of reducers in the job conf dynamically?
For example
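As an illustration (the original example is cut off in the archive), the count can be computed at submission time and set on the Job; the one-reducer-per-GB rule and the input path below are only assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
Path input = new Path("/data/logs");   // placeholder input path
long bytes = FileSystem.get(conf).getContentSummary(input).getLength();
// Roughly one reducer per GB of input -- an illustrative sizing rule only.
int reducers = (int) Math.max(1, bytes / (1024L * 1024 * 1024));
Job job = Job.getInstance(conf, "dynamic-reducer-count");
job.setNumReduceTasks(reducers);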
Very useful, thanks Binglin for sharing it!
Cheers!
Manoj.
On Tue, Oct 30, 2012 at 8:54 AM, Binglin Chang decst...@gmail.com wrote:
Hi,
I think you want to analyze hadoop job logs in jobtracker history folder?
These logs are in a centralized folder and don't need tools like flume or
fs -text fileName
Provided you have lzo codec within the property 'io.compression.codecs' in
core-site.xml
A 'hadoop fs -ls' command would itself display the file size.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
--
From: Manoj Babu manoj
Hi Bejoy,
I am sorry. I am able to see the file size of the compressed one, but I am
trying to find what the size of the file would be if it were not compressed,
without extracting the whole set of files.
Cheers!
Manoj.
On Sun, Oct 21, 2012 at 3:28 PM, Manoj Babu manoj...@gmail.com wrote:
Hi Bejoy
.
On Sun, Oct 21, 2012 at 6:59 PM, Manoj Babu manoj...@gmail.com wrote:
Hi Bejoy,
I am sorry. I am able to see the file size of the compressed one, but I am
trying to find what the size of the file would be if it were not compressed,
without extracting the whole set of files.
Cheers!
Manoj.
On Sun
Hi,
If it is a runnable jar you are creating from NetBeans, check that only the
necessary dependencies are added.
Cheers!
Manoj.
On Tue, Oct 16, 2012 at 11:38 AM, sudha sadhasivam
sudhasadhasi...@yahoo.com wrote:
Hello
When we create a jar file for hadoop programs from command prompt it runs
Hi,
You have to specify the reducer output key type as NullWritable.
Cheers!
Manoj.
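A minimal sketch of such a reducer with the new mapreduce API (the type parameters are assumptions; match them to your job's map output types), plus the matching output-key setting in the driver:

import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MergeReducer extends Reducer<Text, Text, NullWritable, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            // Emit only the value; NullWritable suppresses the key in the output file.
            context.write(NullWritable.get(), value);
        }
    }
}

// In the driver:
// job.setOutputKeyClass(NullWritable.class);
// job.setOutputValueClass(Text.class);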
On Wed, Sep 12, 2012 at 7:43 AM, Nataraj Rashmi - rnatar
rashmi.nata...@acxiom.com wrote:
Hello,
I have a simple map/reduce program to merge input files into one big output
file. My question is,
to authoritatively comment on 'the production
readiness of append()' . :)
Regards
Bejoy KS
On Mon, Sep 10, 2012 at 11:03 AM, Manoj Babu manoj...@gmail.com wrote:
Thank you Bejoy.
Is file append production stable?
Cheers!
Manoj.
On Sun, Sep 9, 2012 at 10:19 PM, Bejoy KS
Hi All,
I have two questions; providing info on them will be helpful.
1. I am using Hadoop to analyze and find top-n search term metrics from the
logs.
If any new log file is added to HDFS, then we run the job again to
find the metrics.
Daily we will be getting log files and we are parsing
etc. Every day do
the processing, get the results and aggregate the same with the previously
aggregated results till date.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
--
From: Manoj Babu manoj...@gmail.com
Date: Sun, 9 Sep 2012 21:28:54 +0530
,address=127.0.0.1:9987,suspend=y</value>
</property>
Now, add a Remote Java Application run configuration in Eclipse.
The Connection Type should be Standard (Socket Listen), and the port
should be 9987.
Happy debugging!
Yaron
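For reference, the complete property likely looks something like the following sketch. The -agentlib:jdwp prefix and the property name mapred.child.java.opts are assumptions reconstructed from the standard JDWP syntax and the usual task-JVM option key, since the start of the snippet is cut off above; server=n makes each child JVM connect out to the listening Eclipse debugger, and suspend=y makes it wait until the debugger is attached.

<property>
  <name>mapred.child.java.opts</name>
  <value>-agentlib:jdwp=transport=dt_socket,server=n,address=127.0.0.1:9987,suspend=y</value>
</property>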
On Sat, Aug 25, 2012 at 10:48 AM, Manoj Babu dmanojb...@gmail.com wrote
No problem Harsh, I didn't find it and so I asked.
On 26 Aug 2012 17:39, Harsh J ha...@cloudera.com wrote:
Hi All,
How do I debug MapReduce programs in pseudo-distributed mode?
Thanks in advance.
Hi All,
In Sqoop:
When exporting from HDFS to DB, If an export map task fails due to these or
other reasons, it will cause the export job to fail. The results of a
failed export are undefined. Each export map task operates in a separate
transaction. Furthermore, individual map tasks commit their
.
On Mon, Aug 13, 2012 at 4:10 PM, Harsh J ha...@cloudera.com wrote:
Hi Manoj,
Reply inline.
On Mon, Aug 13, 2012 at 3:42 PM, Manoj Babu manoj...@gmail.com wrote:
Hi All,
Normal Hadoop job submission process involves:
Checking the input and output specifications of the job
Hi,
It would be great if any of you could compare Pig and Hadoop MapReduce. When
should we go for Hadoop or Pig?
I love to program using Java, but people were arguing that the same can be
easily achieved in Pig with very few lines of code, even my boss too...
I am a fresh developer for Hadoop. Could kindly
.
Hope it is fine.
Thank you!
With Regards,
Abhishek S
On Tue, Jul 31, 2012 at 10:37 PM, Manoj Babu manoj...@gmail.com wrote:
Hi,
It would be great if any of you could compare Pig and Hadoop MapReduce. When
should we go for Hadoop or Pig?
I love to program using Java, but people were arguing
Gentles,
I want to use the CombineFileInputFormat of Hadoop 0.20.0 / 0.20.2 such
that it processes 1 file per record and also doesn't compromise on data
locality (which it normally takes care of).
It is mentioned in Tom White's Hadoop: The Definitive Guide, but he has not
shown how to do it.
Yes Harsh, it was posted 2 months back with no response; if you google for
CombineFileInputFormat you will surely see it.
The MapReduce user group rocks!
Thanks for the response.
On 12 Jul 2012 21:25, Harsh J ha...@cloudera.com wrote:
at 6:06 PM, Arun C Murthy a...@hortonworks.com wrote:
Take a look at CombineFileInputFormat - this will create 'meta splits'
which include multiple small splits, thus reducing #maps which are run.
Arun
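A brief sketch of that approach with the newer mapreduce API; CombineTextInputFormat is the concrete text subclass available in later releases (on 0.20.x you would subclass CombineFileInputFormat and supply a RecordReader yourself), the input path is a placeholder, and the 128 MB cap is just an illustrative value:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "combine-small-files");
job.setInputFormatClass(CombineTextInputFormat.class);
FileInputFormat.addInputPath(job, new Path("/data/small-files"));   // placeholder path
// Pack many small files into each split, capped at ~128 MB per combined split.
CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);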
On Jul 11, 2012, at 5:29 AM, Manoj Babu wrote:
Hi,
The number of mappers depends
, please excuse typos.
--
From: Manoj Babu manoj...@gmail.com
Date: Wed, 11 Jul 2012 18:17:41 +0530
To: mapreduce-user@hadoop.apache.org
Reply-To: mapreduce-user@hadoop.apache.org
Subject: Re: Mapper basic question
Hi Tariq / Arun,
The no of blocks(splits
one to
preserve data, or if you wish to start again from scratch, you'll need
to run a format of the NameNode and scrub your DataNode directories
clean.
On Tue, Jul 10, 2012 at 11:24 AM, Manoj Babu manoj...@gmail.com wrote:
Hi,
It would be great if you could provide answer for the below
, 2012 at 10:57 AM, Manoj Babu manoj...@gmail.com wrote:
Hi Bobby,
I have faced a similar issue. In the job the block size is 64MB, the
number of maps created is 656, the number of files uploaded to HDFS is 656,
and each file's size is 11MB. I assume that if small files exist it will
not be able
Hi,
It would be more helpful if you could give more details on the below doubts.
1. How does the partitioner know which reducer needs to be called?
2. When we use more than one reducer, the output gets separated.
For what scenario do we actually have to go for multiple reducers?
Cheers!
Manoj.
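On (1), the default HashPartitioner picks the reducer purely from the key's hash, roughly as in the sketch below (a paraphrase of the stock behaviour, not code from this thread):

import org.apache.hadoop.mapreduce.Partitioner;

public class HashLikePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the modulo result is never negative;
        // every record with the same key lands on the same reducer.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}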
Hi Bobby,
I have faced a similar issue. In the job the block size is 64MB, the number
of maps created is 656, the number of files uploaded to HDFS is 656, and
each file's size is 11MB. I assume that if small files exist it will not be
able to group.
Could you kindly clarify it?
Cheers!
Manoj.
On
already explained
earlier.
For (2) - For what scenario do you _not_ want multiple reducers
handling each partition uniquely, when it is possible to scale that
way?
On Mon, Jul 9, 2012 at 11:22 PM, Manoj Babu manoj...@gmail.com wrote:
Hi,
It would be more helpful, If you could more