Unsubscribe

2015-11-06 Thread Kiran Dangeti
On Nov 5, 2015 4:56 PM, "Arvind Thakur" wrote: Unsubscribe

Re: hadoop 2.4.0 streaming generic parser options using TAB as separator

2015-06-10 Thread Kiran Dangeti
On Jun 10, 2015 10:58 AM, anvesh ragi annunarc...@gmail.com wrote: Hello all, I know that the tab is the default input separator for fields: stream.map.output.field.separator, stream.reduce.input.field.separator, stream.reduce.output.field.separator, mapreduce.textoutputformat.separator
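For reference, these separators can be passed as generic parser options, which must come before the streaming-specific options. A minimal sketch, assuming a Hadoop 2.4.0 install with the streaming jar under share/hadoop/tools/lib and placeholder input/output paths; the default separator is already TAB, so the -D flags are shown explicitly only for illustration:

$ hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -D stream.map.output.field.separator=$'\t' \
    -D stream.reduce.input.field.separator=$'\t' \
    -D stream.reduce.output.field.separator=$'\t' \
    -input /user/hadoop/input \
    -output /user/hadoop/output \
    -mapper /bin/cat \
    -reducer /bin/cat

Note that $'\t' is a bash-ism that expands to a literal tab character; in other shells the tab would need to be typed or quoted directly.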

Re: Unable to start Hive

2015-05-15 Thread Kiran Dangeti
Anand, Sometimes it errors out because some resources are not available. Stop and restart the Hadoop cluster and see. On May 15, 2015 12:24 PM, Anand Murali anand_vi...@yahoo.com wrote: Dear All: I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after
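A minimal restart sequence for a Hadoop 2.6 pseudo-distributed setup, assuming the sbin scripts are on PATH:

$ stop-yarn.sh
$ stop-dfs.sh
$ start-dfs.sh
$ start-yarn.sh
$ jps    # NameNode, DataNode, ResourceManager, and NodeManager should all be listed
$ hive   # then retry the Hive connection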

Re: How to debug why example not finishing (or even starting)

2015-01-28 Thread Kiran Dangeti
Frank, Did you enable debug mode? On Jan 28, 2015 7:10 PM, Frank Lanitz frank.lan...@sql-ag.de wrote: Hi, I've got a simple 3-node setup where I wanted to test the grep function based on some examples. So $ hadoop fs -put /home/hadoop/hadoop/etc/hadoop hadoop-config $ hadoop jar
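One way to turn on debug output and inspect a stuck job; a sketch that assumes the standard examples jar path and that YARN log aggregation is enabled, with <application-id> taken from the job submission output:

$ export HADOOP_ROOT_LOGGER=DEBUG,console   # verbose client-side logging
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    grep hadoop-config output 'dfs[a-z.]+'
$ yarn logs -applicationId <application-id>  # fetch container logs after the run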

Re: not able to run map reduce job example on aws machine

2014-04-10 Thread Kiran Dangeti
Rahul, Please check the port specified in mapred-site.xml. Thanks Kiran On Thu, Apr 10, 2014 at 3:23 PM, Rahul Singh smart.rahul.i...@gmail.comwrote: Hi, I am getting following exception while running word count example, 14/04/10 15:17:09 INFO mapreduce.Job: Task Id :
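A quick way to verify which address and port the job client is actually using; a sketch assuming a Hadoop 2.x YARN setup, with <master-host> as a placeholder for the AWS instance's hostname:

$ grep -A1 'address' $HADOOP_HOME/etc/hadoop/mapred-site.xml
$ grep -A1 'resourcemanager' $HADOOP_HOME/etc/hadoop/yarn-site.xml
$ nc -zv <master-host> 8032   # 8032 is the default yarn.resourcemanager.address port

On AWS, also confirm the security group allows inbound traffic on that port between the cluster nodes.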

Re: Hadoop property precedence

2013-07-13 Thread Kiran Dangeti
Shalish, The default block size is 64 MB, and the value set at the client end takes effect. Make sure the same value is set in your conf as well. You can increase each block to 128 MB or greater; processing will be faster, but there may be a chance of losing data at the end. Thanks,
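Because the client-side value takes precedence, the block size can also be overridden per upload; a sketch assuming Hadoop 2.x (on 1.x the property is dfs.block.size) and a hypothetical file path:

# 134217728 bytes = 128 MB; the -D set on the client wins over the server default
$ hadoop fs -D dfs.blocksize=134217728 -put /local/largefile.dat /data/
$ hdfs fsck /data/largefile.dat -files -blocks | head   # confirm the block size used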

Re: Issues Running Hadoop 1.1.2 on multi-node cluster

2013-07-09 Thread Kiran Dangeti
Hi Siddharth, When running multi-node, we need to take care of the localhost/hostname configuration on the slave machines; from the error messages, the TaskTracker root directory is not able to reach the master. Please check and rerun it. Thanks, Kiran On Tue, Jul 9, 2013 at 10:26 PM, siddharth mathur
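A few checks on name resolution and the slave daemons; a sketch for a Hadoop 1.1.2 cluster with hypothetical hostnames master and slave1:

$ cat /etc/hosts                   # master and slaves must resolve the same hostnames
$ cat $HADOOP_HOME/conf/slaves     # one slave hostname per line
$ ssh slave1 jps                   # DataNode and TaskTracker should be running on each slave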