On Nov 5, 2015 4:56 PM, "Arvind Thakur" wrote:
> Unsubscribe
>
On Jun 10, 2015 10:58 AM, anvesh ragi annunarc...@gmail.com wrote:
Hello all,
I know that the tab is the default input separator for fields:
stream.map.output.field.separator
stream.reduce.input.field.separator
stream.reduce.output.field.separator
mapreduce.output.textoutputformat.separator
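The tab-separated key/value handling these properties control can be simulated locally without a cluster. A minimal sketch, using `sort` to stand in for the shuffle (the awk mapper and reducer here are illustrations, not part of this thread):

```shell
# Mapper emits key<TAB>1; sort plays the role of the shuffle;
# the reducer sums counts per key. Tab is the field separator
# throughout, mirroring the streaming defaults listed above.
printf 'b\nA\nb\na\n' \
  | awk '{ print tolower($0) "\t" 1 }' \
  | sort \
  | awk -F'\t' '{ count[$1] += $2 } END { for (k in count) print k "\t" count[k] }' \
  | sort
```

The same mapper and reducer scripts could be handed to Hadoop Streaming unchanged, which is exactly why the separator defaults matter.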
Anand,
Sometimes it errors out because some resources are not available. Stop and
restart the Hadoop cluster and see if that helps.
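A restart of a pseudo-distributed 2.x cluster is usually done with the sbin scripts; a sketch, assuming $HADOOP_HOME/sbin is on the PATH (adjust for your install):

```shell
stop-yarn.sh
stop-dfs.sh
start-dfs.sh
start-yarn.sh
jps   # check that NameNode, DataNode, ResourceManager and NodeManager are up
```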
On May 15, 2015 12:24 PM, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
connect Hive to it after
Frank,
Did you enable debug mode?
On Jan 28, 2015 7:10 PM, Frank Lanitz frank.lan...@sql-ag.de wrote:
Hi,
I've got a simple 3-node-setup where I wanted to test the grep function
based on some examples. So
$ hadoop fs -put /home/hadoop/hadoop/etc/hadoop hadoop-config
$ hadoop jar
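For reference, the stock grep example shipped with a 2.x tarball is typically invoked like this (the jar path and the output directory name are assumptions, not taken from Frank's mail):

```shell
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    grep hadoop-config grep-output 'dfs[a-z.]+'
hadoop fs -cat grep-output/part-r-00000
```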
Rahul,
Please check the port given in mapred-site.xml
Thanks
Kiran
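For a Hadoop 1.x-style setup, the port Kiran refers to would be the JobTracker address in mapred-site.xml; a hypothetical fragment (host and port are examples only, and must match what the JobTracker actually listens on):

```xml
<!-- mapred-site.xml: example values only -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```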
On Thu, Apr 10, 2014 at 3:23 PM, Rahul Singh smart.rahul.i...@gmail.com wrote:
Hi,
I am getting following exception while running word count example,
14/04/10 15:17:09 INFO mapreduce.Job: Task Id :
Shalish,
The default block size is 64 MB, which is fine at the client end. Make sure
the same value is set in your conf as well. You can increase the block size
to 128 MB or more; the only difference you will see is that processing gets
faster, but in the end there may be a chance of losing data.
Thanks,
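The corresponding setting would look like this; a sketch assuming the Hadoop 2.x property name (in 1.x it was dfs.block.size):

```xml
<!-- hdfs-site.xml: 134217728 bytes = 128 MB; 2.x also accepts
     suffixed values such as 128m -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
```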
Hi Siddharth,
While running in multi-node mode, we need to take care of the localhost
entries on the slave machines; from the error messages, the TaskTracker root
directory is not reachable from the master. Please check and rerun it.
Thanks,
Kiran
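One common cause of master and slaves not resolving each other is the hosts file; a hypothetical layout, identical on every node (IPs and hostnames are placeholders):

```
# /etc/hosts -- avoid mapping a node's own hostname to 127.0.0.1,
# or daemons may bind to loopback and be unreachable from other nodes
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
```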
On Tue, Jul 9, 2013 at 10:26 PM, siddharth mathur