In Java, for precision, you need to use BigDecimal.
I believe there must be a BigDecimalWritable in Hadoop; if not, writing one is straightforward.
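For what it's worth, here is a minimal sketch of a custom WritableComparable wrapping BigDecimal, in case your Hadoop version does not ship one (the class name and layout are my own, not a confirmed Hadoop API):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;
import org.apache.hadoop.io.WritableComparable;

// Sketch of a Writable wrapping BigDecimal; serializes scale plus
// the unscaled bytes so no precision is lost on the wire.
public class BigDecimalWritable implements WritableComparable<BigDecimalWritable> {

    private BigDecimal value = BigDecimal.ZERO;

    public BigDecimalWritable() {}

    public BigDecimalWritable(BigDecimal value) { this.value = value; }

    public BigDecimal get() { return value; }

    public void set(BigDecimal value) { this.value = value; }

    public void write(DataOutput out) throws IOException {
        byte[] unscaled = value.unscaledValue().toByteArray();
        out.writeInt(value.scale());
        out.writeInt(unscaled.length);
        out.write(unscaled);
    }

    public void readFields(DataInput in) throws IOException {
        int scale = in.readInt();
        byte[] unscaled = new byte[in.readInt()];
        in.readFully(unscaled);
        value = new BigDecimal(new BigInteger(unscaled), scale);
    }

    public int compareTo(BigDecimalWritable other) {
        return value.compareTo(other.value);
    }
}

Since it is comparable, you can use it as a map output key as well as a value.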
regards
On Wed, Feb 26, 2014 at 2:58 PM, Marco Shaw marco.s...@gmail.com wrote:
(If code is required, I can send it along later.)
I'm a beginner and I'm having issues with MR when
remaining capacity, then you get this exception.
On Tue, Feb 25, 2014 at 10:35 AM, Manoj Khangaonkar khangaon...@gmail.com wrote:
Hi
Can one of the implementors comment on what conditions trigger this error?
All the data nodes show up as commissioned. No errors during startup.
If I
Hi,
I set up a cluster with:
machine 1: namenode and datanode
machine 2: datanode
A simple HDFS copy is not working. Can someone help with this issue?
Several folks have posted this error on the web, but I have not seen a good
reason or solution.
command:
bin/hadoop fs -copyFromLocal ~/hello
and restarting, none of which helped.
My guess is that this is a networking/port access issue. If anyone can
shed light on what conditions cause this error, it would be much
appreciated.
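One quick way to rule the port theory in or out is a plain socket connect from the client machine. A hypothetical sketch; the host name and the port are assumptions (50010 is the usual default for dfs.datanode.address), so adjust both to your config:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Checks whether a datanode's data transfer port is reachable
// from this machine. Host and port are assumptions: "machine2"
// and 50010 (default dfs.datanode.address); adjust to your cluster.
public class PortCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "machine2";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 50010;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("reachable: " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("NOT reachable: " + host + ":" + port + " (" + e + ")");
        }
    }
}

If the datanode port is not reachable from the client, writes will fail even though the namenode accepts the request.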
regards
On Mon, Feb 24, 2014 at 1:07 PM, Manoj Khangaonkar khangaon...@gmail.com wrote:
Hi,
I set up a cluster
One usage of these is in a secondary sort, which is used when you
want the map output values sorted within a key.
You implement a key comparator and tell MapReduce to use it to order
the composite keys.
To ensure that during partitioning/grouping, all the records for a
key end up at the same reducer, you also use a partitioner and a grouping
comparator that look only at the natural part of the key.
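To make that concrete, here is a minimal sketch of the three pieces, assuming a composite key made of a String natural key plus a long to be sorted within it (all class names are my own; each class goes in its own source file):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Partitioner;

// Composite key: the natural key plus the field we want sorted within it.
public class CompositeKey implements WritableComparable<CompositeKey> {
    private String naturalKey = "";
    private long secondary;

    public void set(String naturalKey, long secondary) {
        this.naturalKey = naturalKey;
        this.secondary = secondary;
    }

    public String getNaturalKey() { return naturalKey; }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(naturalKey);
        out.writeLong(secondary);
    }

    public void readFields(DataInput in) throws IOException {
        naturalKey = in.readUTF();
        secondary = in.readLong();
    }

    // Full sort order: natural key first, then the secondary field,
    // so the framework hands values to the reducer already sorted.
    public int compareTo(CompositeKey o) {
        int cmp = naturalKey.compareTo(o.naturalKey);
        return cmp != 0 ? cmp : Long.compare(secondary, o.secondary);
    }
}

// Partition on the natural key only, so all records for a key
// reach the same reducer regardless of the secondary field.
public class NaturalKeyPartitioner extends Partitioner<CompositeKey, Text> {
    public int getPartition(CompositeKey key, Text value, int numPartitions) {
        return (key.getNaturalKey().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

// Group on the natural key only, so a single reduce() call sees
// every value for that key, in secondary-field order.
public class NaturalKeyGroupingComparator extends WritableComparator {
    public NaturalKeyGroupingComparator() {
        super(CompositeKey.class, true);
    }

    public int compare(WritableComparable a, WritableComparable b) {
        return ((CompositeKey) a).getNaturalKey()
                .compareTo(((CompositeKey) b).getNaturalKey());
    }
}

Wire them in with job.setPartitionerClass(NaturalKeyPartitioner.class) and job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class); since compareTo already gives the full ordering, no separate sort comparator is needed.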
Hi,
I think you might need to extend FileInputFormat (or one of its
derived classes) as well as implement a RecordReader.
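A minimal skeleton of that pairing, assuming the new (org.apache.hadoop.mapreduce) API and LongWritable/Text records; the class names and the one-record-per-file reading are my own placeholder choices:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// The input format's main job here is to hand back your reader.
public class CustomFileInputFormat extends FileInputFormat<LongWritable, Text> {

    protected boolean isSplitable(JobContext context, Path file) {
        // Read each file whole; return true only if your records
        // can safely span split boundaries.
        return false;
    }

    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new CustomRecordReader();
    }
}

// The reader turns the raw bytes of one split into key/value records.
class CustomRecordReader extends RecordReader<LongWritable, Text> {
    private final LongWritable key = new LongWritable();
    private final Text value = new Text();
    private FileSplit fileSplit;
    private TaskAttemptContext context;
    private boolean processed = false;

    public void initialize(InputSplit split, TaskAttemptContext context) {
        this.fileSplit = (FileSplit) split;
        this.context = context;
    }

    public boolean nextKeyValue() throws IOException {
        if (processed) return false;
        // Real parsing goes here: open fileSplit.getPath() via
        // FileSystem.get(context.getConfiguration()) and fill key/value.
        processed = true;
        return true;
    }

    public LongWritable getCurrentKey() { return key; }
    public Text getCurrentValue() { return value; }
    public float getProgress() { return processed ? 1.0f : 0.0f; }
    public void close() {}
}

Then job.setInputFormatClass(CustomFileInputFormat.class) wires it in.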
regards
On Mon, Aug 6, 2012 at 8:30 AM, Mohammad Tariq donta...@gmail.com wrote:
Hello list,
I need some guidance on how to handle files where we don't have
any
On Mon, Apr 18, 2011 at 11:34 AM, modemide modem...@gmail.com wrote:
I'm getting many "cannot find symbol" errors. I've been searching
everywhere and have given up. There has to be a good (and very
simple) reason why this is happening.
My setup is as follows:
hadoop installed to -