Amber wrote:
We have a 10-million-row table exported from an AS400 mainframe every day. The
table is exported as a CSV text file, about 30 GB in size, and the CSV file is
then imported into an RDBMS table that is dropped and recreated every day. Now
we want to find how many rows are updated during each export, and how this will
scale as the data volume increases if it is kept in memory.
--
From: James Moore [EMAIL PROTECTED]
Sent: Thursday, July 24, 2008 1:12 AM
To: core-user@hadoop.apache.org
Subject: Re: Using MapReduce to do table comparing.
On Wed, Jul 23, 2008 at 7:33 AM, Amber wrote:
We have a 10-million-row table exported from an AS400 mainframe every day. The
table is exported as a CSV text file, about 30 GB in size, and the CSV file is
then imported into an RDBMS table that is dropped and recreated every day. Now
we want to find how many rows are updated during each daily export.
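One way to sketch the comparison being asked about (a local sketch, not the thread's actual solution; it assumes each row carries a unique primary key in its first column, and the file contents and helper names here are illustrative): group yesterday's and today's rows by key — the same grouping a MapReduce shuffle would perform — and count inserted, updated, and deleted rows.

```python
import csv
from io import StringIO

def diff_snapshots(old_rows, new_rows, key_idx=0):
    """Compare two table snapshots keyed by a primary-key column.

    Returns (inserted, updated, deleted) counts. This mirrors what a
    MapReduce job would do: map each row to (key, row), then reduce the
    rows sharing a key to decide whether the row is new, changed, or gone.
    """
    old = {r[key_idx]: r for r in old_rows}
    new = {r[key_idx]: r for r in new_rows}
    inserted = sum(1 for k in new if k not in old)
    deleted = sum(1 for k in old if k not in new)
    updated = sum(1 for k, r in new.items() if k in old and old[k] != r)
    return inserted, updated, deleted

# Tiny made-up snapshots; the real input would be the 30 GB daily CSV,
# which is why the thread considers MapReduce rather than one in-memory dict.
yesterday = list(csv.reader(StringIO("1,alice,10\n2,bob,20\n3,carol,30\n")))
today     = list(csv.reader(StringIO("1,alice,10\n2,bob,25\n4,dave,40\n")))

print(diff_snapshots(yesterday, today))  # → (1, 1, 1)
```

At 10 million rows the two dicts no longer fit comfortably in memory on one machine, which is exactly the scaling concern raised above; partitioning the key space across reducers removes that limit.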
The Hadoop documentation says Sun's JDK must be used; this message is posted to
confirm whether there is an official statement about this.
Hi, I followed the instructions from
http://hayesdavis.net/2008/06/14/running-hadoop-on-windows/ to install Hadoop
0.17.1 on my Windows XP computer, whose computer name is AMBER and whose
current user name is User. I installed Cygwin on G:\. I have verified that ssh
and bin/hadoop version work fine.