Raghu et al:
I reproduced all my experiments, this time on an EC2 node, and they all
ran successfully without incident. So I suspect a machine or hardware
configuration issue.
I am going to try a more controlled series of experiments this weekend on a
machine that I
You might try backing out the HADOOP-1708 patch. It changed the test
guarding the log message you report below.
St.Ack
C G wrote:
Further experimentation, again single node configuration on a 4way 8G machine
w/0.14.0, trying to copyFromLocal 669M of data in 5,000,000 rows I see this in
the
Michael Stack wrote:
You might try backing out the HADOOP-1708 patch. It changed the test
guarding the log message you report below.
HADOOP-1708 isn't in 0.14.0.
Doug
My mistake...
St.Ack
Doug Cutting wrote:
Michael Stack wrote:
You might try backing out the HADOOP-1708 patch. It changed the test
guarding the log message you report below.
HADOOP-1708 isn't in 0.14.0.
Doug
C G,
Any specifics on how you reproduce any of these issues will be helpful.
I was able to copy a 5GB file without errors. copyFromLocal just copies
raw file content. Not sure what '5,000,000 rows' means.
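(For reference, a copy like the one described would have been invoked roughly as below with the 0.14-era DFS shell; the local and DFS paths are placeholders, not the ones C G used.)

```shell
# Sketch: copy a local file into DFS, then list it to confirm.
# /local/data.txt and /user/cg/data.txt are hypothetical paths.
bin/hadoop dfs -copyFromLocal /local/data.txt /user/cg/data.txt
bin/hadoop dfs -ls /user/cg
```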
Raghu.
C G wrote:
Further experimentation, again single node configuration on a
Hi All:
I tried 0.14.0 today with limited success. 0.13.0 was doing pretty well, but
I'm not able to get as far with 0.14.0.
My environment is single-node, 4way box, 8G memory, 500G disk space.
First up is an out-of-memory error. The dataset is 1,000,000 rows (but only
60M in
Can you try to increase the Java heap for task JVMs? The
mapred.child.java.opts property in conf/hadoop-site.xml defaults to
-Xmx200m.
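(A sketch of the override described above, as it would appear in conf/hadoop-site.xml; 512m is the value C G later reports as working, not a recommended default.)

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```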
Regarding the second problem:
It is surprising that this fails repeatedly around the same place. 0.14
does check the checksum at the datanode (0.13 did not do this check). I
will try to reproduce this.
Raghu.
C G wrote:
Hi All:
Second issue is a failure on copyFromLocal with lost
Further experimentation, again a single-node configuration on a 4way 8G machine
w/0.14.0: trying to copyFromLocal 669M of data in 5,000,000 rows, I see this in
the namenode log:
2007-08-24 00:50:45,902 WARN org.apache.hadoop.dfs.StateChange: DIR*
NameSystem.completeFile: failed to complete
Thanks Christophe, I kicked these values up to 512m and the case which
previously failed runs to completion with verifiable results. Good stuff...
Christophe Taton [EMAIL PROTECTED] wrote:
Can you try to increase the java heap for tasks JVMs? The
mapred.child.java.opts property in