[ https://issues.apache.org/jira/browse/HADOOP-1254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488452 ]

Doug Cutting commented on HADOOP-1254:
--------------------------------------

For posterity, since Hudson builds are only kept for 30 days, the stack trace 
is:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: Failed to create 
file /user/hudson/checkpointxx.dat on client 127.0.0.1 because there were not 
enough datanodes available. Found 0 datanodes but MIN_REPLICATION for the 
cluster is configured to be 1.
        at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:813)
        at org.apache.hadoop.dfs.NameNode.create(NameNode.java:294)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:339)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:573)

        at org.apache.hadoop.ipc.Client.call(Client.java:471)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:163)
        at org.apache.hadoop.dfs.$Proxy0.create(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateNewBlock(DFSClient.java:1141)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1079)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1305)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1258)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1240)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:395)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
        at org.apache.hadoop.dfs.TestCheckpoint.writeFile(TestCheckpoint.java:47)
        at org.apache.hadoop.dfs.TestCheckpoint.testSecondaryNamenodeError1(TestCheckpoint.java:153)
        at org.apache.hadoop.dfs.TestCheckpoint.testCheckpoint(TestCheckpoint.java:323)
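
The failure reads like a startup race: the test begins writing before the mini-cluster's lone datanode has registered with the namenode, so create() is rejected because 0 datanodes are known while MIN_REPLICATION is 1. Below is a minimal sketch of one way a test could tolerate that window by retrying the write; the RetryWrite class and writeWhenReady helper are hypothetical names for illustration, not the actual TestCheckpoint code or fix, and only the stock FileSystem/Path API is assumed.

import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper (not part of TestCheckpoint): retry a small write until
// the namenode has at least one registered datanode, i.e. until create() stops
// failing with the "not enough datanodes" error seen in the stack trace above.
class RetryWrite {
  static void writeWhenReady(FileSystem fs, Path path, byte[] data)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt < 30; attempt++) {   // give up after ~30s
      try {
        OutputStream out = fs.create(path);            // fails while 0 datanodes are up
        out.write(data);
        out.close();
        return;                                        // write succeeded
      } catch (IOException e) {
        last = e;                                      // likely the transient startup error
        Thread.sleep(1000);                            // wait for datanode registration
      }
    }
    throw last;                                        // still failing: real problem
  }
}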

And the probable causes are:

   1. HADOOP-1001.  Check the type of keys and values generated by the mapper
against the types specified in JobConf.  Contributed by Tahir Hashmi.
   2. HADOOP-971.  Improve DFS Scalability: Improve name node performance by
adding a hostname to datanodes map.  Contributed by Hairong Kuang.
   3. HADOOP-1189.  Fix 'No space left on device' exceptions on datanodes.
Contributed by Raghu Angadi.
   4. HADOOP-819.  Change LineRecordWriter to not insert a tab between key and
value when either is null, and to print nothing when both are null.
Contributed by Runping Qi.
   5. HADOOP-1204.  Rename InputFormatBase to be FileInputFormat.
   6. HADOOP-1213.  Improve logging of errors by IPC server.
   7. HADOOP-1114.  Permit user to specify additional CLASSPATH elements with a
HADOOP_CLASSPATH environment variable.
   8. HADOOP-1238.  Fix metrics reporting by TaskTracker to correctly track
maps_running and reduces_running.  Contributed by Michael Bieniosek.


> TestCheckpoint fails intermittently
> -----------------------------------
>
>                 Key: HADOOP-1254
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1254
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Doug Cutting
>
> TestCheckpoint started intermittently failing last night:
> http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/55/testReport/org.apache.hadoop.dfs/TestCheckpoint/testCheckpoint/
> This is probably caused by one of the changes introduced yesterday:
> http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/55/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
