See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/545/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 472645 lines...]
    [junit] 2011-01-07 12:05:09,339 INFO  mortbay.log (?:invoke0(?)) - Started 
SelectChannelConnector@localhost:50216
    [junit] 2011-01-07 12:05:09,339 INFO  namenode.NameNode 
(NameNode.java:run(523)) - NameNode Web-server up at: localhost/127.0.0.1:50216
    [junit] 2011-01-07 12:05:09,340 INFO  ipc.Server (Server.java:run(608)) - 
IPC Server Responder: starting
    [junit] 2011-01-07 12:05:09,340 INFO  ipc.Server (Server.java:run(443)) - 
IPC Server listener on 53551: starting
    [junit] 2011-01-07 12:05:09,341 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 0 on 53551: starting
    [junit] 2011-01-07 12:05:09,341 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 2 on 53551: starting
    [junit] 2011-01-07 12:05:09,341 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 1 on 53551: starting
    [junit] 2011-01-07 12:05:09,342 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 3 on 53551: starting
    [junit] 2011-01-07 12:05:09,342 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 4 on 53551: starting
    [junit] 2011-01-07 12:05:09,342 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 5 on 53551: starting
    [junit] 2011-01-07 12:05:09,343 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 6 on 53551: starting
    [junit] 2011-01-07 12:05:09,343 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 7 on 53551: starting
    [junit] 2011-01-07 12:05:09,343 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 8 on 53551: starting
    [junit] 2011-01-07 12:05:09,343 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 9 on 53551: starting
    [junit] 2011-01-07 12:05:09,344 INFO  namenode.NameNode 
(NameNode.java:initialize(390)) - NameNode up at: localhost/127.0.0.1:53551
    [junit] Starting DataNode 0 with dfs.datanode.data.dir: 
file:/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/,file:/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2/
    [junit] 2011-01-07 12:05:09,509 INFO  datanode.DataNode 
(DataNode.java:initDataXceiver(472)) - Opened info server at 33701
    [junit] 2011-01-07 12:05:09,513 INFO  datanode.DataNode 
(DataXceiverServer.java:<init>(77)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2011-01-07 12:05:09,520 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(127)) - Storage directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1
 is not formatted.
    [junit] 2011-01-07 12:05:09,520 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
    [junit] 2011-01-07 12:05:09,523 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(127)) - Storage directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2
 is not formatted.
    [junit] 2011-01-07 12:05:09,524 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
    [junit] 2011-01-07 12:05:09,576 INFO  datanode.DataNode 
(FSDataset.java:registerMBean(1772)) - Registered FSDatasetStatusMBean
    [junit] 2011-01-07 12:05:09,584 INFO  datanode.DirectoryScanner 
(DirectoryScanner.java:<init>(149)) - scan starts at 1294422877584 with 
interval 21600000
    [junit] 2011-01-07 12:05:09,586 INFO  http.HttpServer 
(HttpServer.java:addGlobalFilter(409)) - Added global filtersafety 
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    [junit] 2011-01-07 12:05:09,589 INFO  http.HttpServer 
(HttpServer.java:start(579)) - Port returned by 
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the 
listener on 0
    [junit] 2011-01-07 12:05:09,590 INFO  http.HttpServer 
(HttpServer.java:start(584)) - listener.getLocalPort() returned 44426 
webServer.getConnectors()[0].getLocalPort() returned 44426
    [junit] 2011-01-07 12:05:09,590 INFO  http.HttpServer 
(HttpServer.java:start(617)) - Jetty bound to port 44426
    [junit] 2011-01-07 12:05:09,590 INFO  mortbay.log (?:invoke0(?)) - 
jetty-6.1.14
    [junit] 2011-01-07 12:05:09,738 INFO  mortbay.log (?:invoke0(?)) - Started 
SelectChannelConnector@localhost:44426
    [junit] 2011-01-07 12:05:09,740 INFO  jvm.JvmMetrics 
(JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with 
processName=DataNode, sessionId=null - already initialized
    [junit] 2011-01-07 12:05:09,744 INFO  ipc.Server (Server.java:run(338)) - 
Starting SocketReader
    [junit] 2011-01-07 12:05:09,744 INFO  metrics.RpcMetrics 
(RpcMetrics.java:<init>(63)) - Initializing RPC Metrics with hostName=DataNode, 
port=58103
    [junit] 2011-01-07 12:05:09,745 INFO  metrics.RpcDetailedMetrics 
(RpcDetailedMetrics.java:<init>(57)) - Initializing RPC Metrics with 
hostName=DataNode, port=58103
    [junit] 2011-01-07 12:05:09,753 INFO  datanode.DataNode 
(DataNode.java:initIpcServer(432)) - dnRegistration = 
DatanodeRegistration(h9.grid.sp2.yahoo.net:33701, storageID=, infoPort=44426, 
ipcPort=58103)
    [junit] 2011-01-07 12:05:09,759 INFO  hdfs.StateChange 
(FSNamesystem.java:registerDatanode(2514)) - BLOCK* 
NameSystem.registerDatanode: node registration from 127.0.0.1:33701 storage 
DS-2082047178-127.0.1.1-33701-1294401909757
    [junit] 2011-01-07 12:05:09,765 INFO  net.NetworkTopology 
(NetworkTopology.java:add(331)) - Adding a new node: 
/default-rack/127.0.0.1:33701
    [junit] 2011-01-07 12:05:09,769 INFO  datanode.DataNode 
(DataNode.java:register(714)) - New storage id 
DS-2082047178-127.0.1.1-33701-1294401909757 is assigned to data-node 
127.0.0.1:33701
    [junit] 2011-01-07 12:05:09,770 INFO  datanode.DataNode 
(DataNode.java:run(1438)) - DatanodeRegistration(127.0.0.1:33701, 
storageID=DS-2082047178-127.0.1.1-33701-1294401909757, infoPort=44426, 
ipcPort=58103)In DataNode.run, data = 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-01-07 12:05:09,771 INFO  ipc.Server (Server.java:run(608)) - 
IPC Server Responder: starting
    [junit] 2011-01-07 12:05:09,771 INFO  ipc.Server (Server.java:run(443)) - 
IPC Server listener on 58103: starting
    [junit] 2011-01-07 12:05:09,771 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 0 on 58103: starting
    [junit] 2011-01-07 12:05:09,772 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 1 on 58103: starting
    [junit] 2011-01-07 12:05:09,772 INFO  ipc.Server (Server.java:run(1369)) - 
IPC Server handler 2 on 58103: starting
    [junit] 2011-01-07 12:05:09,772 INFO  datanode.DataNode 
(DataNode.java:offerService(904)) - using BLOCKREPORT_INTERVAL of 21600000msec 
Initial delay: 0msec
    [junit] 2011-01-07 12:05:09,784 INFO  datanode.DataNode 
(DataNode.java:blockReport(1143)) - BlockReport of 0 blocks got processed in 8 
msecs
    [junit] 2011-01-07 12:05:09,784 INFO  datanode.DataNode 
(DataNode.java:offerService(946)) - Starting Periodic block scanner.
    [junit] 2011-01-07 12:05:09,857 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(148)) - ugi=hudson        ip=/127.0.0.1   
cmd=create      src=/testWriteConf.xml  dst=null        
perm=hudson:supergroup:rw-r--r--
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 60.062 sec
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot run program "du": java.io.IOException: error=24, Too many open files

Stack Trace:
java.io.IOException: Cannot run program "du": java.io.IOException: error=24, 
Too many open files
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
        at org.apache.hadoop.util.Shell.run(Shell.java:188)
        at org.apache.hadoop.fs.DU.<init>(DU.java:57)
        at org.apache.hadoop.fs.DU.<init>(DU.java:67)
        at 
org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.<init>(FSDataset.java:342)
        at 
org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:873)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initFsDataSet(DataNode.java:400)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:282)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:264)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1575)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
        at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:630)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
        at 
org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
        at 
org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open 
files
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
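
The "error=24" is EMFILE: the JVM has run out of file descriptors, so it can no
longer fork the du child process that the DataNode's per-volume DU helper runs
(the DU -> Shell -> ProcessBuilder chain in the trace above). A minimal
standalone sketch of that call path, not the Hadoop DU class itself, would be:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Standalone approximation of what org.apache.hadoop.fs.DU does:
    // spawn "du -sk <dir>" and read its output. ProcessBuilder.start() is
    // the call that fails with "error=24, Too many open files" once the
    // process has exhausted its file-descriptor limit.
    public class DuProbe {
        public static void main(String[] args) throws IOException, InterruptedException {
            String dir = args.length > 0 ? args[0] : ".";
            Process p = new ProcessBuilder("du", "-sk", dir)
                    .redirectErrorStream(true).start();
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            try {
                System.out.println(r.readLine());  // "<kbytes>\t<dir>"
            } finally {
                r.close();
            }
            p.waitFor();
        }
    }

The usual suspects for hitting the descriptor limit here are leaked streams or
sockets across the test's repeated MiniDFSCluster setups, or a low ulimit -n
for the Hudson user on the build slave.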


FAILED:  org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.waitAndQueueCurrentPacket(DFSOutputStream.java:1169)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1228)
        at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:161)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:104)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:90)
        at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:263)
        at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:106)
        at java.io.OutputStreamWriter.write(OutputStreamWriter.java:190)
        at 
com.sun.org.apache.xml.internal.serializer.ToStream.characters(ToStream.java:1499)
        at 
com.sun.org.apache.xml.internal.serializer.ToUnknownStream.characters(ToUnknownStream.java:789)
        at 
com.sun.org.apache.xml.internal.serializer.ToUnknownStream.characters(ToUnknownStream.java:323)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:240)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:132)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:94)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transformIdentity(TransformerImpl.java:662)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:708)
        at 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:313)
        at 
org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1608)
        at 
org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1559)
        at 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.__CLR3_0_28n7kbs1103(TestWriteConfigurationToDFS.java:46)
        at 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf(TestWriteConfigurationToDFS.java:33)
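
The test hangs inside Configuration.writeXml() while streaming the config as
XML into an HDFS output stream; the wait is in
DFSOutputStream.waitAndQueueCurrentPacket(), i.e. the writer is blocked waiting
for packet-queue space that never frees up before the 60 s timeout. A rough
reconstruction of the failing call path (inferred from the trace and the
"cmd=create src=/testWriteConf.xml" audit line above, not copied from the test
source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Sketch of the call path shown in the stack trace; the cluster size and
    // file name are assumptions, only the writeXml-over-HDFS part is taken
    // from the trace.
    public class WriteConfToDfsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
            try {
                FileSystem fs = cluster.getFileSystem();
                FSDataOutputStream os = fs.create(new Path("/testWriteConf.xml"));
                conf.writeXml(os);  // hangs here in the failing run, inside waitAndQueueCurrentPacket()
                os.close();
            } finally {
                cluster.shutdown();
            }
        }
    }

Given the descriptor exhaustion in the first failure, the stalled write
pipeline here may be collateral damage from the same slave state rather than an
independent bug.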


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage
 is corrupt with MD5 checksum of 530fc00dbe01d164bde9cfa80d9be7a8 but expecting 
45bf02671e0987a350184f34f4fd9881

Stack Trace:
java.io.IOException: Image file 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage
 is corrupt with MD5 checksum of 530fc00dbe01d164bde9cfa80d9be7a8 but expecting 
45bf02671e0987a350184f34f4fd9881
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
        at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
        at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
        at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
        at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
        at 
org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4tka(TestStorageRestore.java:316)
        at 
org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
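
Here the secondary namenode refuses to use the checkpoint image because the MD5
it computes over the fsimage file does not match the digest it expected. A
plain-JDK sketch for recomputing the digest of a saved image by hand (generic
MD5-over-a-file code, not the Hadoop verification routine):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Recompute the MD5 of a file (e.g. .../dfs/secondary/current/fsimage) so
    // it can be compared with the two digests printed in the error message.
    public class Md5OfFile {
        public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            DigestInputStream in = new DigestInputStream(new FileInputStream(args[0]), md);
            try {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain the stream to feed the digest */ }
            } finally {
                in.close();
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b & 0xff));
            }
            System.out.println(hex + "  " + args[0]);
        }
    }

A mismatch of this kind generally points at an image that was truncated or
modified between being written and being read back; it does not by itself say
which side is wrong.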


