[
https://issues.apache.org/jira/browse/SLIDER-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115087#comment-14115087
]
Steve Loughran commented on SLIDER-377:
---------------------------------------
Test {{mvn test -Dtest=TestConfPersisterLocksHDFS#testAcqAcqRelReadlock}}
{code}
/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]] heartbeating to /127.0.0.1:52834] INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^19 = 524288 entries
2014-08-29 11:24:15,800 [DataXceiver for client DFSClient_NONMAPREDUCE_-1927566207_1 at /127.0.0.1:52858 [Receiving block BP-126326924-192.168.1.138-1409307851902:blk_1073741825_1001]] ERROR datanode.DataNode (DataXceiver.java:run(243)) - 127.0.0.1:52841:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52858 dst: /127.0.0.1:52841
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
	at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
	at org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:67)
	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:344)
	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:292)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:416)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:551)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:771)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:718)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
	at java.lang.Thread.run(Thread.java:745)
2014-08-29 11:24:15,800 [ResponseProcessor for block BP-126326924-192.168.1.138-1409307851902:blk_1073741825_1001] WARN hdfs.DFSClient (DFSOutputStream.java:run(880)) - DFSOutputStream ResponseProcessor exception for block BP-126326924-192.168.1.138-1409307851902:blk_1073741825_1001
java.io.EOFException: Premature EOF: no length prefix available
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2081)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:795)
2014-08-29 11:24:15,808 [Thread-63] INFO test.SliderTestUtils (SliderTestUtils.groovy:describe(71)) -
2014-08-29 11:24:15,808 [Thread-63] INFO test.SliderTestUtils (SliderTestUtils.groovy:describe(72)) - ===============================
2014-08-29 11:24:15,809 [Thread-63] INFO test.SliderTestUtils (SliderTestUtils.groovy:describe(73)) - teardown
2014-08-29 11:24:15,809 [Thread-63] INFO test.SliderTestUtils (SliderTestUtils.groovy:describe(74)) - ===============================
2014-08-29 11:24:15,809 [Thread-63] INFO test.SliderTestUtils (SliderTestUtils.groovy:describe(75)) -
2014-08-29 11:24:15,856 [JUnit] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(375)) - DirectoryScanner: shutdown has been called
2014-08-29 11:24:15,940 [JUnit] INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:0
2014-08-29 11:24:15,942 [JUnit] INFO ipc.Server (Server.java:stop(2398)) - Stopping server on 52845
2014-08-29 11:24:15,946 [IPC Server listener on 52845] INFO ipc.Server (Server.java:run(694)) - Stopping IPC Server listener on 52845
2014-08-29 11:24:15,947 [IPC Server Responder] INFO ipc.Server (Server.java:run(820)) - Stopping IPC Server Responder
2014-08-29 11:24:15,949 [DataNode: [[[DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data1/, [DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]] heartbeating to /127.0.0.1:52834] WARN datanode.DataNode (BPServiceActor.java:offerService(734)) - BPOfferService for Block pool BP-126326924-192.168.1.138-1409307851902 (Datanode Uuid 1011a713-1bc1-40d7-b652-f95dc4aa0b27) service to /127.0.0.1:52834 interrupted
2014-08-29 11:24:15,950 [DataNode: [[[DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data1/, [DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]] heartbeating to /127.0.0.1:52834] WARN datanode.DataNode (BPServiceActor.java:run(857)) - Ending block pool service for: Block pool BP-126326924-192.168.1.138-1409307851902 (Datanode Uuid 1011a713-1bc1-40d7-b652-f95dc4aa0b27) service to /127.0.0.1:52834
2014-08-29 11:24:15,999 [JUnit] INFO ipc.Server (Server.java:stop(2398)) - Stopping server on 52834
2014-08-29 11:24:16,003 [IPC Server listener on 52834] INFO ipc.Server (Server.java:run(694)) - Stopping IPC Server listener on 52834
2014-08-29 11:24:16,004 [IPC Server Responder] INFO ipc.Server (Server.java:run(820)) - Stopping IPC Server Responder
2014-08-29 11:24:16,005 [org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@2247c9cd] WARN blockmanagement.DecommissionManager (DecommissionManager.java:run(78)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
2014-08-29 11:24:16,032 [JUnit] INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:0
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.801 sec <<< FAILURE! - in org.apache.slider.core.persist.TestConfPersisterLocksHDFS
testAcqAcqRelReadlock(org.apache.slider.core.persist.TestConfPersisterLocksHDFS)  Time elapsed: 0.545 sec  <<< FAILURE!
org.codehaus.groovy.runtime.powerassert.PowerAssertionError: assert persister.acquireReadLock()
       |         |
       |         false
       Persister to hdfs://localhost:52834/user/administrator/.slider/cluster/testAcqRelReadlock
	at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:398)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:646)
	at org.apache.slider.core.persist.TestConfPersisterLocksHDFS.testAcqAcqRelReadlock(TestConfPersisterLocksHDFS.groovy:135)
2014-08-29 11:24:16,144 [Thread-32] ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(888)) - Failed to close inode 16391
java.io.IOException: All datanodes 127.0.0.1:52841 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1132)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:930)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:481)
{code}
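The {{UnsatisfiedLinkError}} above means the JVM found no native implementation for {{NativeCrc32.nativeComputeChunkedSumsByteArray}}: either {{hadoop.dll}} is not on {{java.library.path}} at all, or an older build of it is being picked up that predates this entry point. A minimal, Hadoop-free sketch to see what the JVM can resolve (the class name and printed messages here are illustrative, not part of the Slider test suite):

```java
// Hypothetical diagnostic: check java.library.path and try to load the
// native hadoop library directly ("hadoop.dll" on Windows, "libhadoop.so"
// on Linux). Uses only the JDK, so it runs without Hadoop on the classpath.
public class NativeHadoopCheck {
    public static void main(String[] args) {
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        try {
            System.loadLibrary("hadoop");
            System.out.println("native hadoop library loaded");
        } catch (UnsatisfiedLinkError e) {
            // Library missing from the path entirely; a stale copy that
            // loads but lacks newer symbols would instead fail later, at
            // the first call into the missing native method.
            System.out.println("native hadoop library not loadable: "
                    + e.getMessage());
        }
    }
}
```

If the library loads and the error still appears with the full JNI signature, the installed DLL likely predates the branch-2 {{NativeCrc32}} byte-array methods and needs rebuilding against the Hadoop version under test.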
> slider MiniHDFSCluster tests failing on windows+branch2
> -------------------------------------------------------
>
> Key: SLIDER-377
> URL: https://issues.apache.org/jira/browse/SLIDER-377
> Project: Slider
> Issue Type: Sub-task
> Components: test, windows
> Affects Versions: Slider 0.60
> Reporter: Steve Loughran
>
> Tests that use the MiniHDFSCluster are failing on windows with link errors:
> datanodes are failing with JNI linkage errors when calculating CRC32 checksums
--
This message was sent by Atlassian JIRA
(v6.2#6252)