[ https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16455195#comment-16455195 ]

Hudson commented on HDFS-12255:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14070 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14070/])
HDFS-12255. Block Storage: Cblock should generate unique trace ID for the ops (omalley: rev 6a16d7c7ab531668e2c0000fc30bcdd241a81cbd)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/CBlockLocalCache.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/ContainerCacheFlusher.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/BlockWriterTask.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/AsyncBlockWriter.java


> Block Storage: Cblock should generate unique trace ID for the ops
> -----------------------------------------------------------------
>
>                 Key: HDFS-12255
>                 URL: https://issues.apache.org/jira/browse/HDFS-12255
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Major
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12255-HDFS-7240.001.patch, HDFS-12255-HDFS-7240.002.patch, HDFS-12255-HDFS-7240.003.patch
>
>
> CBlock tests fail because CBlock does not generate a unique trace ID for each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>       at org.junit.Assert.fail(Assert.java:88)
>       at org.junit.Assert.failNotEquals(Assert.java:743)
>       at org.junit.Assert.assertEquals(Assert.java:118)
>       at org.junit.Assert.assertEquals(Assert.java:555)
>       at org.junit.Assert.assertEquals(Assert.java:542)
>       at org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is caused by the following error (a sketch of one way to avoid it follows the trace):
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - Command with Trace already exists. Ignoring this command. . Previous Command: java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing of block:44 failed, We have attempted to write this block 7 times to the container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace ID is already executing. Please ensure that trace IDs are not reused. ID: 
>         at org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
>         at org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
>         at org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
>         at org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
>         at org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
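
For context, a minimal sketch of the kind of fix this calls for: give every op its own trace ID instead of an empty or shared one, for example by pairing a stable prefix with a per-writer atomic counter. The class and method names below are illustrative assumptions, not the code from the attached patches.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only; TraceIdGenerator is a hypothetical name, not
// part of the HDFS-12255 patch. It hands out a process-unique trace ID
// per op so that concurrent writer threads never reuse an ID.
public class TraceIdGenerator {
  private final String prefix;                    // e.g. a volume name
  private final AtomicLong counter = new AtomicLong();

  public TraceIdGenerator(String prefix) {
    this.prefix = prefix;
  }

  // Each call yields a fresh ID such as "vol1:42"; incrementAndGet()
  // is atomic, so no two threads can ever observe the same value.
  public String nextTraceId() {
    return prefix + ":" + counter.incrementAndGet();
  }
}
{code}

With a generator like this wired into BlockWriterTask, the duplicate-ID check in XceiverClientHandler shown above would no longer trip, because retries and concurrent writes would each carry a distinct ID.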


