[ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16122364#comment-16122364
 ] 

Anu Engineer commented on HDFS-12255:
-------------------------------------

[~msingh] +1 from me; I will commit after [~vagarychen]'s comments are 
addressed.

bq.  Also, log a warning when UnknownHostException ex happens?
If we decide to log this warning, can we make sure we warn only once, or at 
most a few times? Otherwise, for a client where this lookup fails, the log 
file will be overrun with this warning. So while it might be a good idea to 
warn, we should restrict the number of times we do. We use a similar pattern 
on the datanode side: if we are not able to communicate with SCM, we don't 
warn on each attempt; instead, at a selected frequency, we log how many times 
the call has failed.
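To illustrate the pattern, here is a minimal sketch of frequency-limited warning. The class name and threshold are hypothetical, purely for illustration; this is not the datanode-side code or the attached patch, just one way to warn on the first failure and then only every Nth failure after that:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: decides whether a recurring failure should be
// logged, warning on the first occurrence and then once every
// 'logEvery' occurrences, while keeping a running failure count.
public class ThrottledWarner {
    private final AtomicLong failures = new AtomicLong();
    private final long logEvery;

    public ThrottledWarner(long logEvery) {
        this.logEvery = logEvery;
    }

    /** Record one failure; return true if this one should be logged. */
    public boolean shouldWarn() {
        long n = failures.incrementAndGet();
        return n == 1 || n % logEvery == 0;
    }

    /** Total failures seen so far, to include in the warning message. */
    public long failureCount() {
        return failures.get();
    }
}
```

The caller would then log the UnknownHostException warning (with {{failureCount()}}) only when {{shouldWarn()}} returns true, so a client with a persistently failing lookup emits a handful of lines instead of one per retry.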

Another option is to log this as a trace-level message so that it does not 
reach the log unless we are debugging.
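The fix under review, per the issue below, is to generate a unique trace ID for each op. A minimal sketch of one way to guarantee that (a hypothetical {{TraceIds}} helper, not the code in the attached patch) is a client-unique prefix plus a monotonically increasing counter:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: per-op unique trace IDs. Because the counter
// is incremented atomically, no two calls can return the same ID,
// which avoids the "Duplicate trace ID" rejection in sendCommandAsync.
public class TraceIds {
    private static final AtomicLong COUNTER = new AtomicLong();

    public static String next(String clientPrefix) {
        return clientPrefix + ":" + COUNTER.incrementAndGet();
    }
}
```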


> Block Storage: Cblock should generate unique trace IDs for the ops
> ------------------------------------------------------------------
>
>                 Key: HDFS-12255
>                 URL: https://issues.apache.org/jira/browse/HDFS-12255
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12255-HDFS-7240.001.patch, 
> HDFS-12255-HDFS-7240.002.patch
>
>
> Cblock tests fail because cblock does not generate a unique trace ID for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>       at org.junit.Assert.fail(Assert.java:88)
>       at org.junit.Assert.failNotEquals(Assert.java:743)
>       at org.junit.Assert.assertEquals(Assert.java:118)
>       at org.junit.Assert.assertEquals(Assert.java:555)
>       at org.junit.Assert.assertEquals(Assert.java:542)
>       at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is caused by the following error.
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 times to the 
> container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
>         at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
>         at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
>         at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
>         at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
>         at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
