[ https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16747933#comment-16747933 ]

Kai Xie commented on HADOOP-16049:
----------------------------------

Submitted branch-2-005 with the checkstyle fix.

By the way, I can see that the timeout / unit test failure is caused by a JVM crash in OpenJDK 7.

(taken from 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15811/artifact/out/patch-asflicense.txt)
{code:java}
=======================================================================
==/testptch/hadoop/hadoop-tools/hadoop-distcp/hs_err_pid2744.log
=======================================================================
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (safepoint.cpp:325), pid=2744, tid=139788518442752
#  guarantee(PageArmed == 0) failed: invariant
#
# JRE version: OpenJDK Runtime Environment (7.0_181-b01) (build 1.7.0_181-b01)
# Java VM: OpenJDK 64-Bit Server VM (24.181-b01 mixed mode linux-amd64 
compressed oops)
# Derivative: IcedTea 2.6.14
# Distribution: Ubuntu 14.04 LTS, package 7u181-2.6.14-0ubuntu0.3
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   http://icedtea.classpath.org/bugzilla
#

---------------  T H R E A D  ---------------

Current thread (0x00007f2318272800):  VMThread [stack: 
0x00007f230cec4000,0x00007f230cfc5000] [id=2762]

Stack: [0x00007f230cec4000,0x00007f230cfc5000],  sp=0x00007f230cfc3b10,  free 
space=1022k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x966c25]
V  [libjvm.so+0x49b96e]
V  [libjvm.so+0x872b51]
V  [libjvm.so+0x96b69a]
V  [libjvm.so+0x96baf2]
V  [libjvm.so+0x7da992]

VM_Operation (0x00007f2321b956b0): RevokeBias, mode: safepoint, requested by 
thread 0x00007f231800a000


---------------  P R O C E S S  ---------------

Java Threads: ( => current thread )
  0x00007f231936e800 JavaThread "nioEventLoopGroup-3-32" [_thread_blocked, 
id=3316, stack(0x00007f22f9769000,0x00007f22f986a000)]
  0x00007f231936d000 JavaThread "nioEventLoopGroup-3-31" [_thread_blocked, 
id=3315, stack(0x00007f22f986a000,0x00007f22f996b000)]
  0x00007f231936b000 JavaThread "nioEventLoopGroup-3-30" [_thread_blocked, 
id=3314, stack(0x00007f22f996b000,0x00007f22f9a6c000)]
  0x00007f2319368800 JavaThread "nioEventLoopGroup-3-29" [_thread_blocked, 
id=3313, stack(0x00007f22f9a6c000,0x00007f22f9b6d000)]
  0x00007f2319366800 JavaThread "nioEventLoopGroup-3-28" [_thread_blocked, 
id=3312, stack(0x00007f22f9b6d000,0x00007f22f9c6e000)]
  0x00007f2319364800 JavaThread "nioEventLoopGroup-3-27" [_thread_blocked, 
id=3311, stack(0x00007f22f9c6e000,0x00007f22f9d6f000)]
  0x00007f2319362800 JavaThread "nioEventLoopGroup-3-26" [_thread_blocked, 
id=3310, stack(0x00007f22f9d6f000,0x00007f22f9e70000)]
  0x00007f2319360800 JavaThread "nioEventLoopGroup-3-25" [_thread_blocked, 
id=3309, stack(0x00007f22f9e70000,0x00007f22f9f71000)]
  0x00007f231935e800 JavaThread "nioEventLoopGroup-3-24" [_thread_blocked, 
id=3308, stack(0x00007f22f9f71000,0x00007f22fa072000)]
  0x00007f231935c800 JavaThread "nioEventLoopGroup-3-23" [_thread_blocked, 
id=3307, stack(0x00007f22fa072000,0x00007f22fa173000)]
  0x00007f231935a800 JavaThread "nioEventLoopGroup-3-22" [_thread_blocked, 
id=3306, stack(0x00007f22fa173000,0x00007f22fa274000)]
  0x00007f2319358800 JavaThread "nioEventLoopGroup-3-21" [_thread_blocked, 
id=3305, stack(0x00007f22fa274000,0x00007f22fa375000)]
  0x00007f2319356800 JavaThread "nioEventLoopGroup-3-20" [_thread_blocked, 
id=3304, stack(0x00007f22fa375000,0x00007f22fa476000)]
  0x00007f2319354000 JavaThread "nioEventLoopGroup-3-19" [_thread_blocked, 
id=3303, stack(0x00007f22fa476000,0x00007f22fa577000)]


{code}

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16049
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16049
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: tools/distcp
>    Affects Versions: 2.9.2
>            Reporter: Kai Xie
>            Assignee: Kai Xie
>            Priority: Major
>         Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, 
> HADOOP-16049-branch-2-005.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
>     sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for a chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data and checksum mismatch.
> To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> the default 
> copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not 
> been backported to branch-2 yet.
>  
>  
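The quoted description explains why the 2.9.2 loop keeps re-reading the same bytes: the positioned read always starts at `sourceOffset`, which only advances in APPEND mode. A minimal, self-contained simulation of that loop (hypothetical names, not DistCp's actual code) shows the difference between advancing and not advancing the offset:

```java
import java.util.Arrays;

// Sketch of the copy loop in RetriableFileCopyCommand.copyBytes (simplified).
// With advanceOffset=false (2.9.2 behaviour when blocks per chunk > 0 disables
// APPEND), every positioned read starts at the chunk offset again, so the first
// buffer's worth of bytes is copied over and over.
public class ChunkCopySketch {

    // Stand-in for readBytes(inStream, buf, sourceOffset): a positioned read.
    static int readAt(byte[] src, byte[] buf, long offset) {
        if (offset >= src.length) return -1;
        int n = Math.min(buf.length, src.length - (int) offset);
        System.arraycopy(src, (int) offset, buf, 0, n);
        return n;
    }

    // Copies chunkLen bytes of src starting at chunkOffset, using a buffer
    // smaller than the chunk so several reads are needed.
    static byte[] copyChunk(byte[] src, long chunkOffset, int chunkLen,
                            int bufSize, boolean advanceOffset) {
        byte[] out = new byte[chunkLen];
        byte[] buf = new byte[bufSize];
        long sourceOffset = chunkOffset;
        int written = 0;
        int bytesRead = readAt(src, buf, sourceOffset);
        while (bytesRead >= 0 && written < chunkLen) {
            if (advanceOffset) {
                sourceOffset += bytesRead; // the fix: always advance, not just for APPEND
            }
            int n = Math.min(bytesRead, chunkLen - written);
            System.arraycopy(buf, 0, out, written, n);
            written += n;
            bytesRead = readAt(src, buf, sourceOffset);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] src = new byte[100];
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] expected = Arrays.copyOfRange(src, 40, 80);
        // A chunk with offset != 0 and a buffer smaller than the chunk:
        System.out.println(Arrays.equals(
            copyChunk(src, 40, 40, 16, true), expected));   // true
        System.out.println(Arrays.equals(
            copyChunk(src, 40, 40, 16, false), expected));  // false: first 16 bytes repeated
    }
}
```

This is only an illustration of the failure mode; the actual fix in HADOOP-15292 avoids the positioned read entirely rather than patching the offset bookkeeping.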



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
