[ https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371052#comment-16371052 ]
genericqa commented on HDFS-13056:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 25s{color} | {color:red} root generated 1 new + 1231 unchanged - 0 fixed = 1232 total (was 1231) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 17s{color} | {color:orange} root: The patch generated 175 new + 609 unchanged - 1 fixed = 784 total (was 610) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 40s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13056 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911312/HDFS-13056.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 2d0d14cbbcb3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b0d3c87 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/artifact/out/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/artifact/out/diff-checkstyle-root.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/artifact/out/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/testReport/ |
| Max. process+thread count | 3000 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23137/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Expose file-level composite CRCs in HDFS which are comparable across
> different instances/layouts
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, distcp, erasure-coding, federation, hdfs
> Affects Versions: 3.0.0
> Reporter: Dennis Huo
> Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch,
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch,
> Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf,
> hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and has ever since
> been defined as MD5-of-MD5-of-CRC: per-512-byte chunk CRCs are already
> stored as part of datanode metadata, and the MD5 approach is used to compute
> an aggregate value in a distributed manner, with individual datanodes
> computing the MD5-of-CRCs per block in parallel and the HDFS client
> computing the second-level MD5.
>
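In code form, the aggregation described above has roughly the following shape. This is a simplified sketch, not the actual MD5MD5CRC32FileChecksum code path: the class and method names are illustrative, and plain int arrays stand in for the chunk CRCs that datanodes keep in block metadata.

{code:java}
import java.math.BigInteger;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

public class Md5Md5CrcSketch {

    // Per block: each datanode hashes the concatenation of the big-endian
    // encodings of its stored per-512-byte-chunk CRCs.
    static byte[] md5OfBlockCrcs(int[] chunkCrcs) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (int crc : chunkCrcs) {
            md5.update(new byte[] {
                (byte) (crc >>> 24), (byte) (crc >>> 16),
                (byte) (crc >>> 8), (byte) crc});
        }
        return md5.digest();
    }

    // Client side: the file checksum is the MD5 over the per-block MD5s.
    // Note the result depends on where the chunk and block boundaries fall,
    // which is exactly the shortcoming discussed below.
    static byte[] fileChecksum(List<int[]> blocks) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (int[] blockChunkCrcs : blocks) {
            md5.update(md5OfBlockCrcs(blockChunkCrcs));
        }
        return md5.digest();
    }

    public static void main(String[] args) throws Exception {
        // Two blocks with a few made-up chunk CRCs each.
        List<int[]> blocks = Arrays.asList(
            new int[] {0x1A2B3C4D, 0x55AA55AA},
            new int[] {0x0F0F0F0F});
        System.out.printf("%032x%n", new BigInteger(1, fileChecksum(blocks)));
    }
}
{code}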
> An often-cited shortcoming of this approach is that the resulting
> FileChecksum is sensitive to the internal block-size and chunk-size
> configuration, so different HDFS files with different block/chunk settings
> cannot be compared. More commonly, one might have different HDFS clusters
> which use different block sizes, in which case any data migration won't be
> able to use the FileChecksum for distcp's rsync functionality or for
> verifying end-to-end data integrity (on top of the low-level data integrity
> checks applied at data transfer time).
>
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430
> during the addition of checksum support for striped erasure-coded files;
> while there was some discussion of using CRC composability, it ultimately
> settled on the hierarchical MD5 approach, which adds the further problem
> that checksums of basic replicated files are not comparable to those of
> striped files.
>
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses
> CRC composition to remain completely chunk/block agnostic, and allows
> comparison between striped and replicated files, between different HDFS
> instances, and possibly even between HDFS and other external storage
> systems. This feature can also be added in place while remaining compatible
> with existing block metadata, and it doesn't need to change the normal path
> of chunk verification, so it is minimally invasive. This also means that
> even large preexisting HDFS deployments could adopt this feature to
> retroactively sync data. A detailed design document can be found here:
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
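For reference, CRC composition (computing the checksum of a concatenation directly from the parts' CRCs and the second part's length) is a standard trick over GF(2). Below is a minimal sketch: a Java port of zlib's crc32_combine(), demonstrated with java.util.zip.CRC32 and the plain CRC-32 polynomial. The proposal itself targets Hadoop's CRC32/CRC32C chunk checksums, so the class name and the choice of polynomial here are illustrative only.

{code:java}
import java.util.zip.CRC32;

public class Crc32CombineSketch {

    // Multiply a 32x32 matrix over GF(2) by a 32-bit vector (XOR of the
    // matrix rows selected by the set bits of the vector).
    static long gf2MatrixTimes(long[] mat, long vec) {
        long sum = 0;
        int i = 0;
        while (vec != 0) {
            if ((vec & 1) != 0) {
                sum ^= mat[i];
            }
            vec >>>= 1;
            i++;
        }
        return sum;
    }

    // Square a matrix over GF(2).
    static void gf2MatrixSquare(long[] square, long[] mat) {
        for (int n = 0; n < 32; n++) {
            square[n] = gf2MatrixTimes(mat, mat[n]);
        }
    }

    // Compute crc32(A + B) from crc32(A), crc32(B), and len(B) in bytes,
    // without touching the underlying data: append len2 zero bytes to crc1
    // by repeated matrix squaring, then XOR in crc2.
    static long crc32Combine(long crc1, long crc2, long len2) {
        long[] even = new long[32]; // even-power-of-two zeros operator
        long[] odd = new long[32];  // odd-power-of-two zeros operator

        if (len2 <= 0) {
            return crc1;
        }

        odd[0] = 0xEDB88320L; // reflected CRC-32 polynomial: one zero bit
        long row = 1;
        for (int n = 1; n < 32; n++) {
            odd[n] = row;
            row <<= 1;
        }
        gf2MatrixSquare(even, odd); // operator for two zero bits
        gf2MatrixSquare(odd, even); // operator for four zero bits

        // Apply len2 zero bytes to crc1, one bit of len2 at a time; the
        // first square below yields the operator for one whole zero byte.
        do {
            gf2MatrixSquare(even, odd);
            if ((len2 & 1) != 0) {
                crc1 = gf2MatrixTimes(even, crc1);
            }
            len2 >>= 1;
            if (len2 == 0) {
                break;
            }
            gf2MatrixSquare(odd, even);
            if ((len2 & 1) != 0) {
                crc1 = gf2MatrixTimes(odd, crc1);
            }
            len2 >>= 1;
        } while (len2 != 0);

        return crc1 ^ crc2;
    }

    public static void main(String[] args) {
        byte[] a = "block-or-chunk A".getBytes();
        byte[] b = "block-or-chunk B".getBytes();

        CRC32 whole = new CRC32();
        whole.update(a);
        whole.update(b);

        CRC32 ca = new CRC32();
        ca.update(a);
        CRC32 cb = new CRC32();
        cb.update(b);

        long composed = crc32Combine(ca.getValue(), cb.getValue(), b.length);
        // Prints the same value twice, however the bytes were split.
        System.out.printf("%08x == %08x%n", whole.getValue(), composed);
    }
}
{code}

Since composition needs only per-part CRCs and lengths, the composed value is independent of how the byte stream was split, which is what makes a composite-CRC FileChecksum comparable across block sizes, striping layouts, and potentially storage systems.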