[
https://issues.apache.org/jira/browse/HDFS-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975935#comment-15975935
]
Hadoop QA commented on HDFS-11402:
----------------------------------
| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 44s | trunk passed |
| +1 | compile | 1m 39s | trunk passed |
| +1 | checkstyle | 0m 53s | trunk passed |
| +1 | mvnsite | 1m 41s | trunk passed |
| +1 | mvneclipse | 0m 31s | trunk passed |
| -1 | findbugs | 1m 35s | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings. |
| -1 | findbugs | 2m 6s | hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. |
| +1 | javadoc | 1m 14s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 55s | the patch passed |
| +1 | compile | 2m 2s | the patch passed |
| +1 | javac | 2m 2s | the patch passed |
| -0 | checkstyle | 0m 56s | hadoop-hdfs-project: The patch generated 4 new + 788 unchanged - 4 fixed = 792 total (was 792) |
| +1 | mvnsite | 1m 55s | the patch passed |
| +1 | mvneclipse | 0m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 3m 57s | the patch passed |
| +1 | javadoc | 1m 14s | the patch passed |
| +1 | unit | 1m 26s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 81m 1s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 122m 12s | |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-11402 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864169/HDFS-11402.05.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 5e0352be5ef0 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c154935 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19151/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> HDFS Snapshots should capture point-in-time copies of OPEN files
> ----------------------------------------------------------------
>
> Key: HDFS-11402
> URL: https://issues.apache.org/jira/browse/HDFS-11402
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 2.6.0
> Reporter: Manoj Govindassamy
> Assignee: Manoj Govindassamy
> Attachments: HDFS-11402.01.patch, HDFS-11402.02.patch,
> HDFS-11402.03.patch, HDFS-11402.04.patch, HDFS-11402.05.patch
>
>
> *Problem:*
> 1. When files are being written and HDFS Snapshots are taken in parallel, the
> snapshots do capture these files, but the files still open for write do not
> have their point-in-time lengths captured. That is, open files are not frozen
> in HDFS Snapshots: they grow or shrink in length, just like the original file,
> even after the snapshot time.
> 2. At file close, or on any other metadata modification of these files, HDFS
> reconciles the file length and records the modification in the last taken
> snapshot. All previously taken snapshots continue to hold those open files
> with no modification recorded, so they all fall through to the final
> modification record in the last snapshot. Thus, after the file is closed, the
> file lengths in all those snapshots end up the same.
> Assume File1 is opened for write and a total of 1MB is written to it. While
> the writes are happening, snapshots are taken in parallel.
> {noformat}
> |---Time---T1-----------T2-------------T3----------------T4------>
> |-----------------------Snap1----------Snap2-------------Snap3--->
> |---File1.open---write---------write-----------close------------->
> {noformat}
> Then at time,
> T2:
> Snap1.File1.length = 0
> T3:
> Snap1.File1.length = 0
> Snap2.File1.length = 0
> <File1 write completed and closed>
> T4:
> Snap1.File1.length = 1MB
> Snap2.File1.length = 1MB
> Snap3.File1.length = 1MB
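The fall-through behavior above can be reproduced with a small standalone model. This is toy Java, not HDFS code, and all class and method names here are invented for illustration: because an open file gets no diff at snapshot time, and close() reconciles the final length into the last snapshot only, every earlier snapshot resolves to the same final length.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of FileDiff-style length recording (NOT HDFS code; names invented).
// A snapshot of an open file records no diff; reading a snapshot falls through
// to the next recorded diff, or to the live length if none exists.
class OpenFileModel {
    long liveLength = 0;
    // snapshot name -> recorded length (null = no diff recorded at snapshot time)
    final Map<String, Long> diffs = new LinkedHashMap<>();

    void takeSnapshot(String name) { diffs.put(name, null); } // open file: no diff
    void write(long bytes)         { liveLength += bytes; }

    // On close, reconciliation records the final length only in the
    // most recent snapshot, mirroring the behavior described above.
    void close() {
        String last = null;
        for (String s : diffs.keySet()) last = s;
        if (last != null) diffs.put(last, liveLength);
    }

    long lengthInSnapshot(String name) {
        boolean reached = false;
        for (Map.Entry<String, Long> e : diffs.entrySet()) {
            if (e.getKey().equals(name)) reached = true;
            if (reached && e.getValue() != null) return e.getValue();
        }
        return liveLength; // no later diff: fall through to the live file
    }
}

public class SnapshotLengthDemo {
    public static void main(String[] args) {
        OpenFileModel f = new OpenFileModel();
        f.takeSnapshot("Snap1");       // T2: file length is 0, nothing recorded
        f.write(512 * 1024);
        f.takeSnapshot("Snap2");       // T3
        f.write(512 * 1024);
        f.takeSnapshot("Snap3");
        f.close();                     // T4: 1MB reconciled into Snap3 only
        for (String s : new String[] {"Snap1", "Snap2", "Snap3"}) {
            // All three snapshots report the final 1MB length.
            System.out.println(s + ".File1.length = " + f.lengthInSnapshot(s));
        }
    }
}
```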
> *Proposal*
> 1. At the time of taking Snapshot, {{SnapshotManager#createSnapshot}} can
> optionally request {{DirectorySnapshottableFeature#addSnapshot}} to freeze
> open files.
> 2. {{DirectorySnapshottableFeature#addSnapshot}} can consult the
> {{LeaseManager}} and get a list of {{INodesInPath}} for all open files under
> the snapshot directory.
> 3. After the snapshot creation, diff creation, and modification-time update,
> {{DirectorySnapshottableFeature#addSnapshot}} can invoke
> {{INodeFile#recordModification}} for each of the open files. This way, the
> snapshot just taken will have a {{FileDiff}} with {{fileSize}} captured for
> each of the open files.
> 4. The above model follows the current Snapshot and Diff protocols and doesn't
> introduce any new on-disk formats. So, I don't think we will need any new
> FSImage Loader/Saver changes for Snapshots.
> 5. One of the design goals of HDFS Snapshots was the ability to take any
> number of snapshots in O(1) time. Although the LeaseManager holds all open
> files with leases in an in-memory map, an iteration is still needed to prune
> the relevant open files and then run recordModification on each of them. So it
> will not be strictly O(1) with the above proposal, but the increase is only
> marginal, as the new order will be O(open_files_under_snap_dir). To avoid
> changing the HDFS Snapshot behavior for open files, and its time complexity,
> this improvement can be made available under a new config
> {{"dfs.namenode.snapshot.freeze.openfiles"}}, which by default can be
> {{false}}.
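If the proposal is adopted, the opt-in could look like the following hdfs-site.xml fragment. Note that {{dfs.namenode.snapshot.freeze.openfiles}} is only the key proposed in this issue, not an existing HDFS setting, and the description text is a sketch of the intended semantics:

```xml
<!-- Proposed (hypothetical) opt-in key from HDFS-11402; not an existing
     hdfs-default.xml setting as of this patch. -->
<property>
  <name>dfs.namenode.snapshot.freeze.openfiles</name>
  <value>true</value>
  <description>
    When true, taking a snapshot also records a FileDiff (with fileSize)
    for every file currently open for write under the snapshottable
    directory, freezing its point-in-time length in that snapshot.
    Defaults to false, preserving the current snapshot behavior.
  </description>
</property>
```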
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)