[ https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113385#comment-16113385 ]
Hadoop QA commented on HADOOP-12077:
------------------------------------

*-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 15s | trunk passed |
| +1 | compile | 15m 57s | trunk passed |
| +1 | checkstyle | 2m 5s | trunk passed |
| +1 | mvnsite | 2m 34s | trunk passed |
| -1 | findbugs | 1m 54s | hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant Findbugs warnings. |
| +1 | javadoc | 1m 42s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 40s | the patch passed |
| +1 | compile | 12m 5s | the patch passed |
| -1 | javac | 12m 5s | root generated 5 new + 1418 unchanged - 0 fixed = 1423 total (was 1418) |
| -0 | checkstyle | 2m 2s | root: The patch generated 11 new + 159 unchanged - 5 fixed = 170 total (was 164) |
| +1 | mvnsite | 2m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 1m 46s | hadoop-common-project/hadoop-common generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) |
| +1 | javadoc | 2m 15s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 7m 22s | hadoop-common in the patch failed. |
| -1 | unit | 86m 27s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 40s | The patch does not generate ASF License warnings. |
| | | 184m 20s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Boxing/unboxing to parse a primitive in org.apache.hadoop.fs.viewfs.NflyFSystem.createFileSystem(URI[], Configuration, String) at NflyFSystem.java:[line 933] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$MRNflyNode doesn't override org.apache.hadoop.net.NodeBase.equals(Object) at NflyFSystem.java:[line 1] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$NflyNode doesn't override org.apache.hadoop.net.NodeBase.equals(Object) at NflyFSystem.java:[line 1] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$NflyStatus overrides equals in org.apache.hadoop.fs.FileStatus and may not be symmetric at NflyFSystem.java:[lines 523-526] |
| | Class org.apache.hadoop.fs.viewfs.NflyFSystem$NflyStatus defines non-transient non-serializable instance field realFs in NflyFSystem.java |
| Failed junit tests | hadoop.net.TestDNS |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12077 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880113/HADOOP-12077.006.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3876c19b6643 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5d256c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/diff-checkstyle-root.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12942/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
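Most of the new FindBugs items above concern the equals() contract on the NflyFSystem node classes. A minimal illustration of the missing-override problem and a getClass()-based fix, using hypothetical stand-ins (Node and NflyNode below are simplified sketches, not the real org.apache.hadoop.net.NodeBase hierarchy):

```java
import java.util.Objects;

// Base class with an equals() based only on its own state. Using getClass()
// rather than instanceof keeps equality symmetric across subclasses, which is
// the property FindBugs flags when a subclass adds state without overriding.
class Node {
    final String location;
    Node(String location) { this.location = location; }
    @Override public boolean equals(Object o) {
        if (o == null || getClass() != o.getClass()) return false;
        return ((Node) o).location.equals(location);
    }
    @Override public int hashCode() { return location.hashCode(); }
}

// Subclass with extra state: it must override equals()/hashCode(), otherwise
// two nodes with different URIs would compare equal via the inherited equals().
class NflyNode extends Node {
    final String uri;
    NflyNode(String location, String uri) { super(location); this.uri = uri; }
    @Override public boolean equals(Object o) {
        if (!super.equals(o)) return false; // also rejects plain Node instances
        return ((NflyNode) o).uri.equals(uri);
    }
    @Override public int hashCode() { return Objects.hash(location, uri); }
}
```

With this pattern, Node-vs-NflyNode comparisons are false in both directions, avoiding the "may not be symmetric" warning as well.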
> Provide a multi-URI replication Inode for ViewFs
> ------------------------------------------------
>
>                  Key: HADOOP-12077
>                  URL: https://issues.apache.org/jira/browse/HADOOP-12077
>              Project: Hadoop Common
>           Issue Type: New Feature
>           Components: fs
>             Reporter: Gera Shegalov
>             Assignee: Gera Shegalov
>          Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, HADOOP-12077.006.patch
>
> This JIRA provides simple "replication" capabilities for applications that maintain logically equivalent paths in multiple locations for caching or failover (e.g., S3 and HDFS). We noticed a common HDFS usage pattern in our applications: they host their data on some logical cluster C, with corresponding HDFS clusters in multiple datacenters. When the application runs in DC1, it prefers to read from C in DC1, and it prefers to fail over to C in DC2 if the application is migrated to DC2 or when C in DC1 is unavailable. New application data versions are created periodically and relatively infrequently.
>
> To address many common scenarios in a general fashion, and to avoid unnecessary code duplication, we implement this functionality in ViewFs (our default FileSystem spanning all clusters in all datacenters) in a project code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points to a single URI via ChRootedFileSystem. We therefore introduce a new type of link that points to a list of URIs, each of which is wrapped in a ChRootedFileSystem. A typical usage: /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of ChRootedFileSystem instances is fronted by the Nfly filesystem object that is actually used for the mount point/Inode. The Nfly filesystem backs a single logical path /nfly/C/user/<user>/path with multiple physical paths.
>
> The Nfly filesystem supports setting minReplication.
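The multi-URI link above can be sketched without any Hadoop dependencies. The NflyLink class below is an illustrative stand-in, not the real ViewFs/ChRootedFileSystem wiring; it assumes only that a link value is a comma-separated list of URIs and that each logical path resolves against every destination:

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

// Sketch: one logical Nfly mount point backed by multiple physical URIs.
class NflyLink {
    final List<URI> targets; // e.g. hdfs://dc1-nn/C/user, hdfs://dc2-nn/C/user

    NflyLink(String commaSeparatedUris) {
        targets = new ArrayList<>();
        for (String t : commaSeparatedUris.split(",")) {
            targets.add(URI.create(t.trim()));
        }
    }

    // Resolve a path relative to the mount point against every destination,
    // mimicking one ChRootedFileSystem per target URI.
    List<URI> resolve(String relPath) {
        List<URI> physical = new ArrayList<>();
        for (URI t : targets) {
            physical.add(URI.create(t.toString() + relPath));
        }
        return physical;
    }
}
```

For example, resolving "/alice/data" against a two-destination link yields one physical URI per datacenter, which is what the write and read paths below fan out over.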
> As long as the number of URIs on which an update has succeeded is greater than or equal to minReplication, exceptions are only logged, not thrown. Each update operation is currently executed serially (client-bandwidth-driven parallelism will be added later).
>
> A file create/write:
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted filesystem.
> # Returns an FSDataOutputStream that wraps the output streams returned by step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and the files are renamed from _nfly_tmp_file to file. All files receive the same mtime, corresponding to the client system time at the beginning of this step.
> # If at least minReplication destinations have gone through steps 1-4 without failures, the transaction is considered logically committed; otherwise a best-effort cleanup of the temporary files is attempted.
>
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node. We sort the Inode URIs using NetworkTopology by their authorities, which are typically host names in simple HDFS URIs. If the authority is missing, as with the local file:/// scheme, the local host name (InetAddress.getLocalHost()) is assumed. This ensures that the local file system is always the closest one to the reader. For our Hadoop 2 HDFS URIs that are based on nameservice ids instead of hostnames, it is easy to adjust the topology script, since our nameservice ids already contain the datacenter. As for rack and node we can simply output any string, such as /DC/rack-nsid/node-nsid, since we only care about datacenter locality for such filesystem clients.
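The numbered create/write steps above can be sketched as follows, with local directories standing in for the chrooted destination filesystems. Class, method, and property names here are hypothetical; the real logic lives in NflyFSystem:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch of the Nfly create/write/commit protocol over local directories.
class NflyWriteSketch {
    static final String TMP_PREFIX = "_nfly_tmp_";

    // Write data to fileName under every destination dir; commit by renaming
    // the invisible temp file, requiring at least minReplication successes.
    static int write(List<File> destDirs, String fileName, byte[] data,
                     int minReplication) throws IOException {
        // Steps 1-3: open one temp stream per destination, fan out the bytes.
        List<File> tmpFiles = new ArrayList<>();
        List<OutputStream> streams = new ArrayList<>();
        for (File dir : destDirs) {
            File tmp = new File(dir, TMP_PREFIX + fileName);
            tmpFiles.add(tmp);
            streams.add(new FileOutputStream(tmp));
        }
        for (OutputStream out : streams) {
            out.write(data);
        }
        // Step 4: close all streams, rename temp -> final, stamp every
        // replica with the same client-side mtime.
        int committed = 0;
        long mtime = System.currentTimeMillis();
        for (int i = 0; i < streams.size(); i++) {
            streams.get(i).close();
            File finalFile = new File(destDirs.get(i), fileName);
            if (tmpFiles.get(i).renameTo(finalFile)) {
                finalFile.setLastModified(mtime);
                committed++;
            }
        }
        // Step 5: logically committed only if enough replicas succeeded;
        // otherwise best-effort cleanup of the leftover temp files.
        if (committed < minReplication) {
            for (File tmp : tmpFiles) {
                tmp.delete();
            }
            throw new IOException("only " + committed + " of " + destDirs.size()
                + " replicas committed, need " + minReplication);
        }
        return committed;
    }
}
```

The temp-then-rename step is what makes a partially written replica invisible to readers, and the single client-side mtime is what later lets the readMostRecent policy compare replica freshness.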
> There are two policies/additions to the read call path that make reads more expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks the mtime for the path under all URIs and sorts them from most recent to least recent. Nfly then sorts the set of most recent URIs topologically, in the same manner as described above.
> - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all underlying destinations. With repairOnRead, the Nfly filesystem additionally attempts to refresh destinations holding a missing or stale version of the path, using the nearest available most recent destination.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)