[
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15238332#comment-15238332
]
Hadoop QA commented on HADOOP-12077:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s {color} | {color:red} root: patch generated 2 new + 169 unchanged - 4 fixed = 171 total (was 173) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hadoop-common-project/hadoop-common generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 51s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 0s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 44s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 1s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 243m 2s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Boxing/unboxing to parse a primitive org.apache.hadoop.fs.viewfs.NflyFSystem.createFileSystem(URI[], Configuration, String) At NflyFSystem.java:org.apache.hadoop.fs.viewfs.NflyFSystem.createFileSystem(URI[], Configuration, String) At NflyFSystem.java:[line 933] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$MRNflyNode doesn't override org.apache.hadoop.net.NodeBase.equals(Object) At NflyFSystem.java:At NflyFSystem.java:[line 1] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$NflyNode doesn't override org.apache.hadoop.net.NodeBase.equals(Object) At NflyFSystem.java:At NflyFSystem.java:[line 1] |
| | org.apache.hadoop.fs.viewfs.NflyFSystem$NflyStatus overrides equals in org.apache.hadoop.fs.FileStatus and may not be symmetric At NflyFSystem.java:and may not be symmetric At NflyFSystem.java:[lines 523-526] |
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
| | hadoop.hdfs.TestFileAppend |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| JDK v1.8.0_77 Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
| | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
| | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestCrcCorruption |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.TestHFlush |
| | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| JDK v1.7.0_95 Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
| | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12798359/HADOOP-12077.005.patch |
| JIRA Issue | HADOOP-12077 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 814d7da5e2e0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6ef4287 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/diff-checkstyle-root.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt |
| JDK v1.7.0_95 Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9074/console |
| Powered by | Apache Yetus 0.2.0 http://yetus.apache.org |
This message was automatically generated.
> Provide a multi-URI replication Inode for ViewFs
> ------------------------------------------------
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch,
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications
> that maintain logically equivalent paths in multiple locations for caching or
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern
> in our applications. They host their data on some logical cluster C. There
> are corresponding HDFS clusters in multiple datacenters. When the application
> runs in DC1, it prefers to read from C in DC1, and it prefers to fail over
> to C in DC2 if the application is migrated to DC2 or when C in
> DC1 is unavailable. New application data versions are created
> periodically/relatively infrequently.
> In order to address many common scenarios in a general fashion, and to avoid
> unnecessary code duplication, we implement this functionality in ViewFs (our
> default FileSystem spanning all clusters in all datacenters) in a project
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type
> of link that points to a list of URIs, each of which is wrapped in a
> ChRootedFileSystem. A typical usage:
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is
> actually used for the mount point/Inode. The Nfly filesystem backs a single
> logical path /nfly/C/user/<user>/path with multiple physical paths.
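An nfly link of this shape might be declared in the ViewFs mount table roughly as below. The property names shown are illustrative placeholders only, not the committed configuration syntax:

```properties
# Hypothetical ViewFs mount table: an ordinary single-URI link and an
# nfly link whose target is a comma-separated list of per-DC URIs.
fs.viewfs.mounttable.C.link./user=hdfs://DC1-nn/C/user
fs.viewfs.mounttable.C.linkNfly./nfly/C/user=hdfs://DC1-nn/C/user,hdfs://DC2-nn/C/user
```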
> The Nfly filesystem supports setting minReplication. As long as the number of
> URIs on which an update has succeeded is greater than or equal to
> minReplication, exceptions are only logged rather than thrown. Each update
> operation is currently executed serially (client-bandwidth-driven parallelism
> will be added later).
> A file create/write:
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted
> filesystem.
> # Returns a FSDataOutputStream that wraps the output streams returned by
> step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and the
> files are renamed from _nfly_tmp_file to file. All files receive the same
> mtime corresponding to the client system time as of the beginning of this
> step.
> # If at least minReplication destinations have gone through steps 1-4 without
> failures, the transaction is considered logically committed; otherwise a
> best-effort attempt to clean up the temporary files is made.
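The create/write protocol above can be sketched as follows. This is a simplified illustration using plain java.nio on local directories in place of ChRootedFileSystem instances; the class and member names (NflySketch, NflyOutputStream, TMP_PREFIX) are hypothetical, not the patch's actual code:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;
import java.util.ArrayList;
import java.util.List;

public class NflySketch {
    static final String TMP_PREFIX = "_nfly_tmp_";

    /** One wrapped temp stream per destination (steps 1 and 2 above). */
    static class NflyOutputStream extends OutputStream {
        private final List<OutputStream> streams = new ArrayList<>();
        private final List<Path> tmpPaths = new ArrayList<>();
        private final String fileName;
        private final int minReplication;

        NflyOutputStream(List<Path> destDirs, String fileName,
                         int minReplication) throws IOException {
            this.fileName = fileName;
            this.minReplication = minReplication;
            for (Path dir : destDirs) {
                // Step 1: an invisible temporary file in each destination.
                Path tmp = dir.resolve(TMP_PREFIX + fileName);
                streams.add(Files.newOutputStream(tmp));
                tmpPaths.add(tmp);
            }
        }

        @Override
        public void write(int b) throws IOException {
            // Step 3: forward every write to each underlying stream.
            for (OutputStream out : streams) {
                out.write(b);
            }
        }

        @Override
        public void close() throws IOException {
            // Step 4: close all streams and rename tmp -> final name;
            // every replica gets the same client-side mtime.
            FileTime mtime = FileTime.fromMillis(System.currentTimeMillis());
            int committed = 0;
            for (int i = 0; i < streams.size(); i++) {
                try {
                    streams.get(i).close();
                    Path dest = tmpPaths.get(i).resolveSibling(fileName);
                    Files.move(tmpPaths.get(i), dest,
                        StandardCopyOption.REPLACE_EXISTING);
                    Files.setLastModifiedTime(dest, mtime);
                    committed++;
                } catch (IOException e) {
                    // Per-destination failures are tolerated below the
                    // minReplication threshold (log-and-continue in spirit).
                }
            }
            // Step 5: commit check, best-effort cleanup on failure.
            if (committed < minReplication) {
                for (Path tmp : tmpPaths) {
                    Files.deleteIfExists(tmp);
                }
                throw new IOException("only " + committed + " of "
                    + minReplication + " required replicas committed");
            }
        }
    }
}
```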
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node.
> We sort Inode URIs using NetworkTopology by their authorities. These are
> typically host names in simple HDFS URIs. If the authority is missing, as is
> the case with the local file:///, the local host name
> (InetAddress.getLocalHost()) is assumed. This ensures that the local file
> system is always the closest one to the reader in this approach. For our
> Hadoop 2 hdfs URIs that are based on nameservice ids instead of hostnames, it
> is very easy to adjust the topology script, since our nameservice ids already
> contain the datacenter. As for rack and node, we can simply output any string
> such as /DC/rack-nsid/node-nsid, since we only care about datacenter-locality
> for such filesystem clients.
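The authority fallback described above can be sketched as follows; a minimal illustration, with a hypothetical class and method name rather than the patch's code:

```java
import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;

public class NflyAuthority {
    /**
     * Returns the authority used as the topology key for a destination URI.
     * URIs with no authority (e.g. file:///) fall back to the local host
     * name, so the local filesystem always sorts closest to the reader.
     */
    static String topologyKey(URI uri) throws UnknownHostException {
        String authority = uri.getAuthority();
        return authority != null
            ? authority
            : InetAddress.getLocalHost().getHostName();
    }
}
```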
> There are 2 policies/additions to the read call path that make it more
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks the mtime
> of the path under all URIs and sorts them from most recent to least recent.
> Nfly then sorts the set of most recent URIs topologically in the same manner
> as described above.
> - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all
> underlying destinations. With repairOnRead, the Nfly filesystem additionally
> attempts to refresh destinations where the path is missing or stale, copying
> from the nearest available most recent destination.
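The readMostRecent filtering step can be sketched as the helper below, which keeps only the destinations whose observed mtime equals the maximum; the survivors would then be sorted topologically as described earlier. This is an illustrative helper with hypothetical names, not the patch's code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class NflyMostRecent {
    /**
     * Given a map from destination to the mtime it reports for a path,
     * returns only the destinations holding the most recent version.
     */
    static <T> List<T> mostRecent(Map<T, Long> mtimes) {
        long max = Collections.max(mtimes.values());
        List<T> result = new ArrayList<>();
        for (Map.Entry<T, Long> e : mtimes.entrySet()) {
            if (e.getValue() == max) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```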
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)