[
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151770#comment-16151770
]
Hadoop QA commented on HADOOP-12077:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m
0s{color} | {color:green} The patch appears to include 3 new or modified test
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 57s{color}
| {color:red} root generated 1 new + 1285 unchanged - 0 fixed = 1286 total (was
1285) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
2m 5s{color} | {color:orange} root: The patch generated 2 new + 152 unchanged
- 11 fixed = 154 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s{color}
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 4s{color}
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
39s{color} | {color:green} The patch does not generate ASF License warnings.
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 40s{color} |
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
| | hadoop.security.TestKDiag |
| | hadoop.hdfs.TestBlockStoragePolicy |
| | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
| | hadoop.hdfs.TestReplaceDatanodeOnFailure |
| | hadoop.hdfs.TestReadStripedFileWithDecoding |
| | hadoop.hdfs.TestEncryptionZonesWithHA |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestFileAppendRestart |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-12077 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/12885126/HADOOP-12077.010.patch
|
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite
unit findbugs checkstyle |
| uname | Linux 248e8a593816 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh
|
| git revision | trunk / 275980b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javac |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/artifact/patchprocess/diff-compile-javac-root.txt
|
| checkstyle |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/artifact/patchprocess/diff-checkstyle-root.txt
|
| unit |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
|
| unit |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
|
| Test Results |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/testReport/ |
| modules | C: hadoop-common-project/hadoop-common
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output |
https://builds.apache.org/job/PreCommit-HADOOP-Build/13158/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Provide a multi-URI replication Inode for ViewFs
> ------------------------------------------------
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch,
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch,
> HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch,
> HADOOP-12077.009.patch, HADOOP-12077.010.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications
> that maintain logically equivalent paths in multiple locations for caching or
> failover (e.g., S3 and HDFS). We noticed a simple, common HDFS usage pattern
> in our applications. They host their data on some logical cluster C, and
> there are corresponding HDFS clusters in multiple datacenters. When the
> application runs in DC1, it prefers to read from C in DC1, and it prefers to
> fail over to C in DC2 if it is migrated to DC2 or when C in DC1 is
> unavailable. New application data versions are created periodically and
> relatively infrequently.
> In order to address many common scenarios in a general fashion, and to avoid
> unnecessary code duplication, we implement this functionality in ViewFs (our
> default FileSystem spanning all clusters in all datacenters) in a project
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type
> of link that points to a list of URIs, each of which is wrapped in a
> ChRootedFileSystem. A typical usage:
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is
> actually used for the mount point/Inode. The Nfly filesystem thus backs a
> single logical path /nfly/C/user/<user>/path with multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number
> of URIs on which an update has succeeded is greater than or equal to
> minReplication, exceptions are only logged, not thrown. Each update
> operation is currently executed serially (client-bandwidth-driven
> parallelism will be added later).
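An Nfly link like the one above would be declared through Hadoop Configuration properties alongside the usual ViewFs mount-table entries. The following is a minimal sketch only; the property key format shown here (`linkNfly`, `minReplication`) is hypothetical and illustrative, not necessarily the format the patch actually defines:

```xml
<configuration>
  <!-- Hypothetical Nfly link: one logical path backed by two datacenters.
       Key names are illustrative, not the committed format. -->
  <property>
    <name>fs.viewfs.mounttable.global.linkNfly./C/user</name>
    <value>hdfs://ns-dc1/C/user,hdfs://ns-dc2/C/user</value>
  </property>
  <property>
    <!-- An update succeeds while at least this many destinations commit. -->
    <name>fs.viewfs.mounttable.global.linkNfly.minReplication</name>
    <value>2</value>
  </property>
</configuration>
```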
> A file create/write:
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted
> filesystem.
> # Returns an FSDataOutputStream that wraps the output streams returned by
> step 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created in step 2, all n streams are closed, and
> the files are renamed from _nfly_tmp_file to file. All files receive the
> same mtime corresponding to the client system time as of the beginning of
> this step.
> # If at least minReplication destinations have gone through steps 1-4
> without failures, the transaction is considered logically committed;
> otherwise, a best-effort cleanup of the temporary files is attempted.
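The write path above can be sketched, very roughly, as a fan-out over the destination streams. This is an illustrative sketch using plain java.io streams, not the patch's code; the class and method names are invented for the example, and minReplication gates whether the commit is treated as successful:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;

// Illustrative sketch (not the patch's code) of the create/write steps:
// forward every write to all destination streams serially, count how many
// destinations complete without error, and treat the transaction as committed
// only when at least minReplication of them succeeded.
public class NflyWriteSketch {

    /** Returns the number of destinations that committed successfully. */
    public static int fanOutWrite(List<? extends OutputStream> destinations,
                                  byte[] data, int minReplication) {
        int succeeded = 0;
        for (OutputStream out : destinations) {  // serial, as in the description
            try {
                out.write(data);  // step 3: writes forwarded to each stream
                out.close();      // step 4: close; the real code then renames
                                  //         _nfly_tmp_file to the final name
                succeeded++;
            } catch (IOException e) {
                // while the quorum holds, failures are only logged, not thrown
            }
        }
        if (succeeded < minReplication) {
            // below quorum: the real filesystem attempts best-effort cleanup
            // of the temporary files and surfaces the failure to the caller
            throw new IllegalStateException("only " + succeeded + " of "
                + destinations.size() + " destinations committed");
        }
        return succeeded;
    }
}
```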
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node.
> We sort Inode URIs using NetworkTopology by their authorities, which are
> typically host names in simple HDFS URIs. If the authority is missing, as is
> the case with the local file:///, the local host name
> (InetAddress.getLocalHost()) is assumed. This ensures that the local file
> system is always the closest one to the reader in this approach. For our
> Hadoop 2 hdfs URIs that are based on nameservice ids instead of hostnames,
> it is very easy to adjust the topology script since our nameservice ids
> already contain the datacenter. As for rack and node, we can simply output
> any string such as /DC/rack-nsid/node-nsid, since we only care about
> datacenter-locality for such filesystem clients.
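The locality ordering can be illustrated with a small sketch: resolve each URI's authority (falling back to the local host for file:///), map it to a /DC/rack/node path via a caller-supplied topology function, and sort closest-first by the length of the shared path prefix with the reader's location. All names here are hypothetical; the real code uses Hadoop's NetworkTopology rather than this prefix comparison:

```java
import java.net.URI;
import java.util.*;

// Illustrative sketch of the locality sort, not the patch's implementation.
public class NflyLocalitySketch {

    // Number of leading path components two topology paths share.
    static int sharedPrefix(String a, String b) {
        String[] pa = a.split("/"), pb = b.split("/");
        int n = 0;
        while (n < pa.length && n < pb.length && pa[n].equals(pb[n])) n++;
        return n;
    }

    /** Sorts URIs nearest-first relative to the reader's topology location. */
    public static List<String> sortByDistance(List<String> uris,
            Map<String, String> topology, String readerLocation) {
        List<String> sorted = new ArrayList<>(uris);
        sorted.sort(Comparator.comparingInt((String u) -> {
            String auth = URI.create(u).getAuthority();
            if (auth == null || auth.isEmpty()) {
                auth = "localhost";  // the file:/// case described above
            }
            String loc = topology.getOrDefault(auth, "/default/rack");
            // more shared components means closer, so negate for ascending sort
            return -sharedPrefix(loc, readerLocation);
        }));
        return sorted;
    }
}
```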
> There are two policies/additions to the read call path that make it more
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks the mtime
> of the path under all URIs and sorts them from most recent to least recent.
> Nfly then sorts the set of most recent URIs topologically in the same manner
> as described above.
> - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all
> underlying destinations. With repairOnRead, the Nfly filesystem additionally
> attempts to refresh destinations where the path is missing or stale, using
> the nearest available most recent destination.
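The selection step of readMostRecent can be sketched as follows. This is an invented, self-contained example (class and method names are hypothetical): given the mtimes gathered from all destinations, keep only the URIs carrying the newest mtime; the real filesystem would then order that subset by network distance as described above.

```java
import java.util.*;

// Illustrative sketch of the readMostRecent selection, not the patch's code.
public class NflyReadSketch {

    /** Returns the URIs whose mtime equals the newest observed mtime. */
    public static List<String> mostRecent(Map<String, Long> mtimeByUri) {
        long newest = Collections.max(mtimeByUri.values());
        List<String> freshest = new ArrayList<>();
        for (Map.Entry<String, Long> e : mtimeByUri.entrySet()) {
            if (e.getValue() == newest) {   // auto-unboxed long comparison
                freshest.add(e.getKey());   // candidate for nearest-first sort
            }
        }
        Collections.sort(freshest);  // deterministic order for the sketch
        return freshest;
    }
}
```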
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)