[jira] [Commented] (HDFS-13158) Fix Spelling Mistakes - DECOMISSIONED

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759293#comment-16759293
 ] 

Hadoop QA commented on HDFS-13158:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}296m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-02-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759288#comment-16759288
 ] 

Ayush Saxena commented on HDFS-13909:
-

Is anybody working on this?

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> Currently the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective, and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14172) Avoid NPE when SectionName#fromString() returns null

2019-02-02 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759264#comment-16759264
 ] 

Xiang Li commented on HDFS-14172:
-

[~linyiqun] [~adam.antal], could you please review patch v003 when you get a 
chance?

> Avoid NPE when SectionName#fromString() returns null
> 
>
> Key: HDFS-14172
> URL: https://issues.apache.org/jira/browse/HDFS-14172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HADOOP-14172.000.patch, HADOOP-14172.001.patch, 
> HADOOP-14172.002.patch, HADOOP-14172.003.patch
>
>
> In FSImageFormatProtobuf.SectionName#fromString(), as follows:
> {code:java}
> public static SectionName fromString(String name) {
>   for (SectionName n : values) {
> if (n.name.equals(name))
>   return n;
>   }
>   return null;
> }
> {code}
> When the code meets an unknown section from the fsimage, the function 
> returns null. Callers operate on the return value in a switch clause, as in 
> FSImageFormatProtobuf.Loader#loadInternal():
> {code:java}
> switch (SectionName.fromString(n))
> {code}
> An NPE will be thrown here.
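One way to avoid the null return is a default constant in the enum. A minimal, self-contained sketch; the class name and the enum constants here are illustrative, not the actual Hadoop sources:

```java
// Illustrative sketch only: a trimmed-down enum showing how an UNKNOWN
// sentinel keeps fromString() from returning null, so a caller's switch
// cannot throw an NPE. Constant names are hypothetical.
public class SectionNameDemo {
    enum SectionName {
        NS_INFO("NS_INFO"), INODE("INODE"), UNKNOWN("UNKNOWN");

        private final String name;

        SectionName(String name) {
            this.name = name;
        }

        static SectionName fromString(String name) {
            for (SectionName n : values()) {
                if (n.name.equals(name)) {
                    return n;
                }
            }
            return UNKNOWN; // previously null, the source of the NPE
        }
    }

    public static void main(String[] args) {
        // An unrecognized section now hits a normal case instead of an NPE.
        switch (SectionName.fromString("NO_SUCH_SECTION")) {
            case UNKNOWN:
                System.out.println("skipping unknown section");
                break;
            default:
                System.out.println("loading known section");
        }
    }
}
```

With this shape, callers can handle UNKNOWN explicitly (e.g. skip the section) instead of guarding every switch against null.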



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14172) Avoid NPE when SectionName#fromString() returns null

2019-02-02 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759262#comment-16759262
 ] 

Xiang Li commented on HDFS-14172:
-

The reported UT failures are unrelated; they pass on my local machine:
* hadoop.fs.viewfs.TestViewFileSystemHdfs
* hadoop.hdfs.web.TestWebHdfsTimeouts
* hadoop.hdfs.server.datanode.TestBPOfferService
* hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes

> Avoid NPE when SectionName#fromString() returns null
> 
>
> Key: HDFS-14172
> URL: https://issues.apache.org/jira/browse/HDFS-14172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HADOOP-14172.000.patch, HADOOP-14172.001.patch, 
> HADOOP-14172.002.patch, HADOOP-14172.003.patch
>
>
> In FSImageFormatProtobuf.SectionName#fromString(), as follows:
> {code:java}
> public static SectionName fromString(String name) {
>   for (SectionName n : values) {
> if (n.name.equals(name))
>   return n;
>   }
>   return null;
> }
> {code}
> When the code meets an unknown section from the fsimage, the function 
> returns null. Callers operate on the return value in a switch clause, as in 
> FSImageFormatProtobuf.Loader#loadInternal():
> {code:java}
> switch (SectionName.fromString(n))
> {code}
> An NPE will be thrown here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14104) Review getImageTxIdToRetain

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759253#comment-16759253
 ] 

Hadoop QA commented on HDFS-14104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957408/HDFS-14104.5.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c1e15311f36 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6f90d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26134/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26134/testReport/ |
| Max. process+thread count | 2721 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-9596) Remove "Shuffle" Method From DFSUtil

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759249#comment-16759249
 ] 

Hadoop QA commented on HDFS-9596:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 100 unchanged - 1 fixed = 100 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-9596 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957412/HDFS-9596.5.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af2f58b3e034 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6f90d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26135/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26135/testReport/ |
| Max. process+thread count | 5033 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Updated] (HDFS-14172) Avoid NPE when SectionName#fromString() returns null

2019-02-02 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HDFS-14172:

Description: 
In FSImageFormatProtobuf.SectionName#fromString(), as follows:
{code:java}
public static SectionName fromString(String name) {
  for (SectionName n : values) {
if (n.name.equals(name))
  return n;
  }
  return null;
}
{code}
When the code meets an unknown section from the fsimage, the function returns 
null. Callers operate on the return value in a switch clause, as in 
FSImageFormatProtobuf.Loader#loadInternal():
{code:java}
switch (SectionName.fromString(n))
{code}
An NPE will be thrown here.

  was:
In FSImageFormatProtobuf.SectionName#fromString(), as follows:
{code:java}
public static SectionName fromString(String name) {
  for (SectionName n : values) {
if (n.name.equals(name))
  return n;
  }
  return null;
}
{code}
When the code meets an unknown section from the fsimage, the function returns 
null. Callers operate on the return value in a switch clause, as in 
FSImageFormatProtobuf.Loader#loadInternal():
{code:java}
switch (SectionName.fromString(n))
{code}
An NPE will be thrown here.
For self-protection, shall we add a default section name, such as "UNKNOWN", 
to the SectionName enum to steer clear of the NPE?


> Avoid NPE when SectionName#fromString() returns null
> 
>
> Key: HDFS-14172
> URL: https://issues.apache.org/jira/browse/HDFS-14172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HADOOP-14172.000.patch, HADOOP-14172.001.patch, 
> HADOOP-14172.002.patch, HADOOP-14172.003.patch
>
>
> In FSImageFormatProtobuf.SectionName#fromString(), as follows:
> {code:java}
> public static SectionName fromString(String name) {
>   for (SectionName n : values) {
> if (n.name.equals(name))
>   return n;
>   }
>   return null;
> }
> {code}
> When the code meets an unknown section from the fsimage, the function 
> returns null. Callers operate on the return value in a switch clause, as in 
> FSImageFormatProtobuf.Loader#loadInternal():
> {code:java}
> switch (SectionName.fromString(n))
> {code}
> An NPE will be thrown here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13158) Fix Spelling Mistakes - DECOMISSIONED

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13158:
---
Attachment: HDFS-13158.5.patch

> Fix Spelling Mistakes - DECOMISSIONED
> -
>
> Key: HDFS-13158
> URL: https://issues.apache.org/jira/browse/HDFS-13158
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-13158.001.patch, HDFS-13158.2.patch, 
> HDFS-13158.3.patch, HDFS-13158.4.patch, HDFS-13158.5.patch
>
>
> https://github.com/apache/hadoop/search?l=Java&q=DECOMISSIONED
> There are references in the code to _DECOMISSIONED_ but the correct spelling 
> is _DECOMMISSIONED_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13158) Fix Spelling Mistakes - DECOMISSIONED

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13158:
---
Status: Open  (was: Patch Available)

> Fix Spelling Mistakes - DECOMISSIONED
> -
>
> Key: HDFS-13158
> URL: https://issues.apache.org/jira/browse/HDFS-13158
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-13158.001.patch, HDFS-13158.2.patch, 
> HDFS-13158.3.patch, HDFS-13158.4.patch, HDFS-13158.5.patch
>
>
> https://github.com/apache/hadoop/search?l=Java&q=DECOMISSIONED
> There are references in the code to _DECOMISSIONED_ but the correct spelling 
> is _DECOMMISSIONED_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13158) Fix Spelling Mistakes - DECOMISSIONED

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13158:
---
Status: Patch Available  (was: Open)

> Fix Spelling Mistakes - DECOMISSIONED
> -
>
> Key: HDFS-13158
> URL: https://issues.apache.org/jira/browse/HDFS-13158
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-13158.001.patch, HDFS-13158.2.patch, 
> HDFS-13158.3.patch, HDFS-13158.4.patch, HDFS-13158.5.patch
>
>
> https://github.com/apache/hadoop/search?l=Java&q=DECOMISSIONED
> There are references in the code to _DECOMISSIONED_ but the correct spelling 
> is _DECOMMISSIONED_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Attachment: HDFS-14104.5.patch

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch, HDFS-14104.2.patch, HDFS-14104.3.patch, 
> HDFS-14104.4.patch, HDFS-14104.5.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(FSImageTransactionalStorageInspector 
> inspector) {
>   
> List<FSImageFile> images = inspector.getFoundImages();
> TreeSet<Long> imageTxIds = Sets.newTreeSet();
> for (FSImageFile image : images) {
>   imageTxIds.add(image.getCheckpointTxId());
> }
> 
> List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
> if (imageTxIdsList.isEmpty()) {
>   return 0;
> }
> 
> Collections.reverse(imageTxIdsList);
> int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());   
>  
> long minTxId = imageTxIdsList.get(toRetain - 1);
> LOG.info("Going to retain " + toRetain + " images with txid >= " +
> minTxId);
> return minTxId;
>   }
> {code}
> # Fix checkstyle issues.
> # Use SLF4J parameterized logging.
> # A lot of work gets done before checking whether the list actually contains 
> any entries and returning 0. That check should be the first thing that 
> happens.
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step.
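The review points above can be sketched in one standalone method. Hadoop's FSImageFile and inspector types are replaced with a plain list of checkpoint txids here, so this is only an illustration of the suggested shape, not the committed patch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class RetainDemo {
    // Hypothetical, standalone sketch of the suggested rewrite: bail out
    // on an empty input first, then build the txid set in reverse natural
    // order so no Collections.reverse() pass is needed afterwards.
    static long getImageTxIdToRetain(List<Long> checkpointTxIds,
                                     int numCheckpointsToRetain) {
        if (checkpointTxIds.isEmpty()) {
            return 0; // the cheap emptiness check now happens before any work
        }
        // Reverse natural ordering: largest txids come first in the set.
        TreeSet<Long> imageTxIds = new TreeSet<>(Comparator.reverseOrder());
        imageTxIds.addAll(checkpointTxIds);
        int toRetain = Math.min(numCheckpointsToRetain, imageTxIds.size());
        return new ArrayList<>(imageTxIds).get(toRetain - 1);
    }

    public static void main(String[] args) {
        // Keeping the 2 newest of txids {10, 30, 20}: retain txid >= 20.
        System.out.println(getImageTxIdToRetain(Arrays.asList(10L, 30L, 20L), 2));
    }
}
```

An SLF4J call in the real method would then use placeholders, e.g. `LOG.info("Going to retain {} images with txid >= {}", toRetain, minTxId)`, instead of string concatenation.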



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9596) Remove "Shuffle" Method From DFSUtil

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-9596:
--
Attachment: HDFS-9596.5.patch

> Remove "Shuffle" Method From DFSUtil
> 
>
> Key: HDFS-9596
> URL: https://issues.apache.org/jira/browse/HDFS-9596
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-9596.2.patch, HDFS-9596.3.patch, HDFS-9596.4.patch, 
> HDFS-9596.5.patch, Shuffle.HDFS-9596.patch
>
>
> DFSUtil contains a "shuffle" routine that shuffles arrays.  With a little 
> wrapping, the Java Collections class can shuffle as well.  This code is 
> superfluous and undocumented, and the method returns an array without 
> specifying whether it is the same array or a new copy of the shuffled items.
> It is only used in one place, so it is better to simply remove this 
> implementation and substitute the Java Collections implementation.
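The "little wrapping" mentioned above can look like the following standalone illustration; the element type shuffled by DFSUtil itself is not shown in this thread, so an Integer[] stands in:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Random;

public class ShuffleDemo {
    public static void main(String[] args) {
        // Arrays.asList returns a fixed-size List view backed by the array,
        // so shuffling the view shuffles the underlying array in place.
        // (This works for object arrays; a primitive array would need boxing.)
        Integer[] data = {1, 2, 3, 4, 5};
        Collections.shuffle(Arrays.asList(data), new Random(42));
        System.out.println(Arrays.toString(data)); // some permutation of 1..5
    }
}
```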



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9596) Remove "Shuffle" Method From DFSUtil

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-9596:
--
Status: Patch Available  (was: Open)

> Remove "Shuffle" Method From DFSUtil
> 
>
> Key: HDFS-9596
> URL: https://issues.apache.org/jira/browse/HDFS-9596
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-9596.2.patch, HDFS-9596.3.patch, HDFS-9596.4.patch, 
> HDFS-9596.5.patch, Shuffle.HDFS-9596.patch
>
>
> DFSUtil contains a "shuffle" routine that shuffles arrays.  With a little 
> wrapping, the Java Collections class can shuffle as well.  This code is 
> superfluous and undocumented, and the method returns an array without 
> specifying whether it is the same array or a new copy of the shuffled items.
> It is only used in one place, so it is better to simply remove this 
> implementation and substitute the Java Collections implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9596) Remove "Shuffle" Method From DFSUtil

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-9596:
--
Status: Open  (was: Patch Available)

> Remove "Shuffle" Method From DFSUtil
> 
>
> Key: HDFS-9596
> URL: https://issues.apache.org/jira/browse/HDFS-9596
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-9596.2.patch, HDFS-9596.3.patch, HDFS-9596.4.patch, 
> HDFS-9596.5.patch, Shuffle.HDFS-9596.patch
>
>
> DFSUtil contains a "shuffle" routine that shuffles arrays.  With a little 
> wrapping, the Java Collections class can shuffle as well.  This code is 
> superfluous and undocumented, and the method returns an array without 
> specifying whether it is the same array or a new copy of the shuffled items.
> It is only used in one place, so it is better to simply remove this 
> implementation and substitute the Java Collections implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Status: Patch Available  (was: Open)

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch, HDFS-14104.2.patch, HDFS-14104.3.patch, 
> HDFS-14104.4.patch, HDFS-14104.5.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(FSImageTransactionalStorageInspector 
> inspector) {
>   
> List<FSImageFile> images = inspector.getFoundImages();
> TreeSet<Long> imageTxIds = Sets.newTreeSet();
> for (FSImageFile image : images) {
>   imageTxIds.add(image.getCheckpointTxId());
> }
> 
> List imageTxIdsList = Lists.newArrayList(imageTxIds);
> if (imageTxIdsList.isEmpty()) {
>   return 0;
> }
> 
> Collections.reverse(imageTxIdsList);
> int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());   
>  
> long minTxId = imageTxIdsList.get(toRetain - 1);
> LOG.info("Going to retain " + toRetain + " images with txid >= " +
> minTxId);
> return minTxId;
>   }
> {code}
> # Fix checkstyle issues
> # Use SLF4J parameterized logging
> # A lot of work gets done before checking whether the list actually 
> contains any entries and returning 0.  That check should be the first thing 
> that happens
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step.
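Points 3 and 4 combined could look roughly like this (a stand-alone sketch, not the actual patch; the method signature and the `long[]` parameter are simplified stand-ins for the real inspector-based API):

```java
import java.util.Comparator;
import java.util.TreeSet;

public class RetainExample {
    // Empty-check first, then collect txids into a descending-ordered
    // TreeSet so no Collections.reverse step is needed.
    static long getTxIdToRetain(long[] checkpointTxIds, int numCheckpointsToRetain) {
        if (checkpointTxIds.length == 0) {
            return 0;  // nothing found: bail out before doing any work
        }
        TreeSet<Long> txIds = new TreeSet<>(Comparator.reverseOrder());
        for (long txId : checkpointTxIds) {
            txIds.add(txId);
        }
        int toRetain = Math.min(numCheckpointsToRetain, txIds.size());
        // Walk the descending set and stop at the toRetain-th entry,
        // which is the smallest txid we still want to keep.
        long minTxId = 0;
        int i = 0;
        for (long txId : txIds) {
            minTxId = txId;
            if (++i == toRetain) {
                break;
            }
        }
        return minTxId;
    }

    public static void main(String[] args) {
        // Keep the 2 newest of {10, 20, 30, 40} -> retain images with txid >= 30.
        System.out.println(getTxIdToRetain(new long[]{10, 20, 30, 40}, 2)); // 30
    }
}
```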






[jira] [Updated] (HDFS-14104) Review getImageTxIdToRetain

2019-02-02 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14104:
---
Status: Open  (was: Patch Available)

> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch, HDFS-14104.2.patch, HDFS-14104.3.patch, 
> HDFS-14104.4.patch, HDFS-14104.5.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(FSImageTransactionalStorageInspector 
> inspector) {
>   
> List<FSImageFile> images = inspector.getFoundImages();
> TreeSet<Long> imageTxIds = Sets.newTreeSet();
> for (FSImageFile image : images) {
>   imageTxIds.add(image.getCheckpointTxId());
> }
> 
> List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
> if (imageTxIdsList.isEmpty()) {
>   return 0;
> }
> 
> Collections.reverse(imageTxIdsList);
> int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());   
>  
> long minTxId = imageTxIdsList.get(toRetain - 1);
> LOG.info("Going to retain " + toRetain + " images with txid >= " +
> minTxId);
> return minTxId;
>   }
> {code}
> # Fix checkstyle issues
> # Use SLF4J parameterized logging
> # A lot of work gets done before checking whether the list actually 
> contains any entries and returning 0.  That check should be the first thing 
> that happens
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step.






[jira] [Commented] (HDDS-1041) Support TDE(Transparent Data Encryption) for Ozone

2019-02-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759210#comment-16759210
 ] 

Anu Engineer commented on HDDS-1041:


[~linyiqun] You are absolutely right, this was not a planned feature for 0.4.0 
release. However, we were doing a proof-of-concept engagement with a user and 
they wanted to have TDE for ozone to be deployed in their cluster. Hence we 
went ahead with this feature. I will update the road map based on your comments.

> Support TDE(Transparent Data Encryption) for Ozone
> --
>
> Key: HDDS-1041
> URL: https://issues.apache.org/jira/browse/HDDS-1041
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: Ozone Encryption At-Rest v2019.2.1.pdf
>
>
> Currently Ozone saves data unencrypted on the datanodes.  This ticket is 
> opened to support TDE (Transparent Data Encryption) for Ozone, to meet the 
> requirements of use cases that need protection of sensitive data.
> The table below summarizes the comparison of HDFS TDE and Ozone TDE: 
>  
> |*HDFS*|*Ozone*|
> |Encryption zone created at directory level.
>  All files created within the encryption zone will be encrypted.|Encryption 
> enabled at Bucket level.
>  All objects created within the encrypted bucket will be encrypted.|
> |Encryption zone created with ZK(Zone Key)|Encrypted Bucket created with 
> BEK(Bucket Encryption Key)|
> |Per File Encryption  
>  * File encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with ZK as EDEK by KMS and persisted as extended 
> attributes.|Per Object Encryption
>  * Object encrypted with DEK(Data Encryption Key)
>  * DEK is encrypted with BEK as EDEK by KMS and persisted as object metadata.|
>  
>  
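The per-object scheme in the right-hand column is standard envelope encryption. A minimal local sketch of the wrap/unwrap round trip (illustrative only: in Ozone the wrapping of the DEK is performed by Hadoop KMS, not locally, and all class and variable names here are assumptions):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class EnvelopeExample {
    static boolean wrapRoundTrip() throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey bek = gen.generateKey();  // bucket encryption key (normally held by KMS)
        SecretKey dek = gen.generateKey();  // per-object data encryption key

        // Wrap the DEK with the BEK; the wrapped form (the EDEK) is what
        // would be persisted as object metadata.
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, bek);
        byte[] edek = wrap.wrap(dek);

        // On read, unwrap the EDEK with the BEK to recover the DEK.
        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, bek);
        SecretKey recovered = (SecretKey) unwrap.unwrap(edek, "AES", Cipher.SECRET_KEY);
        return Arrays.equals(dek.getEncoded(), recovered.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(wrapRoundTrip()); // prints true
    }
}
```

The point of the design is that the plaintext DEK never needs to be stored: only the EDEK is persisted, and only a client authorized against the BEK can recover it.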






[jira] [Commented] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759200#comment-16759200
 ] 

Hadoop QA commented on HDFS-14252:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
21s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14252 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957395/HDFS-14252-HDFS-13891-02.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b40b4cd025e5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d37590b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26133/testReport/ |
| Max. process+thread count | 970 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26133/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF : Exceptions are exposing the actual sub cluster path
> 

[jira] [Commented] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759156#comment-16759156
 ] 

Ayush Saxena commented on HDFS-14252:
-

Thanks [~elgoiri] for the review.

I have handled the comments as part of v2.

Please review.

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14252-HDFS-13891-01.patch, 
> HDFS-14252-HDFS-13891-02.patch
>
>
> In the case of a file-not-found exception, if only one destination is 
> available (either only one was mounted, or multiple were mounted but only 
> one is available during the operation, e.g. a disabled NS), the exception 
> is not processed and is thrown directly.  This exposes the actual sub 
> cluster destination path instead of the path w.r.t. the mount point.
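In outline, the fix being described would rewrite the subcluster destination path in the exception message back to the mount-table path before re-throwing. A minimal stand-alone illustration (all names here are hypothetical, not code from the attached patches):

```java
public class MountPathExample {
    // Replace the real subcluster destination with the mount path in an
    // exception message, so clients never see the underlying namespace
    // layout.
    static String maskDestination(String message, String dest, String mountPath) {
        return message == null ? null : message.replace(dest, mountPath);
    }

    public static void main(String[] args) {
        String raw = "File does not exist: /ns0/real/dir/file";
        System.out.println(maskDestination(raw, "/ns0/real/dir", "/mount/dir"));
        // File does not exist: /mount/dir/file
    }
}
```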






[jira] [Updated] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14252:

Attachment: HDFS-14252-HDFS-13891-02.patch

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14252-HDFS-13891-01.patch, 
> HDFS-14252-HDFS-13891-02.patch
>
>
> In the case of a file-not-found exception, if only one destination is 
> available (either only one was mounted, or multiple were mounted but only 
> one is available during the operation, e.g. a disabled NS), the exception 
> is not processed and is thrown directly.  This exposes the actual sub 
> cluster destination path instead of the path w.r.t. the mount point.






[jira] [Updated] (HDDS-1021) Allow ozone datanode to participate in a 1 node as well as 3 node Ratis Pipeline

2019-02-02 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1021:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution [~shashikant]. I have committed this to trunk.

> Allow ozone datanode to participate in a 1 node as well as 3 node Ratis 
> Pipeline
> 
>
> Key: HDDS-1021
> URL: https://issues.apache.org/jira/browse/HDDS-1021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1021.000.patch, HDDS-1021.001.patch
>
>
> Currently, if a datanode is part of a 3 node Ratis pipeline, it cannot 
> participate in a single node Ratis pipeline. This Jira aims to enable 
> datanodes to become part of a 3 node Ratis pipeline as well as a single 
> node Ratis pipeline. This is the first step in introducing multiRaft in 
> Ozone.






[jira] [Created] (HDDS-1044) Client doesn't propagate correct error code to client on out of disk space

2019-02-02 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1044:
---

 Summary: Client doesn't propagate correct error code to client on 
out of disk space
 Key: HDDS-1044
 URL: https://issues.apache.org/jira/browse/HDDS-1044
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, Ozone Datanode
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
 Fix For: 0.4.0


Ozone Datanode doesn't propagate the correct error to the client when the 
datanode runs out of disk space. Instead of throwing an out-of-disk-space 
exception, a Container-not-found exception is thrown to clients.

{code}
2019-02-02 18:56:22 INFO  KeyValueHandler:149 - Operation: CreateContainer : 
Trace ID: 
72e5c6cc-e624-4acb-bba7-574e3a98ac90WriteChunk1868fc02742c62c3543022377edc0c5ca_stream_4fdc8750-f0d5-4e17-8e37-6d78552053b0_chunk_1
 : Message: Container creation failed, due to disk out of space : Result: 
DISK_OUT_OF_SPACE
2019-02-02 18:56:22 INFO  HddsDispatcher:149 - Operation: WriteChunk : Trace 
ID: 
72e5c6cc-e624-4acb-bba7-574e3a98ac90WriteChunk1868fc02742c62c3543022377edc0c5ca_stream_4fdc8750-f0d5-4e17-8e37-6d78552053b0_chunk_1
 : Message: ContainerID 65 does not exist : Result: CONTAINER_NOT_FOUND
{code}






[jira] [Commented] (HDDS-1021) Allow ozone datanode to participate in a 1 node as well as 3 node Ratis Pipeline

2019-02-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759148#comment-16759148
 ] 

Hudson commented on HDDS-1021:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15871 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15871/])
HDDS-1021. Allow ozone datanode to participate in a 1 node as well as 3 
(msingh: rev b6f90d3957bf229a670842689e7c8b70a8facb87)
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestHybridPipelineOnDatanode.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java


> Allow ozone datanode to participate in a 1 node as well as 3 node Ratis 
> Pipeline
> 
>
> Key: HDDS-1021
> URL: https://issues.apache.org/jira/browse/HDDS-1021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1021.000.patch, HDDS-1021.001.patch
>
>
> Currently, if a datanode is part of a 3 node Ratis pipeline, it cannot 
> participate in a single node Ratis pipeline. This Jira aims to enable 
> datanodes to become part of a 3 node Ratis pipeline as well as a single 
> node Ratis pipeline. This is the first step in introducing multiRaft in 
> Ozone.






[jira] [Commented] (HDDS-1021) Allow ozone datanode to participate in a 1 node as well as 3 node Ratis Pipeline

2019-02-02 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759142#comment-16759142
 ] 

Mukul Kumar Singh commented on HDDS-1021:
-

Thanks for updating the patch [~shashikant].
+1, the v1 patch looks good to me. I will commit this shortly.

I also tested this patch on a docker based cluster: I created Ratis factor 1 
and factor 3 pipelines on a 3 node cluster and was able to run both on the 
same cluster. Pasting the output below.

{code}
hadoop@a1f85a4449a9:~$ /opt/hadoop/bin/ozone scmcli listPipelines | grep RATIS
Pipeline[ Id: b2204b94-4c30-44ee-898e-7af1caa1e520, Nodes: 
1325875a-0c68-4ba7-aee5-725bacf8c4e5{ip: 172.18.0.5, host: 
ozone_datanode_2.ozone_default}, Type:RATIS, Factor:ONE, State:OPEN]
Pipeline[ Id: d15944d3-6cae-49ae-9764-f2c0115af694, Nodes: 
e379dcb0-c9f9-4e05-ac2f-e38e19829c83{ip: 172.18.0.6, host: 
ozone_datanode_3.ozone_default}, Type:RATIS, Factor:ONE, State:OPEN]
Pipeline[ Id: 0084be6b-b4d6-455f-9d76-06a478cf5b45, Nodes: 
b994c39a-b718-40fd-bba1-d516186e39e4{ip: 172.18.0.2, host: 
ozone_datanode_1.ozone_default}, Type:RATIS, Factor:ONE, State:OPEN]
Pipeline[ Id: c214f463-974f-41a4-8c62-ca2d1e68a0c4, Nodes: 
1325875a-0c68-4ba7-aee5-725bacf8c4e5{ip: 172.18.0.5, host: 
ozone_datanode_2.ozone_default}e379dcb0-c9f9-4e05-ac2f-e38e19829c83{ip: 
172.18.0.6, host: 
ozone_datanode_3.ozone_default}b994c39a-b718-40fd-bba1-d516186e39e4{ip: 
172.18.0.2, host: ozone_datanode_1.ozone_default}, Type:RATIS, Factor:THREE, 
State:OPEN]
{code}

> Allow ozone datanode to participate in a 1 node as well as 3 node Ratis 
> Pipeline
> 
>
> Key: HDDS-1021
> URL: https://issues.apache.org/jira/browse/HDDS-1021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1021.000.patch, HDDS-1021.001.patch
>
>
> Currently, if a datanode is part of a 3 node Ratis pipeline, it cannot 
> participate in a single node Ratis pipeline. This Jira aims to enable 
> datanodes to become part of a 3 node Ratis pipeline as well as a single 
> node Ratis pipeline. This is the first step in introducing multiRaft in 
> Ozone.






[jira] [Updated] (HDDS-1021) Allow ozone datanode to participate in a 1 node as well as 3 node Ratis Pipeline

2019-02-02 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1021:

Summary: Allow ozone datanode to participate in a 1 node as well as 3 node 
Ratis Pipeline  (was: Add functionality to make a datanode participate in a 
single node Ratis as well as 3 node Ratis pipeline)

> Allow ozone datanode to participate in a 1 node as well as 3 node Ratis 
> Pipeline
> 
>
> Key: HDDS-1021
> URL: https://issues.apache.org/jira/browse/HDDS-1021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1021.000.patch, HDDS-1021.001.patch
>
>
> Currently, if a datanode is part of a 3 node Ratis pipeline, it cannot 
> participate in a single node Ratis pipeline. This Jira aims to enable 
> datanodes to become part of a 3 node Ratis pipeline as well as a single 
> node Ratis pipeline. This is the first step in introducing multiRaft in 
> Ozone.






[jira] [Commented] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759139#comment-16759139
 ] 

Íñigo Goiri commented on HDFS-14252:


Thanks [~ayushtkn] for the patch!
The code looks good.
A couple comments in the unit test:
* Use {{Collections.singletonMap("ns0", "/tmp/testdir");}}
* Do we need to create the directory in the NN to trigger the exception? If we 
do, I would create the folder before adding the mount point.
* Can we extract {{routerContext.getFileSystem()}} to {{routerFS}}?
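The first review suggestion refers to the standard JDK helper; for reference, a trivial stand-alone example (not code from the patch):

```java
import java.util.Collections;
import java.util.Map;

public class SingletonMapExample {
    public static void main(String[] args) {
        // Collections.singletonMap builds an immutable one-entry map,
        // avoiding a hand-constructed HashMap in the test setup.
        Map<String, String> dest = Collections.singletonMap("ns0", "/tmp/testdir");
        System.out.println(dest.get("ns0")); // /tmp/testdir
    }
}
```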

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14252-HDFS-13891-01.patch
>
>
> In the case of a file-not-found exception, if only one destination is 
> available (either only one was mounted, or multiple were mounted but only 
> one is available during the operation, e.g. a disabled NS), the exception 
> is not processed and is thrown directly.  This exposes the actual sub 
> cluster destination path instead of the path w.r.t. the mount point.






[jira] [Commented] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759038#comment-16759038
 ] 

Hadoop QA commented on HDFS-14252:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
46s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m  
6s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14252 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957386/HDFS-14252-HDFS-13891-01.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f38a051f0559 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d37590b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26132/testReport/ |
| Max. process+thread count | 977 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26132/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF : Exceptions are exposing the actual sub cluster path
> 

[jira] [Commented] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759014#comment-16759014
 ] 

Hadoop QA commented on HDFS-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
31 unchanged - 0 fixed = 34 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  new org.apache.hadoop.hdfs.DFSStripedOutputStream(DFSClient, String, 
EnumSet, Progressable, LocatedBlock, HdfsFileStatus, DataChecksum, String[]) 
may expose internal representation by storing an externally mutable object into 
DFSStripedOutputStream.favoredNodes  At DFSStripedOutputStream.java:[line 353] |
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | 

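The FindBugs item above is the classic "may expose internal representation by storing an externally mutable object" warning: a constructor keeps a caller-supplied `favoredNodes` array without copying it. A minimal, self-contained sketch of the usual fix, a defensive copy, follows; the class and field names below are toy stand-ins, not the actual `DFSStripedOutputStream` code.

```java
import java.util.Arrays;

// Toy reproduction of the flagged pattern: storing the caller's array directly
// would let the caller mutate the object's internal state after construction.
class StripedStream {
    private final String[] favoredNodes;

    StripedStream(String[] favoredNodes) {
        // Defensive copy addresses the warning; the null check keeps the
        // favored-nodes hint optional.
        this.favoredNodes = favoredNodes == null
                ? null
                : Arrays.copyOf(favoredNodes, favoredNodes.length);
    }

    String[] getFavoredNodes() {
        // Return a copy on the way out too, so callers cannot mutate it.
        return favoredNodes == null ? null : favoredNodes.clone();
    }
}

public class DefensiveCopyDemo {
    public static void main(String[] args) {
        String[] nodes = {"dn1:9866", "dn2:9866"};
        StripedStream s = new StripedStream(nodes);
        nodes[0] = "evil:9866";                      // caller mutates its own array
        System.out.println(s.getFavoredNodes()[0]);  // internal copy is unaffected
    }
}
```

The same copy-in/copy-out discipline also clears the warning for the getter side without changing observable behavior for well-behaved callers.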
[jira] [Updated] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14252:

Status: Patch Available  (was: Open)

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14252-HDFS-13891-01.patch
>
>
> In case of a FileNotFoundException, when only one destination is available 
> (either the mount has a single destination, or multiple are mounted but only 
> one is reachable during the operation, e.g. a disabled NS), the exception is 
> not processed and is thrown directly. This exposes the actual sub-cluster 
> destination path instead of the path relative to the mount point.
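A hedged sketch of the kind of post-processing the description asks for: rewriting the exception message so the resolved sub-cluster path is replaced by the mount-table path before the exception reaches the client. The class name and example paths are invented for illustration; this is not the Router's actual code.

```java
// Toy illustration (not Router code): map a message that leaks the resolved
// sub-cluster destination back to the client-visible mount path.
public class PathRemapDemo {
    /**
     * Replace the resolved destination prefix with the mount-point prefix
     * wherever it appears in the exception message.
     */
    static String remap(String msg, String destPrefix, String mountPrefix) {
        return msg.replace(destPrefix, mountPrefix);
    }

    public static void main(String[] args) {
        // Hypothetical mount: /user -> ns0:/ns0/data/user
        String raw = "File does not exist: /ns0/data/user/file1";
        System.out.println(remap(raw, "/ns0/data/user", "/user"));
        // -> File does not exist: /user/file1
    }
}
```

A real fix would consult the mount table for the matching entry rather than hard-coding prefixes, and would rebuild the exception with the rewritten message.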



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14252:

Attachment: HDFS-14252-HDFS-13891-01.patch

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14252-HDFS-13891-01.patch
>
>
> In case of a FileNotFoundException, when only one destination is available 
> (either the mount has a single destination, or multiple are mounted but only 
> one is reachable during the operation, e.g. a disabled NS), the exception is 
> not processed and is thrown directly. This exposes the actual sub-cluster 
> destination path instead of the path relative to the mount point.






[jira] [Created] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-02-02 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14252:
---

 Summary: RBF : Exceptions are exposing the actual sub cluster path
 Key: HDFS-14252
 URL: https://issues.apache.org/jira/browse/HDFS-14252
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ayush Saxena
Assignee: Ayush Saxena


In case of a FileNotFoundException, when only one destination is available 
(either the mount has a single destination, or multiple are mounted but only 
one is reachable during the operation, e.g. a disabled NS), the exception is 
not processed and is thrown directly. This exposes the actual sub-cluster 
destination path instead of the path relative to the mount point.






[jira] [Updated] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-7663:
---
Attachment: HDFS-7663-03.patch

> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-7663-02.patch, HDFS-7663-03.patch, 
> HDFS-7663.00.txt, HDFS-7663.01.patch
>
>
> Append should be easy if we have variable length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
> 2. Append to an under-construction striped file, with NEW_BLOCK flag enabled 
> (HDFS-9173)
> 3. Append to a striped file, by appending to the last block group (follow-on)
> This jira attempts to implement #1, and also tracks #2 and #3.
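As a rough illustration of the NEW_BLOCK-style append in #1 — appended data starts a fresh block instead of reopening the partially filled last block, so existing striped block groups are never rewritten — here is a toy in-memory model. It uses 4-character "blocks" and plain strings; it is not HDFS code, just the shape of the idea.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of variable-length-block append (HDFS-3689 style): with the
// NEW_BLOCK flag, appended data always begins a new block even if the last
// block still has room, leaving previously written blocks untouched.
public class NewBlockAppendDemo {
    static final int BLOCK_SIZE = 4;  // tiny block size to keep the demo readable
    final List<StringBuilder> blocks = new ArrayList<>();

    void append(String data, boolean newBlock) {
        if (newBlock || blocks.isEmpty()
                || blocks.get(blocks.size() - 1).length() >= BLOCK_SIZE) {
            blocks.add(new StringBuilder());  // start a fresh block
        }
        StringBuilder last = blocks.get(blocks.size() - 1);
        for (char c : data.toCharArray()) {
            if (last.length() >= BLOCK_SIZE) {  // block full: roll to a new one
                last = new StringBuilder();
                blocks.add(last);
            }
            last.append(c);
        }
    }

    public static void main(String[] args) {
        NewBlockAppendDemo f = new NewBlockAppendDemo();
        f.append("abc", false);  // block 0 = "abc" (room for one more char)
        f.append("de", true);    // NEW_BLOCK: "de" starts block 1 anyway
        System.out.println(f.blocks);
    }
}
```

Option #3 (appending into the last block group) is exactly the case this model sidesteps: it would require rewriting parity for the existing group, which is why the jira defers it to a follow-on.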






[jira] [Commented] (HDFS-7663) Erasure Coding: Append on striped file

2019-02-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758940#comment-16758940
 ] 

Hadoop QA commented on HDFS-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project: The patch generated 28 new 
+ 31 unchanged - 0 fixed = 59 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Impossible cast from RuntimeException to InterruptedException in new 
org.apache.hadoop.hdfs.DFSStripedOutputStream(DFSClient, String, EnumSet, 
Progressable, LocatedBlock, HdfsFileStatus, DataChecksum, String[])  At 
DFSStripedOutputStream.java:[line 366] |
|  |  new org.apache.hadoop.hdfs.DFSStripedOutputStream(DFSClient, String, 
EnumSet, Progressable, LocatedBlock, HdfsFileStatus, DataChecksum, String[]) 
may expose internal representation by