[jira] [Updated] (HDFS-13292) Crypto command should give proper exception when trying to set key on existing EZ directory

2018-04-06 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-13292:
--
Summary: Crypto command should give proper exception when trying to set key 
on existing EZ directory  (was: Crypto command should give proper exception 
when key is already exist for zone directory)

> Crypto command should give proper exception when trying to set key on 
> existing EZ directory
> ---
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13292.001.patch, HDFS-13292.002.patch, Screenshot 
> from 2018-04-06 11-48-56.png
>
>
> {{Scenario:}}
>  # Create a directory.
>  # Create an encryption zone (EZ) for the above directory with key1.
>  # Try again to create a zone for the same directory with a different key, i.e. key2.
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, so 
> a new zone cannot be created on it}}
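
For illustration only, a minimal self-contained sketch of the check ordering being requested: report "already an encryption zone" before falling back to the generic non-empty-directory error. The helper below is hypothetical; it is not the attached patch or actual NameNode code.
{code:java}
// Hypothetical illustration of the desired check ordering; not NameNode code.
public final class CreateZoneCheck {

  /** Returns an error message, or null if the zone can be created. */
  static String validate(boolean dirIsAlreadyEZ, boolean dirIsNonEmpty, String path) {
    if (dirIsAlreadyEZ) {
      // Report the real cause first: the directory already has an encryption zone.
      return "Directory " + path + " is already an encryption zone; "
          + "cannot create a new encryption zone on it.";
    }
    if (dirIsNonEmpty) {
      return "Attempt to create an encryption zone for a non-empty directory.";
    }
    return null;
  }

  public static void main(String[] args) {
    // /kms already uses key1, so asking for key2 should name the real cause.
    System.out.println(validate(true, true, "/kms"));
  }
}
{code}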






[jira] [Commented] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-04-06 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429250#comment-16429250
 ] 

Surendra Singh Lilhore commented on HDFS-13292:
---

Thanks [~RANith].

+1

> Crypto command should give proper exception when key is already exist for 
> zone directory
> 
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13292.001.patch, HDFS-13292.002.patch, Screenshot 
> from 2018-04-06 11-48-56.png
>
>
> {{Scenario:}}
>  # Create a directory.
>  # Create an EZ for the above directory with key1.
>  # Try again to create a zone for the same directory with a different key, i.e. key2.
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, so 
> a new zone cannot be created on it}}






[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429233#comment-16429233
 ] 

Xiao Chen commented on HDFS-13328:
--

Thanks for the good work here [~surendrasingh] and [~rakeshr]!

I'm +1 pending a few minors:
- FSTreeTraverser: I think {{shouldSubmitCurrentBatch}} may be a better name 
than {{canSubmitCurrentBatch}}, since the decision is really based on logic; 
{{canSubmitCurrentBatch}} suggests it may be due to resources, etc.
- ReencryptionHandler: {{ZoneTraverseInfo}} could be private
- ReencryptionHandler: readLock / readUnlock could call parent class' method 
instead of explicitly doing it on fsn/fsd


Also, an existing issue I found (not blocking this one) that may be worth fixing:
- In {{FSTreeTraverser#traverseDirInt}}, the throttling logic is currently only 
done after submitting a batch. We'd need the throttling to happen more often, to 
make sure we don't hold the read lock while iterating through the inode's 
children. Otherwise, if for some reason no batch is submitted (for example, no 
new key versions for an EZ), the entire traversal never releases the lock. A 
rough sketch of the intended placement is below.

This isn't a critical issue because re-encryption is currently a maintenance 
operation, but it would be bad if the NN ran into this while serving normal 
requests.
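
For illustration, a rough self-contained sketch of the throttling placement suggested above; the names are hypothetical and this is not the patch itself.
{code:java}
// Hypothetical sketch of per-iteration throttling during a locked traversal;
// not the actual FSTreeTraverser code.
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ThrottledTraversalSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private static final int CHECK_INTERVAL = 1000; // children between throttle checks

  void traverseChildren(List<String> children) throws InterruptedException {
    int sinceCheck = 0;
    lock.readLock().lock();
    try {
      for (String child : children) {
        process(child);
        if (++sinceCheck >= CHECK_INTERVAL) {
          sinceCheck = 0;
          // Drop the read lock and throttle even if no batch was submitted,
          // so a long child listing cannot pin the lock for the whole traversal.
          lock.readLock().unlock();
          try {
            throttle();
          } finally {
            lock.readLock().lock(); // re-acquire before continuing
          }
        }
      }
    } finally {
      lock.readLock().unlock();
    }
  }

  void process(String child) { /* examine one child */ }

  void throttle() throws InterruptedException { Thread.sleep(1); }
}
{code}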

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class, so it can be reused by other features such as 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.
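
For illustration only, a rough sketch of what such a reusable traverser abstraction could look like; the names and signatures below are hypothetical and do not reflect the actual FSTreeTraverser API in the patch.
{code:java}
// Hypothetical shape of a reusable depth-first traverser; the names and
// signatures are illustrative only, not the actual FSTreeTraverser API.
import java.util.List;

abstract class TreeTraverserSketch<T> {
  /** Return the children of the given node. */
  protected abstract List<T> getChildren(T node) throws Exception;

  /** Decide whether the accumulated batch should be handed off now. */
  protected abstract boolean shouldSubmitCurrentBatch();

  /** Hand the accumulated work to the caller (e.g. re-encryption or SPS). */
  protected abstract void submitCurrentBatch() throws Exception;

  /** Generic depth-first walk shared by different features. */
  public void traverse(T root) throws Exception {
    for (T child : getChildren(root)) {
      traverse(child);
      if (shouldSubmitCurrentBatch()) {
        submitCurrentBatch();
      }
    }
  }
}
{code}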






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429188#comment-16429188
 ] 

genericqa commented on HDFS-13384:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917920/HDFS-13384.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dda93ede9cf9 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00905ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23816/testReport/ |
| Max. process+thread count | 1039 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23816/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: 

[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429182#comment-16429182
 ] 

Íñigo Goiri commented on HDFS-13408:


The Yetus run for  [^HDFS-13408.000.patch] was clean.
This change is just an interface change, so I think it should be fine to add.
[~chris.douglas], do you mind taking a look?
This should enable other fixes.

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch
>
>
> Files generated by MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases that use the same default 
> directory (Windows does not allow other processes to delete them). By migrating 
> to randomized base directories, conflicts between the test paths of different 
> test cases are avoided, even if they run at the same time.
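
For illustration, a minimal sketch of the general idea (not the attached patch), assuming the existing {{MiniDFSCluster.HDFS_MINIDFS_BASEDIR}} property is used to point each run at a unique directory:
{code:java}
// Minimal sketch of the idea (not the attached patch): give each run a unique
// MiniDFSCluster base directory so leftover files on Windows cannot collide
// with later test runs.
import java.io.File;
import java.util.UUID;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RandomizedBaseDirExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    File baseDir = new File(System.getProperty("test.build.data", "target/test/data"),
        UUID.randomUUID().toString());
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());

    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      System.out.println("Cluster up at " + cluster.getFileSystem().getUri());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}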






[jira] [Created] (HDFS-13409) FilterFileSystem#setConf does not update the configuration

2018-04-06 Thread Ahmed Eldawy (JIRA)
Ahmed Eldawy created HDFS-13409:
---

 Summary: FilterFileSystem#setConf does not update the configuration
 Key: HDFS-13409
 URL: https://issues.apache.org/jira/browse/HDFS-13409
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ahmed Eldawy


FilterFileSystem keeps its own copy of Configuration, separate from the one held 
by the underlying filtered file system. When you call FilterFileSystem#setConf, it 
updates its own private configuration, which is not accessible. When you call 
FilterFileSystem#getConf, it returns the Configuration of the underlying 
FileSystem. Please look at the code snippet below.

{code:java}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
FilterFileSystem ffs = new FilterFileSystem(fs);
Configuration conf2 = new Configuration();
conf2.set("foo", "bar");
ffs.setConf(conf2);
assertEquals("bar", ffs.getConf().get("foo")); // This assertion should pass but it fails
{code}

 

To solve this problem, we need to override the following method in 
FilterFileSystem so that the configuration is delegated to the wrapped file system:

{code:java}
@Override
public void setConf(Configuration conf) {
  fs.setConf(conf);
}
{code}
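
For illustration, a small hedged JUnit sketch of the behavior the proposed override should restore; the class and test names here are illustrative only.
{code:java}
// Hedged test sketch of the behavior expected once setConf delegates to the
// wrapped FileSystem; the class and test names here are illustrative only.
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.junit.Test;

public class TestFilterFileSystemSetConf {
  @Test
  public void setConfIsVisibleThroughGetConf() throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FilterFileSystem ffs = new FilterFileSystem(fs);

    Configuration conf2 = new Configuration();
    conf2.set("foo", "bar");
    ffs.setConf(conf2);

    // With the proposed override, getConf() reflects the configuration passed
    // to setConf(), because both now refer to the wrapped file system's conf.
    assertEquals("bar", ffs.getConf().get("foo"));
  }
}
{code}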






[jira] [Commented] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429166#comment-16429166
 ] 

genericqa commented on HDFS-13395:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 39m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist hadoop-ozone/acceptance-test hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
39s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} objectstore-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 37m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 37m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
37s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} objectstore-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
38s{color} | {color:green} There were no new shelldocs issues. 

[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429157#comment-16429157
 ] 

genericqa commented on HDFS-13408:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13408 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917910/HDFS-13408.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 133d47d782b5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 024d7c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23814/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23814/testReport/ |
| Max. process+thread count | 3596 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23814/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13384:
---
Attachment: HDFS-13384.003.patch

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch, HDFS-13384.003.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.
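
As background only, a minimal self-contained sketch of bounding a subcluster RPC with a timeout via a {{Future}}; the names are hypothetical and unrelated to the actual Router implementation.
{code:java}
// Hypothetical, self-contained sketch of bounding an RPC to a subcluster with
// a timeout; the names are illustrative and unrelated to the Router code.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedSubclusterCall {
  private static final ExecutorService POOL = Executors.newCachedThreadPool();

  static <T> T invokeWithTimeout(Callable<T> rpc, long timeoutMs) throws Exception {
    Future<T> future = POOL.submit(rpc);
    try {
      return future.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Give up on a slow subcluster instead of blocking the proxy thread.
      future.cancel(true);
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(invokeWithTimeout(() -> "ok", 1000));
    POOL.shutdown();
  }
}
{code}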






[jira] [Commented] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429153#comment-16429153
 ] 

Íñigo Goiri commented on HDFS-13404:


HDFS-13384 unit tests failed on April 6th because of this.

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}






[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429119#comment-16429119
 ] 

genericqa commented on HDFS-13324:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} tools in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
8s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
8s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
56s{color} | 

[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429115#comment-16429115
 ] 

genericqa commented on HDFS-13384:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  0s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917911/HDFS-13384.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 42e79822cd6c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 024d7c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23815/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23815/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23815/testReport/ |
| Max. process+thread count | 949 (vs. ulimit of 

[jira] [Updated] (HDFS-11985) Intermittent unit test failures on 2.7.4 branch.

2018-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11985:
---
Fix Version/s: (was: 2.7.6)

> Intermittent unit test failures on 2.7.4 branch.
> 
>
> Key: HDFS-11985
> URL: https://issues.apache.org/jira/browse/HDFS-11985
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Some unit tests are failing intermittently on Jenkins nightly builds for 
> branch-2.7.
> Here is the list of tests which failed more than once within the last week:
> * 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure
> * 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten
> * 
> org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount
>   
> * org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot






[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2018-04-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11968:
-
Fix Version/s: 3.0.3

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch, HDFS-11968.007.patch, HDFS-11968.008.patch, 
> HDFS-11968.009.patch, HDFS-11968.010.patch, HDFS-11968.011.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
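
For illustration, a hedged sketch of the resolve-then-cast idea (not the committed patch), assuming {{FileSystem#resolvePath}} follows the ViewFS mount table:
{code:java}
// Hedged sketch (not the committed patch): resolve the user path through any
// ViewFS mount points first, then locate the backing DistributedFileSystem.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

final class ResolvedDfsLookup {
  static DistributedFileSystem getDFS(Configuration conf, Path userPath)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // For ViewFileSystem this is expected to follow the mount table and return
    // a path on the target (hdfs://...) file system.
    Path resolved = fs.resolvePath(userPath);
    FileSystem targetFs = resolved.getFileSystem(conf);
    if (!(targetFs instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException("FileSystem " + targetFs.getUri()
          + " is not an HDFS file system");
    }
    return (DistributedFileSystem) targetFs;
  }
}
{code}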






[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2018-04-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429081#comment-16429081
 ] 

Xiao Chen commented on HDFS-11968:
--

Pushed this to branch-3.0. Thanks for the work here!

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch, HDFS-11968.007.patch, HDFS-11968.008.patch, 
> HDFS-11968.009.patch, HDFS-11968.010.patch, HDFS-11968.011.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}






[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-06 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429077#comment-16429077
 ] 

Wei Yan commented on HDFS-13326:


[~gangli2384], some comments on [^HDFS-13326.000.patch].

(1) For the add cmd, if the mount entry already exists, we can directly print an 
error message and leave the "update" to the update cmd (following the existing 
RPC implementation).

(2) For the update cmd, the RPC side currently supports both add and update, so 
we can just follow it. To keep it simple, just parse the input parameters (as we 
do in the add cmd), build a MountTable object, and send it to the RPC end, as in 
the sketch below.

Some minors:
(1) Change "namespace" to "nameservice".
(2) Set readonly to false if there is no input parameter.
(3) Remove the line change in TestRouterAdmin.java.
(4) Also update 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md.
(5) Fix the checkstyle errors.
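
For illustration, a hypothetical sketch of that parse-build-send flow; the types below only stand in for the real router admin records and RPC:
{code:java}
// Hypothetical sketch of the suggested update-cmd flow; these types only stand
// in for the real router admin records and RPC, they are not the actual API.
import java.util.LinkedHashMap;
import java.util.Map;

final class UpdateCmdSketch {
  static final class MountEntry {
    final String sourcePath;
    final Map<String, String> nsToDest;
    boolean readOnly;
    MountEntry(String src, Map<String, String> dests) {
      this.sourcePath = src;
      this.nsToDest = dests;
    }
  }

  /** e.g. args = {"/mount", "ns0", "/backing/dir", "-readonly"} */
  static MountEntry parseUpdateArgs(String[] args) {
    Map<String, String> dests = new LinkedHashMap<>();
    dests.put(args[1], args[2]);                       // nameservice -> destination
    MountEntry entry = new MountEntry(args[0], dests);
    // readonly defaults to false when the flag is absent (minor (2) above).
    entry.readOnly = args.length > 3 && "-readonly".equals(args[3]);
    return entry;                                      // then sent over the admin RPC
  }

  public static void main(String[] args) {
    MountEntry e = parseUpdateArgs(
        new String[] {"/mount", "ns0", "/backing/dir", "-readonly"});
    System.out.println(e.sourcePath + " -> " + e.nsToDest + " readonly=" + e.readOnly);
  }
}
{code}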

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch
>
>
> 1. In the DFSRouterAdmin cmd, the update logic is currently implemented inside 
> the add operation, which has some limitations (e.g. it cannot update "readonly" 
> or remove a destination). Given that the RPC already separates the add and 
> update operations, it would be better to do the same at the cmd level.
> 2. Currently in the MountTable tab, the "readonly" field always shows empty, 
> no matter whether the mount entry is read-only or not. From the code 
> perspective, it tries to show:
> {code:java}
> {code}
> federationhealth.html loads hadoop.css; however, hadoop.css doesn't have 
> classes with the prefix "dfshealth-mount-read-only". This could be fixed in 
> HDFS-13204.






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429061#comment-16429061
 ] 

Íñigo Goiri commented on HDFS-13384:


[^HDFS-13384.002.patch] uses independent subclusters that do not share DNs.
Eventually we should use this in more places.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13384:
---
Attachment: HDFS-13384.002.patch

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Updated] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13408:
--
Status: Patch Available  (was: Open)

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch
>
>
> Files generated by MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases that use the same default 
> directory (Windows does not allow other processes to delete them). By migrating 
> to randomized base directories, conflicts between the test paths of different 
> test cases are avoided, even if they run at the same time.






[jira] [Updated] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13408:
--
Attachment: HDFS-13408.000.patch

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch
>
>
> Files generated by MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases that use the same default 
> directory (Windows does not allow other processes to delete them). By migrating 
> to randomized base directories, conflicts between the test paths of different 
> test cases are avoided, even if they run at the same time.






[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-06 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429003#comment-16429003
 ] 

Shashikant Banerjee commented on HDFS-13324:


Rebased patch v0.

> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch, 
> HDFS-13324-HDFS-7240.001.patch
>
>
> We have removed the dependency of DatanodeID in HDSL/Ozone and there is no 
> need for InfoPort and InfoSecurePort.  It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.






[jira] [Updated] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-06 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13324:
---
Attachment: HDFS-13324-HDFS-7240.001.patch

> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch, 
> HDFS-13324-HDFS-7240.001.patch
>
>
> We have removed the dependency of DatanodeID in HDSL/Ozone and there is no 
> need for InfoPort and InfoSecurePort.  It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428985#comment-16428985
 ] 

Ajay Kumar commented on HDFS-13384:
---

[~elgoiri] See TestKMS or TestNetworkTopology. It is a one-liner like the one below:
{code}
@Rule
public final Timeout testTimeout = new Timeout(1);
{code}

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428940#comment-16428940
 ] 

Íñigo Goiri commented on HDFS-13384:


[~ajayydv], can you point to any unit test that uses the rule approach? I'm not 
familiar with it.
I'll try to figure out whether I can generate a MiniDFSCluster with DNs spread 
across nameservices.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428907#comment-16428907
 ] 

Ajay Kumar commented on HDFS-13384:
---

[~elgoiri] thanks for updating the patch. Yes, you are right, it seems the 
datanodes are registering with all nameservices. 
MiniRouterDFSCluster#generateNamenodeConfiguration (L419) adds all nameservices 
to the datanode config. I think it's ok if we don't separate DataNodes per 
nameservice, but then we can skip the assertion (L217-233) after slowing down 
only subcluster0, since we know we will get 4 live datanodes under the current 
settings. For the timeout, shall we create a JUnit rule, as that will bind the 
other test cases to the timeout as well?

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428898#comment-16428898
 ] 

genericqa commented on HDFS-13324:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13324 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13324 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917192/HDFS-13324-HDFS-7240.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23812/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch
>
>
> We have removed the dependency on DatanodeID in HDSL/Ozone, and there is no 
> need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-06 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13324:
---
Status: Patch Available  (was: Open)

> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch
>
>
> We have removed the dependency on DatanodeID in HDSL/Ozone, and there is no 
> need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13301) Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto

2018-04-06 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428878#comment-16428878
 ] 

Nanda kumar commented on HDFS-13301:


Thanks [~shashikant] for the contribution; I have committed this to the feature 
branch.

> Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and 
> DatanodeIDProto
> 
>
> Key: HDFS-13301
> URL: https://issues.apache.org/jira/browse/HDFS-13301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13301-HDFS-7240.000.patch, 
> HDFS-13301-HDFS-7240.001.patch
>
>
> HDFS-13300 decouples DatanodeID from HDSL/Ozone, so it's now safe to remove 
> {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} 
> and {{DatanodeIDProto}}. This jira is to track the removal of Ozone-related 
> fields from {{DatanodeID}} and {{DatanodeIDProto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13301) Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto

2018-04-06 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13301:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and 
> DatanodeIDProto
> 
>
> Key: HDFS-13301
> URL: https://issues.apache.org/jira/browse/HDFS-13301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13301-HDFS-7240.000.patch, 
> HDFS-13301-HDFS-7240.001.patch
>
>
> HDFS-13300 decouples DatanodeID from HDSL/Ozone, so it's now safe to remove 
> {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} 
> and {{DatanodeIDProto}}. This jira is to track the removal of Ozone-related 
> fields from {{DatanodeID}} and {{DatanodeIDProto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13301) Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto

2018-04-06 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13301:
---
Summary: Ozone: Remove containerPort, ratisPort and ozoneRestPort from 
DatanodeID and DatanodeIDProto  (was: Remove containerPort, ratisPort and 
ozoneRestPort from DatanodeID and DatanodeIDProto)

> Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and 
> DatanodeIDProto
> 
>
> Key: HDFS-13301
> URL: https://issues.apache.org/jira/browse/HDFS-13301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13301-HDFS-7240.000.patch, 
> HDFS-13301-HDFS-7240.001.patch
>
>
> HDFS-13300 decouples DatanodeID from HDSL/Ozone, so it's now safe to remove 
> {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} 
> and {{DatanodeIDProto}}. This jira is to track the removal of Ozone-related 
> fields from {{DatanodeID}} and {{DatanodeIDProto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto

2018-04-06 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428870#comment-16428870
 ] 

Nanda kumar commented on HDFS-13301:


+1, LGTM. I will commit this shortly.

> Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and 
> DatanodeIDProto
> -
>
> Key: HDFS-13301
> URL: https://issues.apache.org/jira/browse/HDFS-13301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13301-HDFS-7240.000.patch, 
> HDFS-13301-HDFS-7240.001.patch
>
>
> HDFS-13300 decouples DatanodeID from HDSL/Ozone, so it's now safe to remove 
> {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} 
> and {{DatanodeIDProto}}. This jira is to track the removal of Ozone-related 
> fields from {{DatanodeID}} and {{DatanodeIDProto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428851#comment-16428851
 ] 

Ajay Kumar commented on HDFS-13402:
---

[~elgoiri] +1 for new javadoc you suggested. minor tweak

{code}The path can be specified by setting
 * dfs.federation.router.driver.fs.path=hdfs://host:port/path/to/store {code}

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-04-06 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428848#comment-16428848
 ] 

Nanda kumar commented on HDFS-13348:


Thanks [~shashikant] for bringing it up; we have to make both clusterId and 
DatanodeUuid required fields and handle them properly.
As of now, we don't do anything with the response of the datanode registration. 
We should validate the clusterId and also the datanodeUuid, as done in 
{{HeartbeatEndpointTask#processResponse}} (line 133). We can do this in 
follow-up jiras.

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service

2018-04-06 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428823#comment-16428823
 ] 

Nanda kumar commented on HDFS-13395:


Thanks [~elek] and [~shashikant] for reviewing the patch. I have addressed the 
review comments in v001.

bq. Can we just remove it...
Removed DataNodeServicePlugin.
bq. There is a typo...
Fixed.
bq. Please also update the docker-compose file in...
Done.
bq. I would add additional check to the OzoneHdslDatanodeService.start()...
Added an error-level log message stating that the plugin will not be started 
unless it is invoked through {{HddsDatanodeService}}.
bq.  I would consider to make it as a default value...
Done.

Since the renaming of ObjectStoreRestPlugin is also handled in this jira, we 
can resolve HDFS-13325 as a duplicate once this is committed.
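On the plugin loading itself, for context, the usual config-driven pattern in 
Hadoop (as used by the DataNode) looks roughly like the sketch below; the 
config key name here is an assumption for illustration, not necessarily the one 
introduced by this patch:
{code:java}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ServicePlugin;

public class PluginLoadingSketch {
  private List<ServicePlugin> plugins;

  // Load the plugin classes listed under the (illustrative) key and start
  // each one, handing it the hosting service instance.
  public void startPlugins(Configuration conf, Object service) {
    plugins = conf.getInstances("hdds.datanode.plugins", ServicePlugin.class);
    for (ServicePlugin plugin : plugins) {
      try {
        plugin.start(service);
      } catch (Throwable t) {
        System.err.println("ServicePlugin " + plugin + " could not be started: " + t);
      }
    }
  }
}
{code}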

> Ozone: Plugins support in HDSL Datanode Service
> ---
>
> Key: HDFS-13395
> URL: https://issues.apache.org/jira/browse/HDFS-13395
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13395-HDFS-7240.000.patch, 
> HDFS-13395-HDFS-7240.001.patch
>
>
> As part of the Datanode, we start {{HdslDatanodeService}} if {{ozone}} is 
> enabled. We need a provision to load plugins like {{Ozone Rest Service}} as 
> part of the {{HdslDatanodeService}} start.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service

2018-04-06 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13395:
---
Attachment: HDFS-13395-HDFS-7240.001.patch

> Ozone: Plugins support in HDSL Datanode Service
> ---
>
> Key: HDFS-13395
> URL: https://issues.apache.org/jira/browse/HDFS-13395
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13395-HDFS-7240.000.patch, 
> HDFS-13395-HDFS-7240.001.patch
>
>
> As part of the Datanode, we start {{HdslDatanodeService}} if {{ozone}} is 
> enabled. We need a provision to load plugins like {{Ozone Rest Service}} as 
> part of the {{HdslDatanodeService}} start.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11915) Sync rbw dir on the first hsync() to avoid file lost on power failure

2018-04-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11915:
---
Fix Version/s: 2.9.1

> Sync rbw dir on the first hsync() to avoid file lost on power failure
> -
>
> Key: HDFS-11915
> URL: https://issues.apache.org/jira/browse/HDFS-11915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kanaka Kumar Avvaru
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-11915-01.patch, HDFS-11915-branch-2-01.patch, 
> HDFS-11915-branch-2-02.patch
>
>
> As discussed in HDFS-5042, there is a chance of losing blocks on power failure 
> if the rbw file creation entry is not yet synced to the device. Then the 
> created block does not exist anywhere on disk, neither in rbw nor in finalized.
> As suggested by [~kihwal], we will discuss and track it in this JIRA.
> As suggested by [~vinayrpet], maybe the first hsync() request on a block file 
> can call fsync on its parent (rbw) directory.
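For illustration, syncing a directory from Java looks roughly like the sketch 
below on POSIX filesystems (a sketch of the general technique only, not the 
committed patch):
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

// Open the directory itself and force its metadata to disk, so the directory
// entry of a newly created block file survives a power failure.
static void fsyncDirectory(File dir) throws IOException {
  try (FileChannel channel =
      FileChannel.open(dir.toPath(), StandardOpenOption.READ)) {
    channel.force(true);
  }
}
{code}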



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-06 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428768#comment-16428768
 ] 

Wei Yan commented on HDFS-13045:


I cannot reproduce the error pattern I mentioned above through some fs 
operations, so I am not sure whether it is still valid. The following code is 
just a quick test against this error pattern.
{code:java}
String msg = "Parent directory doesn't exist: /a/a/b";
String src = "/ns1/a";
String dst = "/a";
String newMsg = msg.replaceAll(dst, src);
int minLen = Math.min(dst.length(), src.length());
for (int i = 0; newMsg.equals(msg) && i < minLen; i++) {
  // Check if we can replace sub folders
  String dst1 = dst.substring(0, dst.length() - 1 - i);
  String src1 = src.substring(0, src.length() - 1 - i);
  newMsg = msg.replaceAll(dst1, src1);
}
System.out.println(newMsg);{code}
I think the current patch cannot handle it, as it calls replaceAll on the full 
path in the first place.

One more nit: we may also need to set the stack trace back when generating the 
new exception.
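A minimal illustration of that last point (generic exception types, only to 
show the calls; {{original}} stands for the exception received from the 
subcluster and {{newMsg}} for the rewritten message):
{code:java}
// Rebuild the exception with the rewritten message, but keep the original
// stack trace so the client still sees where the error came from.
IOException rewritten = new IOException(newMsg, original.getCause());
rewritten.setStackTrace(original.getStackTrace());
{code}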

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> For example, we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error message currently looks like "Permission denied. 
> user=user1 is not the owner of inode=/c/d", which may confuse the user. It 
> would be better to map the path back to the original mount path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13408:
-

 Summary: MiniDFSCluster to support being built on randomized base 
directory
 Key: HDFS-13408
 URL: https://issues.apache.org/jira/browse/HDFS-13408
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Xiao Liang
Assignee: Xiao Liang


Files generated by MiniDFSCluster during tests are not properly cleaned up on 
Windows, which fails all subsequent test cases that use the same default 
directory (Windows does not allow other processes to delete them). By migrating 
to randomized base directories, conflicts between the test paths of different 
test cases will be avoided, even if they are running at the same time.
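A minimal sketch of the idea (illustrative only, not the actual patch):
{code:java}
// Give every test its own base directory so leftover files from a previous
// (Windows) run cannot collide with it.
Configuration conf = new HdfsConfiguration();
File baseDir = new File(System.getProperty("java.io.tmpdir"),
    "minidfs-" + UUID.randomUUID());
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();
try {
  // run the test against the cluster here
} finally {
  cluster.shutdown();
}
{code}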



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13364:
---
Fix Version/s: (was: 2.9.2)
   2.9.1

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.2.0, 3.0.3
>
> Attachments: HDFS-13364-branch-2.001.patch, 
> HDFS-13364-branch-3.0.addendum.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428671#comment-16428671
 ] 

genericqa commented on HDFS-13384:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
16s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917874/HDFS-13384.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cf82305272c6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 024d7c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23810/testReport/ |
| Max. process+thread count | 938 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23810/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: 

[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428632#comment-16428632
 ] 

Wei-Chiu Chuang commented on HDFS-13364:


HDFS-13347 breaks the build in branch-2.9.1.

I am going to cherry pick this Jira (HDFS-13364) in branch-2.9.1 too.

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3
>
> Attachments: HDFS-13364-branch-2.001.patch, 
> HDFS-13364-branch-3.0.addendum.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports

2018-04-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428630#comment-16428630
 ] 

Wei-Chiu Chuang commented on HDFS-13347:


Next time, please make sure to link HDFS-13364 in this jira.

Thanks guys!

> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs. We should cache this information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428622#comment-16428622
 ] 

Erik Krogen commented on HDFS-13399:


I think on (2), if the change is not relevant to this JIRA, let's remove it. 
Your logic in (1) seems sound; we can wait until a valid use case comes along, 
and if necessary they can add the new method as an overload. Otherwise LGTM. 
[~shv], do you want to take a look as well?

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-06 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428591#comment-16428591
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Hey [~xkrogen], thanks for the prompt review. :)

*(1) We currently don't have a version of 
NameNodeProxies#createProxy(Configuration, URI, Class, AtomicBoolean) which 
accepts an alignment context, should we add one?*

I saw that this method was only utilized by unit tests via {{NNConnector}}, 
{{DFSAdmin}}, and {{GetGroups}}. I decided not to modify it as it seems 
{{DFSClients}} are not expressly utilizing this call for proxies. If you feel 
otherwise though I would be glad to add it in my next patch.

*(2) Why is one getProxy() method removed from ProtobufRpcEngine?*

I found that that particular method was now unused, and since it was not an 
{{@Override}} from {{RpcEngine}}, I assumed it was acceptable to delete it. I 
thought at the time that it became unused _due_ to the change in {{RpcEngine}}, 
but I see now that that shouldn't have affected it. I'll happily add it back – 
fewer deltas in the patch, and it retains the expected public calls.

*(3) Why is IPFailoverProxyProvider passing null as its alignment context 
instead of getAlignmentContext()?*

Ah, an unfortunate accident from overloading so many methods. Good catch. It 
should be {{getAlignmentContext()}}.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428569#comment-16428569
 ] 

Íñigo Goiri commented on HDFS-13384:


[~ajayydv], thanks for the comments; I tackled most of them in 
[^HDFS-13384.001.patch].
Regarding subcluster0 being unavailable resulting in 2 DNs available, I also 
thought the same and that was my initial intention.
However, the MiniDFSCluster topology for federated DNs makes all DNs join all 
subclusters.
I tried to see if there was an easy way to tune that, but I couldn't find one; 
we should add an easy way to set that topology up and use it in the RBF tests.
Given this scenario, I kept the check when only subcluster0 is slow; once we 
have the new setup, we can look for only 2 DNs in that case (for now, I would 
keep that check for completeness).
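For reference, the federated MiniDFSCluster setup is roughly the sketch below 
(the counts are illustrative); as discussed above, every DN started by the 
builder gets all the nameservices in its configuration, which is why they all 
join all subclusters:
{code:java}
// Two nameservices, four DNs; each DN registers with both nameservices.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(2))
    .numDataNodes(4)
    .build();
{code}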


> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13384:
---
Attachment: HDFS-13384.001.patch

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2018-04-06 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428532#comment-16428532
 ] 

Jonathan Eagles commented on HDFS-7765:
---

[~janmejay], I think [~wankunde]'s assessment matches my experience for when 
this issue happens. Once an IOException happens at max buffer size, this class 
becomes unusable.

Much like this other Apache stream class, used here as a reference: flush if we 
can't write, then write. That way the state is not modified until it is safe.
https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/output/ByteArrayOutputStream.java#L171
{code}
  public synchronized void write(int b) throws IOException {
    int newcount = count + 1;
    if (newcount > buf.length) {
      // Flush before the buffer would overflow, so a failed flush leaves
      // count (and the buffer) unchanged.
      flushBuffer();
    }
    buf[count++] = (byte)b;
  }
{code}

I haven't checked the rest of the FSOutputSummer for correctness. That is worth 
checking.

> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: Centos 6, Open JDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
>Priority: Major
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, we saw exceptions like the following while 
> trying to write to the write-ahead log in HDFS. 
> The exception occurrs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76]
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> 

[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428527#comment-16428527
 ] 

Xiao Chen commented on HDFS-13056:
--

Casting my official +1 on this; I will let it float for a few days in case 
Steve or other watchers want to review. I will commit on Tuesday if there are 
no further comments.

[~dennishuo], please make sure to consider Steve's comment about DFSClient in 
the webhdfs subtask: deprecate the methods instead of simply removing them.

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possible even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428516#comment-16428516
 ] 

Xiao Chen commented on HDFS-13363:
--

My suggestion is based on the following reasons:
- For regular paths, the input parameter would be resolved to the iip, and 
iip.getPath will be the same as the input.
- For advanced cases like the /.reserved/inode paths Daryn mentioned, as a 
client, getting an exception that mentions my input path seems more intuitive 
than getting an exception that mentions the resolved path.
- This also avoids the security concern of unnecessarily revealing paths that 
the user doesn't have permission to see - something that currently one has to 
examine the code to verify.
- Path resolution isn't free. Since this is done inside the write lock, the 
cheaper the better.

If you worry about resolving the path to the inode, maybe we can add the inode 
id to the message. The client usually doesn't care about the inode id though, 
since that's NN internals.
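A rough sketch of the wrapping itself, whichever path string we settle on (an 
illustrative helper, not the committed change; {{srcArg}} stands for the path 
chosen above):
{code:java}
import org.apache.hadoop.hdfs.protocol.AclException;

// Re-throw with the chosen path in the message; the FSDirAclOp-level caller
// is the one that knows the path, so the wrapping belongs there.
static void rethrowWithPath(AclException e, String srcArg) throws AclException {
  throw new AclException("Invalid ACL operation on path " + srcArg
      + ": " + e.getMessage());
}
{code}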

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch
>
>
> When AclTransformation methods throws AclException, it does not record the 
> file path that has the exception. Therefore even if it throws an exception, 
> we would never know which file has those invalid ACLs.
>  
> These AclTransformation methods are invoked in FSDirAclOp methods, which know 
> the file path. These FSDirAclOp methods can catch AclException, and then add 
> the file path in the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428501#comment-16428501
 ] 

Íñigo Goiri commented on HDFS-13402:


Thanks [~yiran], in  [^HDFS-13402.002.patch] the line is still longer than 80 
characters.
Regarding [~ajayydv]'s comment, I agree we could say something like:
{code}
/**
 * {@link StateStoreDriver} implementation based on a filesystem. The common
 * implementation uses HDFS as a backend. The path can be specified setting
 * dfs.federation.router.driver.fs.path=hdfs://host:port/path/to/store.
 */
{code}

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-04-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-13176:
-
   Resolution: Fixed
Fix Version/s: 3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-04-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428344#comment-16428344
 ] 

Sean Mackrory commented on HDFS-13176:
--

+1 and committed to branch-2. Also ran Yetus, etc. myself.

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176-branch-2.01.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, 
> HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, 
> HDFS-13176-branch-2_yetus.log, HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428343#comment-16428343
 ] 

genericqa commented on HDFS-13407:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
22s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-cblock in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | 

[jira] [Assigned] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-04-06 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HDFS-13322:
-

Assignee: Gabor Bota

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Gabor Bota
>Priority: Minor
>
> The symptoms of this issue are the same as described in HDFS-3608, except that 
> the workaround that was applied (detecting changes in the UID ticket cache) 
> doesn't resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts

2018-04-06 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13342:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~shashikant] for the contribution and [~elek] for the review. I have 
committed this to the feature branch.

> Ozone: Rename and fix ozone CLI scripts
> ---
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone (oz) script has wrong class names for freon etc. As a result, freon 
> cannot be started from the command line. This Jira proposes to fix all of 
> these. The oz script will be renamed to Ozone as well. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428191#comment-16428191
 ] 

genericqa commented on HDFS-13342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
36s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} 

[jira] [Commented] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts

2018-04-06 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428189#comment-16428189
 ] 

Mukul Kumar Singh commented on HDFS-13342:
--

Thanks for the review [~elek]. I will commit this shortly.

> Ozone: Rename and fix ozone CLI scripts
> ---
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts

2018-04-06 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428187#comment-16428187
 ] 

Elek, Marton commented on HDFS-13342:
-

+1. I also tested it: I compiled and started a cluster with docker-compose, and it 
worked well. Thanks for updating it to use the apache/hadoop-runner image.

> Ozone: Rename and fix ozone CLI scripts
> ---
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts

2018-04-06 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13342:

Summary: Ozone: Rename and fix ozone CLI scripts  (was: Ozone: Fix the 
class names in Ozone Script)

> Ozone: Rename and fix ozone CLI scripts
> ---
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13393) Improve OOM logging

2018-04-06 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HDFS-13393:
-

Assignee: Gabor Bota

> Improve OOM logging
> ---
>
> Key: HDFS-13393
> URL: https://issues.apache.org/jira/browse/HDFS-13393
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>
> It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
> native thread" errors in an HDFS cluster. Most often this happens when the 
> DataNode creates DataXceiver threads, or when the balancer creates threads for 
> moving blocks around.
> In most cases, the "OOM" is a symptom of the number of threads reaching the 
> system limit rather than of actually running out of memory, and the current 
> logging of this message is usually misleading (it suggests the failure is due 
> to insufficient memory).
> How about capturing the OOM and, if it is due to "unable to create new native 
> thread", printing a more helpful message like "bump your ulimit" or "take a 
> jstack of the process"?
> Even better, surface this error to make it more visible. It usually takes a 
> while before an in-depth investigation starts after users notice a job failing, 
> and by that time the evidence may already be gone (like jstack output).
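A minimal sketch of the kind of check the description is asking for (the class name and the exact wording below are made up for illustration, not a proposed patch):

{code:java}
import org.slf4j.Logger;

// Hypothetical sketch: tell apart "unable to create new native thread" from a
// genuine heap exhaustion and log actionable advice instead of a bare OOM.
public final class OomHintSketch {

  static void logAndRethrow(OutOfMemoryError e, Logger log) {
    String msg = e.getMessage();
    if (msg != null && msg.contains("unable to create new native thread")) {
      log.error("Thread creation failed: this usually means the process hit the"
          + " OS thread limit (ulimit -u), not that the heap is exhausted."
          + " Consider raising the limit and capturing a jstack of the process.",
          e);
    } else {
      log.error("Out of memory", e);
    }
    throw e;   // the sketch only improves the logging, not the outcome
  }
}
{code}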



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-06 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13407:

Status: Patch Available  (was: Open)

> Ozone: Use separated version schema for Hdds/Ozone projects
> ---
>
> Key: HDFS-13407
> URL: https://issues.apache.org/jira/browse/HDFS-13407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13407-HDFS-7240.001.patch
>
>
> The community has voted to manage Hdds/Ozone in-tree but with a different 
> release cycle. To achieve this we need to separate the versions of the 
> hdds/ozone projects from the mainline hadoop version (currently 3.2.0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-06 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13407:

Attachment: HDFS-13407-HDFS-7240.001.patch

> Ozone: Use separated version schema for Hdds/Ozone projects
> ---
>
> Key: HDFS-13407
> URL: https://issues.apache.org/jira/browse/HDFS-13407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13407-HDFS-7240.001.patch
>
>
> The community has voted to manage Hdds/Ozone in-tree but with a different 
> release cycle. To achieve this we need to separate the versions of the 
> hdds/ozone projects from the mainline hadoop version (currently 3.2.0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-06 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428166#comment-16428166
 ] 

Elek, Marton commented on HDFS-13407:
-

We need HADOOP-15369 first. 

> Ozone: Use separated version schema for Hdds/Ozone projects
> ---
>
> Key: HDFS-13407
> URL: https://issues.apache.org/jira/browse/HDFS-13407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The community has voted to manage Hdds/Ozone in-tree but with a different 
> release cycle. To achieve this we need to separate the versions of the 
> hdds/ozone projects from the mainline hadoop version (currently 3.2.0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-06 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-13407:
---

 Summary: Ozone: Use separated version schema for Hdds/Ozone 
projects
 Key: HDFS-13407
 URL: https://issues.apache.org/jira/browse/HDFS-13407
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


The community has voted to manage Hdds/Ozone in-tree but with a different release 
cycle. To achieve this we need to separate the versions of the hdds/ozone projects 
from the mainline hadoop version (currently 3.2.0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7967) Reduce the performance impact of the balancer

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428162#comment-16428162
 ] 

genericqa commented on HDFS-7967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-7967 does not apply to branch-2.8. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-7967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847070/HDFS-7967.branch-2.8.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23808/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce the performance impact of the balancer
> -
>
> Key: HDFS-7967
> URL: https://issues.apache.org/jira/browse/HDFS-7967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7967-branch-2.8.patch, HDFS-7967-branch-2.patch, 
> HDFS-7967.branch-2-1.patch, HDFS-7967.branch-2.001.patch, 
> HDFS-7967.branch-2.002.patch, HDFS-7967.branch-2.8-1.patch, 
> HDFS-7967.branch-2.8.001.patch, HDFS-7967.branch-2.8.002.patch, 
> HDFS-7967.branch-2.8.003.patch
>
>
> The balancer needs to query for blocks to move from overly full DNs.  The 
> block lookup is extremely inefficient.  An iterator of the node's blocks is 
> created from the iterators of its storages' blocks.  A random number is 
> chosen corresponding to how many blocks will be skipped via the iterator.  
> Each skip requires costly scanning of triplets.
> The current design also only considers node imbalances while ignoring 
> imbalances within the node's storages.  A more efficient and intelligent 
> design may eliminate the costly skipping of blocks via round-robin selection 
> of blocks from the storages based on remaining capacity.
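To make that last point concrete, here is a toy sketch (illustrative names only, not the Balancer or NameNode code) of drawing blocks round-robin from a node's storages, visiting the storages with the least remaining capacity first, instead of skipping a random number of blocks through a merged iterator:

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Toy sketch of round-robin block selection across a node's storages.
public class RoundRobinBlockPicker<B> {

  /** Minimal stand-in for a storage: its remaining capacity and its blocks. */
  public interface Storage<T> {
    long getRemaining();
    Iterator<T> getBlockIterator();
  }

  public Iterator<B> iterate(List<Storage<B>> storages) {
    // Visit the fullest storages (least remaining capacity) first.
    Comparator<Storage<B>> fullestFirst =
        Comparator.comparingLong(Storage::getRemaining);
    List<Storage<B>> ordered = new ArrayList<>(storages);
    ordered.sort(fullestFirst);

    Deque<Iterator<B>> queue = new ArrayDeque<>();
    for (Storage<B> s : ordered) {
      queue.add(s.getBlockIterator());
    }

    return new Iterator<B>() {
      @Override public boolean hasNext() {
        while (!queue.isEmpty() && !queue.peek().hasNext()) {
          queue.poll();                    // drop exhausted storages
        }
        return !queue.isEmpty();
      }

      @Override public B next() {
        if (!hasNext()) {
          throw new NoSuchElementException();
        }
        Iterator<B> it = queue.poll();
        B block = it.next();
        queue.add(it);                     // rotate to the end of the queue
        return block;
      }
    };
  }
}
{code}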



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12136:
--
Target Version/s: 2.8.4  (was: 2.8.3)

> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor.  Unfortunately it holds the exclusive dataset lock to open and read 
> the metafile multiple times.  Block sender instantiation becomes serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity.  Ex. lost node replication, balancing, or decommissioning.  
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock.  Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.
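Not the actual DataNode code, but the general shape of the fix such a report calls for is to shrink the lock scope: capture what is needed under the lock, then do the disk I/O outside it. A hypothetical sketch (all names invented):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of narrowing a lock to the metadata lookup only, so the
// slow metafile read does not serialize concurrent construction.
public class LockScopeSketch {

  /** Stand-in for whatever maps a block to its on-disk meta file. */
  interface BlockHandle {
    Path resolveMetaFile();
  }

  private final ReentrantLock datasetLock = new ReentrantLock();

  byte[] readLastChecksum(BlockHandle block) throws IOException {
    final Path metaFile;
    datasetLock.lock();
    try {
      metaFile = block.resolveMetaFile();   // cheap in-memory lookup only
    } finally {
      datasetLock.unlock();
    }
    // Expensive disk I/O happens with no lock held.
    byte[] bytes = Files.readAllBytes(metaFile);
    int n = Math.min(4, bytes.length);      // pretend the checksum is the last 4 bytes
    byte[] tail = new byte[n];
    System.arraycopy(bytes, bytes.length - n, tail, 0, n);
    return tail;
  }
}
{code}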



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13111) Close recovery may incorrectly mark blocks corrupt

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-13111:
--
Target Version/s: 2.8.4  (was: 2.8.3)

> Close recovery may incorrectly mark blocks corrupt
> --
>
> Key: HDFS-13111
> URL: https://issues.apache.org/jira/browse/HDFS-13111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> Close recovery can leave a block marked corrupt until the next FBR arrives 
> from one of the DNs.  The reason is unclear, but it has happened multiple times 
> when a DN has I/O-saturated disks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7967) Reduce the performance impact of the balancer

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7967:
-
Target Version/s: 2.8.4  (was: 2.8.3)

> Reduce the performance impact of the balancer
> -
>
> Key: HDFS-7967
> URL: https://issues.apache.org/jira/browse/HDFS-7967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7967-branch-2.8.patch, HDFS-7967-branch-2.patch, 
> HDFS-7967.branch-2-1.patch, HDFS-7967.branch-2.001.patch, 
> HDFS-7967.branch-2.002.patch, HDFS-7967.branch-2.8-1.patch, 
> HDFS-7967.branch-2.8.001.patch, HDFS-7967.branch-2.8.002.patch, 
> HDFS-7967.branch-2.8.003.patch
>
>
> The balancer needs to query for blocks to move from overly full DNs.  The 
> block lookup is extremely inefficient.  An iterator of the node's blocks is 
> created from the iterators of its storages' blocks.  A random number is 
> chosen corresponding to how many blocks will be skipped via the iterator.  
> Each skip requires costly scanning of triplets.
> The current design also only considers node imbalances while ignoring 
> imbalances within the nodes's storages.  A more efficient and intelligent 
> design may eliminate the costly skipping of blocks via round-robin selection 
> of blocks from the storages based on remaining capacity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12703:
--
Target Version/s: 2.8.4  (was: 2.8.3)

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.
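The quoted {{scheduleAtFixedRate}} behaviour is why the usual defensive pattern is to catch everything inside the scheduled task itself; a generic sketch of that pattern (not the eventual HDFS fix) is below:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Generic pattern: swallow (and log) exceptions inside the periodic task so a
// single failed pass cannot suppress all future executions.
public class ResilientMonitorSketch {

  public static void main(String[] args) {
    ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    Runnable onePass = () -> {
      // one pass of the monitor's real work would go here
    };

    executor.scheduleAtFixedRate(() -> {
      try {
        onePass.run();
      } catch (Throwable t) {
        // Log and continue; letting t propagate would cancel the periodic task.
        System.err.println("Monitor pass failed: " + t);
      }
    }, 0, 30, TimeUnit.SECONDS);
  }
}
{code}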



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12704) FBR may corrupt block state

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12704:
--
Target Version/s: 2.8.4  (was: 2.8.3)

> FBR may corrupt block state
> ---
>
> Key: HDFS-12704
> URL: https://issues.apache.org/jira/browse/HDFS-12704
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> If FBR processing generates a runtime exception it is believed to foul the 
> block state and lead to unpredictable behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12747) Lease monitor may infinitely loop on the same lease

2018-04-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428151#comment-16428151
 ] 

Junping Du commented on HDFS-12747:
---

Move to 2.8.4 as 2.8.3 has been released.

> Lease monitor may infinitely loop on the same lease
> ---
>
> Key: HDFS-12747
> URL: https://issues.apache.org/jira/browse/HDFS-12747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> Lease recovery incorrectly handles UC files if the last block is complete but 
> the penultimate block is committed.  "Incorrectly handles" is a euphemism for 
> "infinitely loops for days and leaves all abandoned streams open until 
> customers complain".
> The problem may manifest when:
> # Block1 is committed but seemingly never completed
> # Block2 is allocated
> # Lease recovery is initiated for block2
> # Commit block synchronization invokes {{FSNamesytem#closeFileCommitBlocks}}, 
> causing:
> #* {{commitOrCompleteLastBlock}} to mark block2 as complete
> #* 
> {{finalizeINodeFileUnderConstruction}}/{{INodeFile.assertAllBlocksComplete}} 
> to throw {{IllegalStateException}} because the penultimate block1 is 
> "COMMITTED but not COMPLETE"
> # The next lease recovery results in an infinite loop.
> The {{LeaseManager}} expects that {{FSNamesystem#internalReleaseLease}} will 
> either init recovery and renew the lease, or remove the lease.  In the 
> described state it does neither.  The switch case will break out if the last 
> block is complete.  (The case statement ironically contains an assert).  
> Since nothing changed, the lease is still the “next” lease to be processed.  
> The lease monitor loops for 25ms on the same lease, sleeps for 2s, loops on 
> it again.
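A much simplified model (not the NameNode code; the names here are invented) of why the monitor spins: if the release call neither removes nor renews the lease, the same lease stays at the head of the sorted set and is re-examined until the time budget runs out:

{code:java}
import java.util.SortedSet;

// Simplified model of the monitor loop described above.
public class LeaseLoopSketch {

  interface Lease extends Comparable<Lease> { }

  interface Releaser {
    /** Expected to remove or renew the lease; returning false models the bug. */
    boolean release(Lease lease);
  }

  static void checkLeases(SortedSet<Lease> expired, Releaser releaser,
      long budgetMs) {
    long deadline = System.currentTimeMillis() + budgetMs;
    while (!expired.isEmpty() && System.currentTimeMillis() < deadline) {
      Lease oldest = expired.first();
      // release() is expected to remove or renew the lease as a side effect.
      if (!releaser.release(oldest)) {
        // Bug path: nothing changed, so first() hands back the same lease on
        // the next iteration and the loop spins until the budget (e.g. 25ms)
        // runs out; the monitor then sleeps (e.g. 2s) and repeats.
        System.err.println("No progress on lease held by " + oldest);
      }
    }
  }
}
{code}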



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12747) Lease monitor may infinitely loop on the same lease

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12747:
--
Target Version/s: 2.8.4  (was: 2.8.3)

> Lease monitor may infinitely loop on the same lease
> ---
>
> Key: HDFS-12747
> URL: https://issues.apache.org/jira/browse/HDFS-12747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> Lease recovery incorrectly handles UC files if the last block is complete but 
> the penultimate block is committed.  "Incorrectly handles" is a euphemism for 
> "infinitely loops for days and leaves all abandoned streams open until 
> customers complain".
> The problem may manifest when:
> # Block1 is committed but seemingly never completed
> # Block2 is allocated
> # Lease recovery is initiated for block2
> # Commit block synchronization invokes {{FSNamesytem#closeFileCommitBlocks}}, 
> causing:
> #* {{commitOrCompleteLastBlock}} to mark block2 as complete
> #* 
> {{finalizeINodeFileUnderConstruction}}/{{INodeFile.assertAllBlocksComplete}} 
> to throw {{IllegalStateException}} because the penultimate block1 is 
> "COMMITTED but not COMPLETE"
> # The next lease recovery results in an infinite loop.
> The {{LeaseManager}} expects that {{FSNamesystem#internalReleaseLease}} will 
> either init recovery and renew the lease, or remove the lease.  In the 
> described state it does neither.  The switch case will break out if the last 
> block is complete.  (The case statement ironically contains an assert).  
> Since nothing changed, the lease is still the “next” lease to be processed.  
> The lease monitor loops for 25ms on the same lease, sleeps for 2s, loops on 
> it again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428136#comment-16428136
 ] 

genericqa commented on HDFS-11610:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-11610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917824/HDFS-11610.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 5fa91d8d13eb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23805/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23805/testReport/ |
| Max. process+thread count | 2748 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23805/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sun.net.spi.nameservice.NameService has moved to a new location
> 

[jira] [Commented] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428130#comment-16428130
 ] 

genericqa commented on HDFS-11610:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-11610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879343/HDFS-11610.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 536b23a2564d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23803/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23803/testReport/ |
| Max. process+thread count | 2617 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23803/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sun.net.spi.nameservice.NameService has 

[jira] [Commented] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-06 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428125#comment-16428125
 ] 

Mukul Kumar Singh commented on HDFS-13342:
--

Thanks for working on this [~shashikant].

+1, the v5 patch looks good to me. I have tested this patch and the acceptance 
tests are working for me.

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-06 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428104#comment-16428104
 ] 

Shashikant Banerjee commented on HDFS-13342:


Patch v5 addresses the acceptance test issues.

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-06 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13342:
---
Attachment: HDFS-13342-HDFS-7240.005.patch

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428096#comment-16428096
 ] 

Takanobu Asanuma edited comment on HDFS-11610 at 4/6/18 8:24 AM:
-

Thanks for updating it, [~ajisakaa]! I've just confirmed that the patch works 
fine with java 10. +1 (non-binding).


was (Author: tasanuma0829):
Thaks for updating it, [~ajisakaa]! I've just confirmed that the patch works 
fine with java 10. +1 (non-binding).

> sun.net.spi.nameservice.NameService has moved to a new location
> ---
>
> Key: HDFS-11610
> URL: https://issues.apache.org/jira/browse/HDFS-11610
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-11610.001.patch, HDFS-11610.002.patch
>
>
> sun.net.spi.nameservice.NameService was moved to 
> java.net.InetAddress$NameService in Java 9. TestDFSClientFailover uses this 
> class to spy nameservice.
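As an aside, a test can locate whichever class the running JDK actually provides with plain reflection; the snippet below is only an illustration of that idea, not necessarily what the attached patches do:

{code:java}
// Illustration only: pick whichever NameService class exists on this JDK.
public class NameServiceLocator {

  static Class<?> nameServiceClass() throws ClassNotFoundException {
    try {
      // Java 9+ location (a nested type inside java.net.InetAddress).
      return Class.forName("java.net.InetAddress$NameService");
    } catch (ClassNotFoundException e) {
      // Pre-Java 9 location.
      return Class.forName("sun.net.spi.nameservice.NameService");
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println("NameService class: " + nameServiceClass().getName());
  }
}
{code}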



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428096#comment-16428096
 ] 

Takanobu Asanuma commented on HDFS-11610:
-

Thaks for updating it, [~ajisakaa]! I've just confirmed that the patch works 
fine with java 10. +1 (non-binding).

> sun.net.spi.nameservice.NameService has moved to a new location
> ---
>
> Key: HDFS-11610
> URL: https://issues.apache.org/jira/browse/HDFS-11610
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-11610.001.patch, HDFS-11610.002.patch
>
>
> sun.net.spi.nameservice.NameService was moved to 
> java.net.InetAddress$NameService in Java 9. TestDFSClientFailover uses this 
> class to spy nameservice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428088#comment-16428088
 ] 

genericqa commented on HDFS-13402:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13402 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917826/HDFS-13402.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f67af131c949 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23806/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23806/testReport/ |
| Max. process+thread count | 1351 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console 

[jira] [Resolved] (HDFS-13394) Ozone: ContainerID has incorrect package name

2018-04-06 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDFS-13394.

Resolution: Not A Problem

> Ozone: ContainerID has incorrect package name
> -
>
> Key: HDFS-13394
> URL: https://issues.apache.org/jira/browse/HDFS-13394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
>
> The {{ContainerID}} package name and the directory structure where the class is 
> present don't match.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-06 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13342:
-
Attachment: (was: HDFS-13342-HDFS-7240.mukul.patch)

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-06 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13342:
-
Attachment: HDFS-13342-HDFS-7240.mukul.patch

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch
>
>
> The Ozone CLI (oz script) has the wrong class names for freon etc., as a result 
> of which freon cannot be started from the command line. This Jira proposes to 
> fix all of these. The oz script will be renamed to Ozone as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428046#comment-16428046
 ] 

Yiran Wu edited comment on HDFS-13402 at 4/6/18 7:05 AM:
-

Thanks [~elgoiri] and [~ajayydv] for review my code, I added a new patch.


was (Author: yiran):
Thanks [~elgoiri] and [~ajayydv] for reviewing my code, I added a new patch.

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428046#comment-16428046
 ] 

Yiran Wu commented on HDFS-13402:
-

Thanks [~elgoiri] and [~ajayydv] for reviewing my code, I added a new patch.

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13402:

Status: Patch Available  (was: Open)

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13402:

Status: Open  (was: Patch Available)

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}






[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13402:

Attachment: HDFS-13402.002.patch

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}






[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-06 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13402:

Description: 
{code:java}
/**
 *StateStoreDriver}implementation based on a filesystem. The most common uses
 * HDFS as a backend.
 */
{code}

to

{code:java}
/**
 * {@link StateStoreDriver} implementation based on a filesystem. The most 
common uses
 * HDFS as a backend.
 */
{code}


  was:
{code:java}
/**
 *StateStoreDriver}implementation based on a filesystem. The most common uses
 * HDFS as a backend.
 */
{code}

to

{code:java}
/**
 * {@link StateStoreDriver}implementation based on a filesystem. The most 
common uses
 * HDFS as a backend.
 */
{code}



> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}






[jira] [Commented] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428026#comment-16428026
 ] 

Akira Ajisaka commented on HDFS-11610:
--

Thanks [~tasanuma0829]! I updated the patch to activate the profile when the Java version is 9 or higher.

> sun.net.spi.nameservice.NameService has moved to a new location
> ---
>
> Key: HDFS-11610
> URL: https://issues.apache.org/jira/browse/HDFS-11610
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-11610.001.patch, HDFS-11610.002.patch
>
>
> sun.net.spi.nameservice.NameService was moved to 
> java.net.InetAddress$NameService in Java 9. TestDFSClientFailover uses this 
> class to spy nameservice.
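
As a rough illustration of the incompatibility, here is a small self-contained sketch that resolves whichever name-service type the running JDK provides. The helper is hypothetical and is not what the attached patches do (per the comment above, the patch activates a Maven profile based on the Java version instead):

{code:java}
public class NameServiceProbe {
  /**
   * Hypothetical helper: load the name-service SPI type from its Java 9+
   * location first, then fall back to the pre-Java-9 location. Class.forName
   * only loads the type, so the private nesting on Java 9+ is not a problem.
   */
  static Class<?> resolveNameServiceClass() throws ClassNotFoundException {
    try {
      // Java 9 and later: nested private interface of InetAddress.
      return Class.forName("java.net.InetAddress$NameService");
    } catch (ClassNotFoundException e) {
      // Java 8 and earlier: the original sun.* SPI location.
      return Class.forName("sun.net.spi.nameservice.NameService");
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println("Resolved name service: " + resolveNameServiceClass().getName());
  }
}
{code}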






[jira] [Updated] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11610:
-
Attachment: HDFS-11610.002.patch

> sun.net.spi.nameservice.NameService has moved to a new location
> ---
>
> Key: HDFS-11610
> URL: https://issues.apache.org/jira/browse/HDFS-11610
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-11610.001.patch, HDFS-11610.002.patch
>
>
> sun.net.spi.nameservice.NameService was moved to 
> java.net.InetAddress$NameService in Java 9. TestDFSClientFailover uses this 
> class to spy nameservice.






[jira] [Commented] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-04-06 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428012#comment-16428012
 ] 

Ranith Sardar commented on HDFS-13292:
--

Thanks [~shahrs87] and [~surendrasingh] for your valuable comments.

!Screenshot from 2018-04-06 11-48-56.png!
All test cases pass locally.

> Crypto command should give proper exception when key is already exist for 
> zone directory
> 
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13292.001.patch, HDFS-13292.002.patch, Screenshot 
> from 2018-04-06 11-48-56.png
>
>
> {{Scenario:}}
>  # Create a directory
>  # Create an EZ for the above directory with Key1
>  # Try to create a zone again for the same directory with a different key, i.e. Key2
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, 
> so no new zone can be created on it}}
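
To make the expected behaviour concrete, here is a minimal sketch of the kind of pre-check that would produce such a message; the helper and wording below are hypothetical and are not taken from the attached patches:

{code:java}
import java.io.IOException;

public class EncryptionZonePrecheck {
  /**
   * Hypothetical check: fail fast with a zone-specific message when the
   * target directory is already an encryption-zone root, instead of the
   * misleading "non-empty directory" error reported above.
   */
  static void checkNotExistingZone(String path, boolean isExistingZoneRoot)
      throws IOException {
    if (isExistingZoneRoot) {
      throw new IOException("Directory " + path
          + " is already an encryption zone; a new zone with a different key"
          + " cannot be created on it.");
    }
  }

  public static void main(String[] args) throws IOException {
    // Mirrors the reported scenario: /kms is already a zone created with key1.
    checkNotExistingZone("/kms", true);
  }
}
{code}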






[jira] [Updated] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-04-06 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13292:
-
Attachment: Screenshot from 2018-04-06 11-48-56.png

> Crypto command should give proper exception when key is already exist for 
> zone directory
> 
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13292.001.patch, HDFS-13292.002.patch, Screenshot 
> from 2018-04-06 11-48-56.png
>
>
> {{Scenario:}}
>  # Create a directory
>  # Create an EZ for the above directory with Key1
>  # Try to create a zone again for the same directory with a different key, i.e. Key2
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, 
> so no new zone can be created on it}}






[jira] [Commented] (HDFS-11610) sun.net.spi.nameservice.NameService has moved to a new location

2018-04-06 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428010#comment-16428010
 ] 

Takanobu Asanuma commented on HDFS-11610:
-

Hi [~ajisakaa], how about supporting Java 10, which was released a few weeks ago?

> sun.net.spi.nameservice.NameService has moved to a new location
> ---
>
> Key: HDFS-11610
> URL: https://issues.apache.org/jira/browse/HDFS-11610
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-11610.001.patch
>
>
> sun.net.spi.nameservice.NameService was moved to 
> java.net.InetAddress$NameService in Java 9. TestDFSClientFailover uses this 
> class to spy nameservice.


