[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-02 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Attachment: HDFS-13877.004.patch
Status: Patch Available  (was: In Progress)

[~jojochuang] Uploaded rev 004: checking more specific exception types now. 
Fixed one more checkstyle issue.

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch, HDFS-13877.004.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
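For reference, the operation mirrors the WebHDFS snapshot-diff call from HDFS-13052, so it can be exercised against HttpFS (default port 14000) with a plain HTTP GET. A minimal client sketch; the host, user, path, and snapshot names are placeholders:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: diff snapshots s1 and s2 of /data through HttpFS.
// Host, port, user, path and snapshot names are placeholders.
public class SnapshotDiffClient {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://httpfs.example.com:14000/webhdfs/v1/data"
        + "?op=GETSNAPSHOTDIFF&user.name=hdfs"
        + "&oldsnapshotname=s1&snapshotname=s2");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON-encoded SnapshotDiffReport
      }
    }
  }
}
{code}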






[jira] [Commented] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636425#comment-16636425
 ] 

Hadoop QA commented on HDDS-521:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 45s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-521 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12942230/HDDS-521.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux fd4788e87d04 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa7f707 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1271/testReport/ |
| Max. process+thread count | 337 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1271/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop 

[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-02 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.






[jira] [Updated] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-521:

Target Version/s: 0.2.2  (was: 0.3.0)

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch, HDDS-521.01.patch
>
>
> The delete bucket call will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444, but we need to double-check the 
> headers and add acceptance tests.
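A delete handler in the s3gateway's JAX-RS style might look like the sketch below. This is an illustrative shape under assumed names, not the committed patch; S3 returns 204 No Content on a successful bucket delete:

{code:java}
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Illustrative sketch only; class and helper names are hypothetical.
@Path("/{bucket}")
public class DeleteBucketSketch {

  @DELETE
  public Response delete(@PathParam("bucket") String bucketName) {
    // Resolve the volume from the authenticated user name, then delete
    // the bucket. Error mapping (404 NoSuchBucket, 409 BucketNotEmpty)
    // is omitted here.
    return Response.status(Response.Status.NO_CONTENT).build();
  }
}
{code}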






[jira] [Commented] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636421#comment-16636421
 ] 

Hadoop QA commented on HDDS-521:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 54s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-521 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12942230/HDDS-521.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a115ae7df109 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa7f707 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1272/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1272/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop 

[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-10-02 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636410#comment-16636410
 ] 

Pranay Singh commented on HDFS-1915:


I made the changes below to the config file, and append seems to be working 
fine on a single-node cluster.


<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

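For context, these settings stop the DFS client from trying to replace a failed or missing datanode in the write pipeline, which is what otherwise makes append fail on clusters with fewer datanodes than the replication factor. A minimal sketch of the append call that the fuse-dfs write path ultimately maps to; the cluster URI and file path are placeholders:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: append to an existing HDFS file. The URI and path are
// placeholders; assumes the replace-datanode-on-failure settings above.
public class AppendSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
        "NEVER");
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
    try (FSDataOutputStream out = fs.append(new Path("/upload/counter1.txt"))) {
      out.writeBytes("appended line\n");
    }
  }
}
{code}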

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> I am able to do hadoop fs -put, but when I try to use an FTP client (FTP PUT) 
> to do the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any write 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in the NameNode log (I did an FTP GET on counter.txt and an FTP PUT 
> with counter1.txt):
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root 
> ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser 
> ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.






[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-10-02 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Status: Patch Available  (was: In Progress)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>






[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-10-02 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Status: In Progress  (was: Patch Available)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>






[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-10-02 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Attachment: HDFS-1915.004.patch

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch, HDFS-1915.004.patch
>
>






[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636401#comment-16636401
 ] 

Hudson commented on HDFS-13944:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15102 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15102/])
HDFS-13944. [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module. (aajisaka: 
rev fa7f7078a713c44783425195a891582bcf8a6d5c)
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUsage.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/BaseRecord.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/package-info.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/Query.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreService.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MountTable.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreRecordOperations.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/ConsistentHashRing.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/LocalResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/package-info.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/AvailableSpaceResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RouterStore.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipState.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/MembershipStore.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/CachedRecordStore.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreBaseImpl.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NameserviceManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MultipleDestinationMountTableResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/ActiveNamenodeResolver.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 

[jira] [Updated] (HDDS-567) Rename Mapping to ContainerManager in SCM

2018-10-02 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-567:
-
Status: Patch Available  (was: Open)

> Rename Mapping to ContainerManager in SCM
> -
>
> Key: HDDS-567
> URL: https://issues.apache.org/jira/browse/HDDS-567
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-567.000.patch
>
>
> In SCM we have an interface named {{Mapping}} which is actually for container 
> management; it is better to rename this interface to {{ContainerManager}}.






[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636389#comment-16636389
 ] 

Hadoop QA commented on HDFS-12284:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13532 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 28s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} HDFS-13532 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 39s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 23s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12942226/HDFS-12284-HDFS-13532.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 0e8e4583c2c7 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13532 / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25190/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25190/testReport/ |
| Max. process+thread count | 951 (vs. ulimit of 1) |
| modules | 

[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13944:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~elgoiri] for the fix!

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, HDFS-13944.004.patch, 
> javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.
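For illustration, the javadoc tool in JDK 10 rejects constructs that earlier releases tolerated, such as malformed HTML in comments and missing block tags. A hypothetical before/after of that kind of fix, not taken from this patch:

{code:java}
// Hypothetical example; not from the HDFS-13944 patch.
interface RecordStore {
  // Before (javadoc error under JDK 10, bare '<' is malformed HTML):
  //   /** Counts records where key < limit. */

  /**
   * Counts the records whose key is strictly less than {@code limit}.
   *
   * @param limit upper bound (exclusive) on the record key
   * @return the number of matching records
   */
  long count(long limit);
}
{code}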






[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636385#comment-16636385
 ] 

Hudson commented on HDDS-520:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15101 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15101/])
HDDS-520. Implement HeadBucket REST endpoint. Contributed by Bharat (bharat: 
rev ec075791dab032d434b1697107de14bc5db8c087)
* (add) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestHeadBucket.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/HeadBucket.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/EndpointBase.java


> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.2
>
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch, HDDS-520.02.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html
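A HEAD handler in the s3gateway's JAX-RS style might look like the sketch below; the class name and bucket lookup are hypothetical and stubbed, so this is an illustrative shape rather than the committed patch:

{code:java}
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Illustrative sketch only; class name and lookup are hypothetical.
@Path("/{bucket}")
public class HeadBucketSketch {

  @HEAD
  public Response head(@PathParam("bucket") String bucketName) {
    boolean accessible = lookupBucket(bucketName); // stubbed lookup
    return accessible
        ? Response.ok().build()
        : Response.status(Response.Status.NOT_FOUND).build();
  }

  private boolean lookupBucket(String bucketName) {
    return true; // placeholder for a real Ozone bucket lookup
  }
}
{code}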






[jira] [Updated] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-521:

Attachment: HDDS-521.01.patch

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch, HDDS-521.01.patch
>
>
> The delete bucket call will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444, but we need to double-check the 
> headers and add acceptance tests.






[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636380#comment-16636380
 ] 

Hadoop QA commented on HDFS-13952:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 15s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 1m 24s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s{color} | {color:red} root in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} hadoop-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 16s{color} | {color:red} root generated 1327 new + 0 unchanged - 0 fixed = 1327 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 6m 50s{color} | {color:red} root generated 4221 new + 0 unchanged - 0 fixed = 4221 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 175m 45s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 246m 3s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage |
|   | hadoop.yarn.server.nodemanager.containermanager.TestNMProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13952 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12942190/HDFS-13952.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 68ddc6ebf39c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Fix Version/s: 0.2.2

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.2
>
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch, HDDS-520.02.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you, [~elek], for the review.

I have committed this to trunk.

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch, HDDS-520.02.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636373#comment-16636373
 ] 

Bharat Viswanadham commented on HDDS-520:
-

Thank you, [~elek], for the review.

I will commit this patch shortly and will fix the checkstyle issue while 
committing.

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch, HDDS-520.02.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636363#comment-16636363
 ] 

Hadoop QA commented on HDDS-490:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 35s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 35s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} docs in the patch passed. {color} |
| 

[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636357#comment-16636357
 ] 

Hadoop QA commented on HDFS-13944:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
46s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942223/HDFS-13944.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2184d2f32c8b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25189/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25189/testReport/ |
| Max. process+thread count | 973 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf 

[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636350#comment-16636350
 ] 

Sunil Govindan commented on HDFS-13952:
---

Folks, thanks for responding quickly. It's my bad.

The command used to update the pom files didn't change one file, and I somehow 
missed committing that change after my local compile.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> 3.2.0-SNAPSHOT to
> 3.3.0-SNAPSHOT
>  
> On trunk compilation failure:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: HDFS-12284-HDFS-13532.006.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636336#comment-16636336
 ] 

Hadoop QA commented on HDDS-520:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-ozone/s3gateway: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942182/HDDS-520.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 49f2609d565a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1270/artifact/out/diff-checkstyle-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1270/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1270/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636331#comment-16636331
 ] 

Akira Ajisaka commented on HDFS-13944:
--

+1 pending Jenkins for the v4 patch. Thanks!

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, HDFS-13944.004.patch, 
> javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636326#comment-16636326
 ] 

Hadoop QA commented on HDFS-12284:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13532 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
35s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13532 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
28s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942213/HDFS-12284-HDFS-13532.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux e5368a4a417d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13532 / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25188/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25188/testReport/ |
| Max. process+thread count | 950 (vs. ulimit of 1) |
| modules | 

[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13944:
---
Attachment: HDFS-13944.004.patch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, HDFS-13944.004.patch, 
> javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636318#comment-16636318
 ] 

Akira Ajisaka commented on HDFS-13944:
--

Found a typo 'authrozied' in RouterAdminServer.java.

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, javadoc-rbf-000.log, 
> javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13944:
---
Attachment: HDFS-13944.003.patch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, javadoc-rbf-000.log, 
> javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636317#comment-16636317
 ] 

Íñigo Goiri commented on HDFS-13944:


Thanks [~ajisakaa], attached [^HDFS-13944.003.patch], which fixes the error.

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, HDFS-13944.003.patch, javadoc-rbf-000.log, 
> javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-02 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636315#comment-16636315
 ] 

Akira Ajisaka commented on HDFS-13944:
--

Thank you for updating the patch!

There is a javadoc error in RecordStore.java. Would you fix this?
{noformat}
[ERROR] 
/Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java:64:
 error: @param name not found
[ERROR]* @param  The type of the record to store by this interface.
{noformat}

bq. I'm not sure how to handle the checkstyle, if I split the line, javadoc 
complains, if I don't, checkstyle complains.
I prefer fixing the javadoc warnings over fixing checkstyle.
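
For reference, javadoc accepts an {{@param <T>}} tag only when it names a type 
parameter the documented element actually declares; when the two do not match 
(or the tag loses its name), JDK 10 javadoc fails with the "@param name not 
found" error above. Illustrative class only, not the RecordStore source:

{code:java}
/**
 * A store for records of one type.
 *
 * @param <T> the type of the record stored by this interface
 */
public abstract class RecordStoreSketch<T> {
  /** @return the record stored under the given key, or null if absent. */
  public abstract T get(String key);
}
{code}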

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13952:
--
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> 3.2.0-SNAPSHOT to
> 3.3.0-SNAPSHOT
>  
> On trunk compilation failure:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636313#comment-16636313
 ] 

Bharat Viswanadham commented on HDFS-13952:
---

Then I think we can close this issue, as the other change is okay to skip.

I will close out this jira.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> 3.2.0-SNAPSHOT to
> 3.3.0-SNAPSHOT
>  
> On trunk compilation failure:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636292#comment-16636292
 ] 

Elek, Marton commented on HDDS-517:
---

Thanks [~GeLiXin], will test it soon.

Actually I have no idea how Range could be supported. As AWS writes: 'The HEAD 
operation retrieves metadata from an object *without* returning the object 
itself'. So what's the point in defining ranges?

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.
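
For illustration only (not the attached patch), the two steps above map onto a 
JAX-RS handler like the sketch below; {{KeyLookup}} and {{KeyMeta}} are 
invented stand-ins for the object store API:

{code:java}
import java.io.IOException;
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{volume}/{bucket}/{key}")
public class HeadObjectSketch {

  /** Invented lookup interface; stands in for the real key metadata read. */
  interface KeyLookup {
    KeyMeta stat(String volume, String bucket, String key) throws IOException;
  }

  static final class KeyMeta {
    final long length;
    final long modifiedTime;
    KeyMeta(long length, long modifiedTime) {
      this.length = length;
      this.modifiedTime = modifiedTime;
    }
  }

  private final KeyLookup lookup;

  HeadObjectSketch(KeyLookup lookup) {
    this.lookup = lookup;
  }

  @HEAD
  public Response head(@PathParam("volume") String volume,
                       @PathParam("bucket") String bucket,
                       @PathParam("key") String key) {
    try {
      // Step 1 and 2: look up the volume, then read the key's metadata.
      KeyMeta meta = lookup.stat(volume, bucket, key);
      // Metadata travels in the headers; HEAD never returns the body itself.
      return Response.ok()
          .header("Content-Length", meta.length)
          .header("Last-Modified", meta.modifiedTime)
          .build();
    } catch (IOException e) {
      return Response.status(Response.Status.NOT_FOUND).build();
    }
  }
}
{code}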



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636281#comment-16636281
 ] 

Elek, Marton commented on HDFS-13952:
-

The fix is already committed to the trunk by [~rkanter] as an addendum. 

https://github.com/apache/hadoop/commit/96ae4ac45fe84b3da696a7beb3b6590af031543b

The only difference is that hadoop.assemblies.version is not replaced with a 
variable.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> 3.2.0-SNAPSHOT to
> 3.3.0-SNAPSHOT
>  
> On trunk compilation failure:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636275#comment-16636275
 ] 

Elek, Marton commented on HDFS-13952:
-

+1. LGTM.

Will commit it to the trunk soon...

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> 3.2.0-SNAPSHOT to
> 3.3.0-SNAPSHOT
>  
> On trunk compilation failure:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-490) Improve om and scm start up options

2018-10-02 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-490:
--
Attachment: HDDS-490.002.patch

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636269#comment-16636269
 ] 

Hadoop QA commented on HDDS-521:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-521 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942216/HDDS-521.00.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1268/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch
>
>
> The delete bucket will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444 but we need the double check the 
> headers and add acceptance tests.
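
For illustration only (not the attached patch), the flow above could look like 
the following sketch; {{VolumeStore}} and the hard-coded username are invented 
stand-ins for the real resolution logic:

{code:java}
import java.io.IOException;
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}")
public class DeleteBucketSketch {

  /** Invented abstraction over the object store; not the s3gateway API. */
  interface VolumeStore {
    String volumeFor(String username);
    void deleteBucket(String volume, String bucket) throws IOException;
  }

  private final VolumeStore store;

  DeleteBucketSketch(VolumeStore store) {
    this.store = store;
  }

  @DELETE
  public Response delete(@PathParam("bucket") String bucket) {
    try {
      // Locate the volume via the username, then drop the bucket.
      String volume = store.volumeFor("anonymous"); // username resolution elided
      store.deleteBucket(volume, bucket);
      return Response.noContent().build();          // S3 answers 204 No Content
    } catch (IOException e) {
      return Response.status(Response.Status.NOT_FOUND).build();
    }
  }
}
{code}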



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636268#comment-16636268
 ] 

Bharat Viswanadham commented on HDDS-521:
-

This patch is dependent on HDDS-520.

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch
>
>
> The delete bucket will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444 but we need the double check the 
> headers and add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-521:

Status: Patch Available  (was: In Progress)

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch
>
>
> The delete bucket will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444 but we need the double check the 
> headers and add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-521:

Attachment: HDDS-521.00.patch

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-521.00.patch
>
>
> The delete bucket will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444 but we need the double check the 
> headers and add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636259#comment-16636259
 ] 

Íñigo Goiri commented on HDFS-12284:


The branch was broken because I happened to rebase onto trunk while the 
transition from 3.2 to 3.3 broke it.
Rebased; let's see if this comes back cleaner now.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: HDFS-12284-HDFS-13532.005.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636243#comment-16636243
 ] 

Hadoop QA commented on HDDS-354:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
33s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-354 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942200/HDDS-354.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2800e9bb7a48 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision 

[jira] [Resolved] (HDFS-13954) Add missing cleanupSSLConfig() call for TestTimelineClient test

2018-10-02 Thread Aki Tanaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aki Tanaka resolved HDFS-13954.
---
Resolution: Fixed

I created this issue in the wrong project, sorry.

> Add missing cleanupSSLConfig() call for TestTimelineClient test
> ---
>
> Key: HDFS-13954
> URL: https://issues.apache.org/jira/browse/HDFS-13954
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Aki Tanaka
>Priority: Minor
>
> Tests that set up SSL configs can leave conf files lingering unless they are 
> cleaned up via a {{KeyStoreTestUtil.cleanupSSLConfig}} call. The 
> TestTimelineClient test is missing this call.
> If the cleanup method is not called explicitly, a modified ssl-client.xml is 
> left in {{test-classes}}, which might affect subsequent test cases.
>  
> There was a similar report in HDFS-11042, but it looks like we need to fix 
> the TestTimelineClient test too.
>  
> {code:java}
> $ mvn test -Dtest=TestTimelineClient
> $ find .|grep ssl-client.xml$
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
> $ cat 
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
> 
> <configuration>
> <property><name>ssl.client.truststore.reload.interval</name><value>1000</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.truststore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/trustKS.jks</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.keypassword</name><value>clientP</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/clientKS.jks</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.truststore.password</name><value>trustP</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.password</name><value>clientP</value><final>false</final><source>programmatically</source></property>
> </configuration>
> {code}
>  
> After applying this patch, the ssl-client.xml is not generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2018-10-02 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Status: Patch Available  (was: Reopened)

Posted patch for branch-2.9

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10636-branch-2.9.001.patch, HDFS-10636.001.patch, 
> HDFS-10636.002.patch, HDFS-10636.003.patch, HDFS-10636.004.patch, 
> HDFS-10636.005.patch, HDFS-10636.006.patch, HDFS-10636.007.patch, 
> HDFS-10636.008.patch, HDFS-10636.009.patch, HDFS-10636.010.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 
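
As a rough illustration of the idea (none of these classes are the actual 
patch), moving the accessors from java.io.File to URI is what opens the door 
to replicas on external storage:

{code:java}
import java.io.File;
import java.net.URI;

abstract class ReplicaInfoSketch {
  abstract URI getBlockURI();     // where the block data lives
  abstract URI getMetadataURI();  // where the replica metadata lives
}

/** Local replica: both URIs still point at files on the datanode disk. */
class LocalReplicaSketch extends ReplicaInfoSketch {
  private final File blockFile, metaFile;
  LocalReplicaSketch(File blockFile, File metaFile) {
    this.blockFile = blockFile;
    this.metaFile = metaFile;
  }
  @Override URI getBlockURI() { return blockFile.toURI(); }
  @Override URI getMetadataURI() { return metaFile.toURI(); }
}

/** External replica: data lives on a remote store, no java.io.File anywhere. */
class ProvidedReplicaSketch extends ReplicaInfoSketch {
  private final URI data, meta;
  ProvidedReplicaSketch(URI data, URI meta) {
    this.data = data;
    this.meta = meta;
  }
  @Override URI getBlockURI() { return data; }
  @Override URI getMetadataURI() { return meta; }
}
{code}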



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2018-10-02 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Attachment: HDFS-10636-branch-2.9.001.patch

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10636-branch-2.9.001.patch, HDFS-10636.001.patch, 
> HDFS-10636.002.patch, HDFS-10636.003.patch, HDFS-10636.004.patch, 
> HDFS-10636.005.patch, HDFS-10636.006.patch, HDFS-10636.007.patch, 
> HDFS-10636.008.patch, HDFS-10636.009.patch, HDFS-10636.010.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-517) Implement HeadObject REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636235#comment-16636235
 ] 

Bharat Viswanadham edited comment on HDDS-517 at 10/2/18 11:35 PM:
---

[~GeLiXin]

HDDS-560 got checked in; now we can use the OS3Exception classes and convert 
the error response to XML.


was (Author: bharatviswa):
[~GeLiXin]

HDDS-560 checked in, now we can use OS3Exception classes, and convert the error 
response to XML.

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13954) Add missing cleanupSSLConfig() call for TestTimelineClient test

2018-10-02 Thread Aki Tanaka (JIRA)
Aki Tanaka created HDFS-13954:
-

 Summary: Add missing cleanupSSLConfig() call for 
TestTimelineClient test
 Key: HDFS-13954
 URL: https://issues.apache.org/jira/browse/HDFS-13954
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Aki Tanaka


Tests that set up SSL configs can leave conf files lingering unless they are 
cleaned up via a {{KeyStoreTestUtil.cleanupSSLConfig}} call. The 
TestTimelineClient test is missing this call.

If the cleanup method is not called explicitly, a modified ssl-client.xml is 
left in {{test-classes}}, which might affect subsequent test cases.

There was a similar report in HDFS-11042, but it looks like we need to fix 
the TestTimelineClient test too.

 
{code:java}
$ mvn test -Dtest=TestTimelineClient
$ find .|grep ssl-client.xml$
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
$ cat 
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml

<configuration>
<property><name>ssl.client.truststore.reload.interval</name><value>1000</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.truststore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/trustKS.jks</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.keypassword</name><value>clientP</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/clientKS.jks</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.truststore.password</name><value>trustP</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.password</name><value>clientP</value><final>false</final><source>programmatically</source></property>
</configuration>
{code}
 

After applying this patch, the ssl-client.xml is not generated.
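
For illustration, the setup/cleanup pairing looks like the sketch below. The 
class name and paths are invented, and the KeyStoreTestUtil method signatures 
are assumed to match recent Hadoop branches:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class SSLConfigCleanupSketch {
  private static String keystoresDir;
  private static String sslConfDir;

  @BeforeClass
  public static void setupSSL() throws Exception {
    Configuration conf = new Configuration();
    keystoresDir = new File("target/test-dir").getAbsolutePath();
    sslConfDir = KeyStoreTestUtil.getClasspathDir(SSLConfigCleanupSketch.class);
    KeyStoreTestUtil.setupSSLConfig(keystoresDir, sslConfDir, conf, false);
  }

  @AfterClass
  public static void cleanupSSL() throws Exception {
    // Without this call, a modified ssl-client.xml lingers in test-classes.
    KeyStoreTestUtil.cleanupSSLConfig(keystoresDir, sslConfDir);
  }
}
{code}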



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636235#comment-16636235
 ] 

Bharat Viswanadham commented on HDDS-517:
-

[~GeLiXin]

HDDS-560 has been checked in; we can now use the OS3Exception classes and 
convert the error response to XML.

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return it to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.
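
For context, the steps above map naturally onto a JAX-RS HEAD handler. A 
hedged sketch only (the path layout and the {{KeyLookup}}/{{KeyInfo}} 
abstractions are assumptions for illustration, not the actual s3gateway code):

{code:java}
import java.io.IOException;
import java.util.Date;

import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Hedged sketch of a HEAD-object handler; KeyLookup/KeyInfo are stand-ins
// for whatever metadata lookup the gateway actually performs.
@Path("/{bucket}/{path:.+}")
public class HeadObjectEndpoint {

  public interface KeyLookup {
    KeyInfo find(String bucket, String key) throws IOException;
  }

  public static final class KeyInfo {
    final long length;
    final long modifiedTime;
    public KeyInfo(long length, long modifiedTime) {
      this.length = length;
      this.modifiedTime = modifiedTime;
    }
  }

  private final KeyLookup lookup;

  public HeadObjectEndpoint(KeyLookup lookup) {
    this.lookup = lookup;
  }

  @HEAD
  public Response head(@PathParam("bucket") String bucket,
                       @PathParam("path") String key) throws IOException {
    KeyInfo info = lookup.find(bucket, key);
    if (info == null) {
      // HEAD carries no body, so the status code is the whole answer.
      return Response.status(Response.Status.NOT_FOUND).build();
    }
    // Metadata only: no object data is streamed back.
    return Response.ok()
        .header("Content-Length", info.length)
        .header("Last-Modified", new Date(info.modifiedTime))
        .build();
  }
}
{code}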



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636233#comment-16636233
 ] 

Hadoop QA commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13532 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} root in HDFS-13532 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs-rbf in HDFS-13532 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-13532 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-rbf in HDFS-13532 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m  
8s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-rbf in HDFS-13532 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-rbf in HDFS-13532 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 10s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
13s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  6m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942195/HDFS-12284-HDFS-13532.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 51b0b01e1384 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13532 / e8b8604 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25186/artifact/out/branch-mvninstall-root.txt
 |
| compile | 

[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636222#comment-16636222
 ] 

CR Hota commented on HDFS-12284:


Thanks [~elgoiri] for the new patch.

Will get back to you in a couple of days.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, HDFS-12284.000.patch, 
> HDFS-12284.001.patch, HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-02 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636218#comment-16636218
 ] 

Hanisha Koneru commented on HDDS-354:
-

Thanks [~ajayydv] for reporting this and [~nandakumar131] for root-causing the 
issue.

I have uploaded patch v01. It changes the lock in VolumeSet to a 
ReentrantReadWriteLock to synchronize access to {{VolumeSet#volumeMap}}.
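
As a rough illustration of that pattern (a hedged sketch only; the field and 
method names below are stand-ins, not the actual VolumeSet code), readers such 
as the node-report path take the read lock while volume additions and removals 
take the write lock, so a report can no longer observe the map mid-update:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of guarding a volume map with a ReentrantReadWriteLock.
class VolumeMapHolder {
  private final Map<String, Object> volumeMap = new HashMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void addVolume(String root, Object volume) {
    lock.writeLock().lock();   // exclusive: mutates the map
    try {
      volumeMap.put(root, volume);
    } finally {
      lock.writeLock().unlock();
    }
  }

  Object getVolume(String root) {
    lock.readLock().lock();    // shared: many concurrent readers are fine
    try {
      return volumeMap.get(root);
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}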

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-02 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-354:

Status: Patch Available  (was: Open)

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-02 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-354:

Attachment: HDDS-354.001.patch

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-02 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636209#comment-16636209
 ] 

Virajith Jalaparti commented on HDFS-13947:
---

Thanks for working on this, [~belugabehr].

[~elgoiri] The changes related to the PROVIDED part LGTM.

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch, 
> HDFS-13947.3.patch, HDFS-13947.4.patch
>
>
> Review of the Directory Scanner. Replaced a lot of code with a Guava 
> Multimap. Some general housecleaning and improved logging. For performance, 
> this uses {{ArrayList}} instead of {{LinkedList}} where possible; since these 
> lists can be quite large, a LinkedList will consume a lot of memory and be 
> slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
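
As a rough illustration of the Multimap swap mentioned above (a sketch under 
assumed types: String volume paths and Long block IDs stand in for the 
scanner's real types; this is not the patch itself):

{code:java}
import java.util.Collections;
import java.util.List;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;

// Hedged sketch: one ArrayList-backed value list per volume, replacing
// hand-rolled Map<K, List<V>> bookkeeping.
public class ScanDiffExample {
  public static void main(String[] args) {
    ListMultimap<String, Long> diffs = ArrayListMultimap.create();
    diffs.put("/data/vol1", 1003L);
    diffs.put("/data/vol1", 1001L);
    diffs.put("/data/vol2", 1002L);

    // get() returns a live ArrayList-backed view, so sorting a large
    // per-volume list stays cheap compared to a LinkedList.
    List<Long> vol1 = diffs.get("/data/vol1");
    Collections.sort(vol1);

    System.out.println(diffs); // {/data/vol1=[1001, 1003], /data/vol2=[1002]}
  }
}
{code}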



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636187#comment-16636187
 ] 

Íñigo Goiri commented on HDFS-13947:


{{TestBlockReaderLocal}} has been failing for a while, but I cannot find a 
related JIRA.
{{TestNameNodeMetadataConsistency}} started failing recently; same story there.

Do you mind removing the whitespace in the patch?

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch, 
> HDFS-13947.3.patch, HDFS-13947.4.patch
>
>
> Review of the Directory Scanner. Replaced a lot of code with a Guava 
> Multimap. Some general housecleaning and improved logging. For performance, 
> this uses {{ArrayList}} instead of {{LinkedList}} where possible; since these 
> lists can be quite large, a LinkedList will consume a lot of memory and be 
> slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636175#comment-16636175
 ] 

Íñigo Goiri commented on HDFS-12284:


[~crh], I went ahead with the rebase and changed that.
Do you mind checking if everything looks good?

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, HDFS-12284.000.patch, 
> HDFS-12284.001.patch, HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: HDFS-12284-HDFS-13532.004.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, HDFS-12284.000.patch, 
> HDFS-12284.001.patch, HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: (was: HDFS-12284-HDFS-13532.004.patch)

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, HDFS-12284.000.patch, 
> HDFS-12284.001.patch, HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: HDFS-12284-HDFS-13532.004.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, HDFS-12284.000.patch, 
> HDFS-12284.001.patch, HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-02 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636165#comment-16636165
 ] 

BELUGA BEHR commented on HDFS-13947:


{code}
org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testStatisticsForErasureCodingRead
{code}
Ran it a couple of times locally and it eventually passed.
{code}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal
[WARNING] Tests run: 38, Failures: 0, Errors: 0, Skipped: 37, Time elapsed: 
12.324 s - in org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 38, Failures: 0, Errors: 0, Skipped: 37
{code}

{code}
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
{code}
Failed locally, even without the patch applied.

Please accept this patch (and remove the whitespace). Thanks!

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch, 
> HDFS-13947.3.patch, HDFS-13947.4.patch
>
>
> Review of the Directory Scanner. Replaced a lot of code with a Guava 
> Multimap. Some general housecleaning and improved logging. For performance, 
> this uses {{ArrayList}} instead of {{LinkedList}} where possible; since these 
> lists can be quite large, a LinkedList will consume a lot of memory and be 
> slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-526) Clean previous chill mode code from NodeManager.

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636162#comment-16636162
 ] 

Hadoop QA commented on HDDS-526:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} server-scm in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
4s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} server-scm in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
15s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 17s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDDS-479) Add more ozone fs tests in the robot integration framework

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636153#comment-16636153
 ] 

Anu Engineer commented on HDDS-479:
---

[~nilotpalnandi] Thanks for the patch; I am +1 on it.

When I ran the tests, I got a failure:
{code:java}

11:25:29.444 INFO 2018-10-02 18:25:29 ERROR ChunkGroupOutputStream:274 - Try to 
allocate more blocks for write failed, already allocated 0 blocks for this 
write.
copyFromLocal: Allocate block failed, error:INTERNAL_ERROR
{code}
This is the command that Robot ran:
 * ozone fs -copyFromLocal NOTICE.txt o3://bucket1.fstest/testdir/deep/

cc:[~shashikant], [~nandakumar131], [~ljain]

 

I will wait a day for comments, if anyone has any, before committing. Also, the 
trunk is broken right now, so we will have to wait for that to be fixed too.

 

> Add more ozone fs tests in the robot integration framework
> --
>
> Key: HDDS-479
> URL: https://issues.apache.org/jira/browse/HDDS-479
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Minor
>  Labels: alpha2
> Attachments: HDDS-479.001.patch
>
>
> Currently, we have only a few ozone fs tests in the robot integration 
> framework.
> We need to add more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636148#comment-16636148
 ] 

Bharat Viswanadham edited comment on HDFS-13952 at 10/2/18 9:37 PM:


[~jojochuang] Good catch.

I have updated it to use hadoop.version instead of 3.3.0-SNAPSHOT, so that we 
will have one place to update the version.

Thank you [~anu] for the review. Updated the patch.


was (Author: bharatviswa):
[~jojochuang] Good catch.

I have updated it to use hadoop.version, instead of 3.3.0-SNAPSHOT. So, that we 
will have one place to update the version.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-567) Rename Mapping to ContainerManager in SCM

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636149#comment-16636149
 ] 

Anu Engineer commented on HDDS-567:
---

You can fix the issue below while committing. +1; no need for another patch.
 * {{ContainerStateManager.java:Line 218:}}
  {{Event and State Transition Mapping:}} I think we should leave this comment 
as is. The current patch replaces it with {{Event and State Transition 
*ContainerManager*}}.

Really appreciate you taking care of this; it was long overdue.

I am *not* committing since the trunk is *broken* right now. Please feel free 
to commit when the trunk is healthy.

> Rename Mapping to ContainerManager in SCM
> -
>
> Key: HDDS-567
> URL: https://issues.apache.org/jira/browse/HDDS-567
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-567.000.patch
>
>
> In SCM we have an interface named {{Mapping}} which is actually for container 
> management; it would be better to rename this interface to {{ContainerManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636148#comment-16636148
 ] 

Bharat Viswanadham commented on HDFS-13952:
---

[~jojochuang] Good catch.

I have updated it to use hadoop.version instead of 3.3.0-SNAPSHOT, so that we 
will have one place to update the version.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13952:
--
Attachment: HDFS-13952.01.patch

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch, HDFS-13952.01.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636137#comment-16636137
 ] 

Anu Engineer edited comment on HDFS-13952 at 10/2/18 9:24 PM:
--

[~jojochuang] I can see that this build is past this stage in the Jenkins.

[https://builds.apache.org/job/PreCommit-HDFS-Build/25184/console]

and I am able to build this locally. I am +1 on this change, shall I go ahead 
and commit this since the branch is broken without it?

 cc:[~ajfabbri], [~sunil.gov...@gmail.com]

 

[~jojochuang], sorry, my comment crossed with yours; we will wait for the next 
update of the patch. [~bharatviswa]


was (Author: anu):
[~jojochuang] I can see that this build is past this stage in the Jenkins.

[https://builds.apache.org/job/PreCommit-HDFS-Build/25184/console]

and I am able to build this locally. I am +1 on this change, shall I go ahead 
and commit this since the branch is broken without it?

 cc:[~ajfabbri], [~sunil.gov...@gmail.com]

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636137#comment-16636137
 ] 

Anu Engineer edited comment on HDFS-13952 at 10/2/18 9:24 PM:
--

[~jojochuang] I can see that this build is past this stage in the Jenkins.

[https://builds.apache.org/job/PreCommit-HDFS-Build/25184/console]

and I am able to build this locally. I am +1 on this change, shall I go ahead 
and commit this since the branch is broken without it?

 cc:[~ajfabbri], [~sunil.gov...@gmail.com]


was (Author: anu):
[~jojochuang] I can see that this build is past this stage in the Jenkins. 

[https://builds.apache.org/job/PreCommit-HDFS-Build/25184/console]

and I am able to build this locally. I am +1 on this change, shall I go ahead 
and commit this since the branch is broken without it?

 

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636137#comment-16636137
 ] 

Anu Engineer commented on HDFS-13952:
-

[~jojochuang] I can see that this build is past this stage in the Jenkins. 

[https://builds.apache.org/job/PreCommit-HDFS-Build/25184/console]

and I am able to build this locally. I am +1 on this change, shall I go ahead 
and commit this since the branch is broken without it?

 

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636132#comment-16636132
 ] 

Wei-Chiu Chuang commented on HDFS-13952:


There's still one remaining reference to 3.2.0-SNAPSHOT
{code:title=hadoop-project/pom.xml}
3.2.0-SNAPSHOT
{code}
Can you update it even though it's not being used anywhere?

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-526) Clean previous chill mode code from NodeManager.

2018-10-02 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636130#comment-16636130
 ] 

Ajay Kumar commented on HDDS-526:
-

Patch v2 is rebased against trunk.

> Clean previous chill mode code from NodeManager. 
> -
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch, HDDS-526.01.patch, HDDS-526.02.patch
>
>
> Clean up the previous chill mode code in NodeManager and BlockManagerImpl, and 
> add a JMX attribute for chill mode status.
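
As a rough sketch of the JMX half of this (a hedged illustration; the class, 
method, and registration names are assumptions, not the actual SCM code — only 
{{org.apache.hadoop.metrics2.util.MBeans}} is Hadoop's real helper):

{code:java}
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

// Hedged sketch of exposing chill-mode status over JMX.
public class ChillModeTracker implements ChillModeTracker.StatusMXBean {

  /** Public and named "*MXBean" so JMX treats it as an MXBean interface. */
  public interface StatusMXBean {
    boolean isInChillMode();
  }

  private volatile boolean inChillMode = true;
  private ObjectName jmxName;

  public void start() {
    // Appears as Hadoop:service=StorageContainerManager,name=ChillModeStatus.
    jmxName = MBeans.register("StorageContainerManager", "ChillModeStatus", this);
  }

  public void exitChillMode() {
    inChillMode = false;
  }

  @Override
  public boolean isInChillMode() {
    return inChillMode;
  }

  public void stop() {
    if (jmxName != null) {
      MBeans.unregister(jmxName);
      jmxName = null;
    }
  }
}
{code}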



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-526) Clean previous chill mode code from NodeManager.

2018-10-02 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Attachment: HDDS-526.02.patch

> Clean previous chill mode code from NodeManager. 
> -
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch, HDDS-526.01.patch, HDDS-526.02.patch
>
>
> Clean up the previous chill mode code in NodeManager and BlockManagerImpl, and 
> add a JMX attribute for chill mode status.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636127#comment-16636127
 ] 

Bharat Viswanadham commented on HDFS-13952:
---

[~jojochuang]

Good to know that it worked after the Maven version upgrade.

I will go ahead and check this in.

As this is causing a trunk compilation failure and will cause other Jenkins 
jobs to fail, I will check this in. Let me know if we still want to wait for 
Jenkins.

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13951) HDFS DelegationTokenFetcher can't print non-HDFS tokens in a tokenfile

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636128#comment-16636128
 ] 

Hadoop QA commented on HDFS-13951:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
57s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
11s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942154/HDFS-13951-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bb88fc69b813 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8b8604 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25183/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Comment Edited] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636127#comment-16636127
 ] 

Bharat Viswanadham edited comment on HDFS-13952 at 10/2/18 9:13 PM:


[~jojochuang]

Good to know that it worked after the Maven version upgrade.

As this is causing a trunk compilation failure and will cause other Jenkins 
jobs to fail, can we check this in? Let me know if we still want to wait for 
Jenkins.


was (Author: bharatviswa):
[~jojochuang]

Good to know that after maven version upgrade it worked.

I will go ahead and check in this.

As this is causing trunk compilation issue and will cause other Jenkins jobs to 
fail, will check this in. Let me know if we still want to wait for Jenkins?

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-521 started by Bharat Viswanadham.
---
> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> The delete bucket call will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> The reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444, but we need to double-check the 
> headers and add acceptance tests.
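
A hedged sketch of those delete semantics (the {{BucketStore}} interface below 
is a hypothetical stand-in, not the gateway's real API; per the AWS reference, 
success is 204 No Content and a non-empty bucket yields 409 Conflict):

{code:java}
import java.io.IOException;

import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Hedged sketch of a delete-bucket handler.
@Path("/{bucket}")
public class DeleteBucketEndpoint {

  public interface BucketStore {
    boolean exists(String bucket) throws IOException;
    boolean isEmpty(String bucket) throws IOException;
    void delete(String bucket) throws IOException;
  }

  private final BucketStore store;

  public DeleteBucketEndpoint(BucketStore store) {
    this.store = store;
  }

  @DELETE
  public Response delete(@PathParam("bucket") String bucket) throws IOException {
    if (!store.exists(bucket)) {
      return Response.status(Response.Status.NOT_FOUND).build();    // 404
    }
    if (!store.isEmpty(bucket)) {
      // S3 semantics: a non-empty bucket cannot be deleted.
      return Response.status(Response.Status.CONFLICT).build();     // 409
    }
    store.delete(bucket);
    return Response.status(Response.Status.NO_CONTENT).build();     // 204
  }
}
{code}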



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-02 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636103#comment-16636103
 ] 

Ajay Kumar commented on HDDS-8:
---

Test failures are unrelated.

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636096#comment-16636096
 ] 

Wei-Chiu Chuang commented on HDFS-13952:


Got it. It looks like it doesn't work with Maven 3.3.3 for me.
Once I switched to Maven 3.5.4, it works for me now.

I'll file another JIRA to correct that (or bump the supported Maven version -- 
currently we claim to support Maven 3.0.2 and above).

+1

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> Compilation failure on trunk:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-02 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-354:
---

Assignee: Hanisha Koneru

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
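The trace suggests {{VolumeInfo.getScmUsed()}} dereferences a usage object that 
can be null, e.g. for a volume that failed or is still initializing. A minimal 
defensive sketch under that assumption follows; the {{usage}} field name and 
its {{getUsed()}} accessor are assumptions here, and this is only an 
illustration, not the committed fix.

{code}
// Sketch only: assumes a nullable 'usage' field whose type exposes getUsed().
public long getScmUsed() throws IOException {
  if (usage == null) {
    // Raise a checked error instead of an NPE so the node-report
    // publisher can handle or skip the unavailable volume.
    throw new IOException("Volume usage information is unavailable.");
  }
  return usage.getUsed();
}
{code}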



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Attachment: HDDS-520.02.patch

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch, HDDS-520.02.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html
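For readers skimming the thread, a minimal JAX-RS sketch of the 200/404/403 
contract described above is shown here. All names ({{HeadBucketEndpoint}}, 
{{BucketStore}}) are hypothetical illustrations, not the code in the attached 
patches.

{code}
import java.io.IOException;

import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

/** Hypothetical HEAD-bucket handler illustrating the S3 status contract. */
@Path("/{bucket}")
public class HeadBucketEndpoint {

  /** Hypothetical lookup hook standing in for the real Ozone client. */
  public interface BucketStore {
    /** Returns true if the bucket exists; throws SecurityException if denied. */
    boolean bucketExists(String name) throws IOException;
  }

  private final BucketStore store;

  public HeadBucketEndpoint(BucketStore store) {
    this.store = store;
  }

  @HEAD
  public Response head(@PathParam("bucket") String bucketName)
      throws IOException {
    try {
      if (!store.bucketExists(bucketName)) {
        return Response.status(Response.Status.NOT_FOUND).build(); // 404
      }
      return Response.ok().build();                                // 200
    } catch (SecurityException e) {
      return Response.status(Response.Status.FORBIDDEN).build();   // 403
    }
  }
}
{code}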



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636062#comment-16636062
 ] 

Bharat Viswanadham commented on HDDS-520:
-

Updated the patch to fix the compilation failure.

 

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Attachment: HDDS-520.01.patch

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch, HDDS-520.01.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13532) RBF: Adding security

2018-10-02 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636060#comment-16636060
 ] 

CR Hota commented on HDFS-13532:


All, I was able to code a small prototype based on earlier feedback on the 
designs.

I have set up a meeting for everyone to join and share thoughts on the 
prototype and design.

Time - Oct 8th 2018, 3-4 PM PST

This is the zoom link, [https://uber.zoom.us/j/273426631]

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ 
> Security delegation token thoughts_updated.pdf, RBF _ Security delegation 
> token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, RBF_ 
> Security delegation token thoughts_updated_3.pdf, Security_for_Router-based 
> Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636042#comment-16636042
 ] 

Anu Engineer commented on HDDS-512:
---

[~arpitagarwal] makes sense. 
{quote}How will the tests wait for chill-mode exit? Poll, sleep, and loop?
{quote}
We should have a command-line option that can report the chill-mode status; 
perhaps we already have one? [~ajayydv]

> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636041#comment-16636041
 ] 

Bharat Viswanadham commented on HDDS-520:
-

Needs a rebase, as HDDS-560 changed the method API.

Will post a patch soon.

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636034#comment-16636034
 ] 

Bharat Viswanadham commented on HDFS-13952:
---

[~jojochuang]

Thanks for checking it out.

I am able to compile by changing hadoop.version, so I am not sure the Maven 
version plays a role here. My Maven version is 3.5.0.

The maven-javadoc-plugin 3.0.1 documentation describes the 
{{additionalOptions}} parameter as follows:

https://maven.apache.org/plugins/maven-javadoc-plugin/javadoc-mojo.html

Set an additional option(s) on the command line. This value should include 
quotes as necessary for parameters that include spaces. Useful for a custom 
doclet.
 * *Type*: {{java.lang.String[]}}
 * *Since*: {{3.0.0}}
 * *Required*: {{No}}

 

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> On trunk, compilation fails with:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-10-02 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636027#comment-16636027
 ] 

Arpit Agarwal edited comment on HDDS-512 at 10/2/18 7:48 PM:
-

bq. long-term: create a new subcommand to the scmcli which could monitor the 
scm and wait until the specified number of the datanodes are joined.
bq. That is support the Node count also. It should be a trivial patch with our 
current architecture.
These are similar ideas. We can have a startup option to the SCM that says wait 
for x number of DataNodes, and the wait can be enforced using chill-mode. e.g. 
_ozone scm --wait-for-datanodes=3_.

Couple of questions:
# Does the docker-compose setup support passing arbitrary parameters to start 
SCM for different tests?
# How will the tests wait for chill-mode exit? Poll, sleep, and loop?


was (Author: arpitagarwal):
bq. long-term: create a new subcommand to the scmcli which could monitor the 
scm and wait until the specified number of the datanodes are joined.
bq. That is support the Node count also. It should be a trivial patch with our 
current architecture.
These are similar ideas. We can have a startup option to the SCM that says wait 
for x number of DataNodes, and the wait can be enforced using chill-mode. e.g. 
_ozone sum --wait-for-datanodes=3_.

Couple of questions:
# Does the docker-compose setup support passing arbitrary parameters to start 
SCM for different tests?
# How will the tests wait for chill-mode exit? Poll, sleep, and loop?

> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-10-02 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636027#comment-16636027
 ] 

Arpit Agarwal commented on HDDS-512:


bq. long-term: create a new subcommand to the scmcli which could monitor the 
scm and wait until the specified number of the datanodes are joined.
bq. That is support the Node count also. It should be a trivial patch with our 
current architecture.
These are similar ideas. We can have a startup option to the SCM that says wait 
for x number of DataNodes, and the wait can be enforced using chill-mode. e.g. 
_ozone sum --wait-for-datanodes=3_.

Couple of questions:
# Does the docker-compose setup support passing arbitrary parameters to start 
SCM for different tests?
# How will the tests wait for chill-mode exit? Poll, sleep, and loop?
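For the last question, a sketch of such a poll-sleep-loop follows, assuming a 
hypothetical {{isInChillMode()}} status query (e.g. one that could be backed by 
a future scmcli subcommand). Everything named here is an assumption for 
illustration.

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Poll-sleep-loop sketch for test setup; ScmStatus is a hypothetical hook. */
public final class ChillModeWaiter {

  /** Hypothetical status source, e.g. backed by an scmcli subcommand. */
  public interface ScmStatus {
    boolean isInChillMode();
  }

  public static void awaitChillModeExit(ScmStatus scm, long timeoutMillis)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (scm.isInChillMode()) {                  // poll
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "SCM did not exit chill mode within " + timeoutMillis + " ms");
      }
      TimeUnit.SECONDS.sleep(1);                   // sleep, then loop again
    }
  }

  private ChillModeWaiter() {
  }
}
{code}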

> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-02 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-565:
--

Assignee: Dinesh Chitlangia

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}} we set HDDS_DATANODE_DIR_KEY to a test dir location, but in 
> {{#setupPaths}} we use DFS_DATANODE_DATA_DIR_KEY as the data dir location.
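A condensed paraphrase of the suspected mismatch is below; the config constants 
are shown unqualified and the real test scaffolding is elided, so treat this as 
a sketch rather than the actual test code.

{code}
// #init(): the test seeds the HDDS key with a per-test directory.
conf.set(HDDS_DATANODE_DIR_KEY, testDir.getAbsolutePath());

// #setupPaths(): the data dir is resolved via the legacy DFS key, which was
// never set, so the default is used and creating /dfs/data fails on the
// Jenkins host.
String dataDir = conf.get(DFS_DATANODE_DATA_DIR_KEY);

// The likely fix: read and write the same key in both places.
{code}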



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-10-02 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636011#comment-16636011
 ] 

Anu Engineer commented on HDDS-512:
---

[~ajayydv] Maybe the right thing to do is to fix chill mode properly, that is, 
to support the node count as well. It should be a trivial patch with our 
current architecture. [~elek] I prefer fixing chill mode since that is both 
long-term and plays well with the system. Thoughts?

> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-02 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636006#comment-16636006
 ] 

Siyao Meng commented on HDFS-13877:
---

[~jojochuang] I'm keeping that one-line check as insurance in case a future 
refactor/patch breaks it. Plus, this is a test case, so it won't add any 
overhead to daily usage.

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
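As a usage note: once HttpFS wires this up, a client call should look roughly 
like the WebHDFS path added by HDFS-13052. The sketch below makes up the host, 
port, directory, and snapshot names.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

public class SnapshotDiffExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // HttpFS speaks the webhdfs protocol; host and port are made up.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://httpfs-host:14000"), conf);

    // getSnapshotDiffReport(Path, from, to) is the WebHdfsFileSystem call
    // introduced by HDFS-13052.
    SnapshotDiffReport report = ((WebHdfsFileSystem) fs)
        .getSnapshotDiffReport(new Path("/snapdir"), "s1", "s2");

    for (SnapshotDiffReport.DiffReportEntry entry : report.getDiffList()) {
      System.out.println(entry);
    }
  }
}
{code}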



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13953) Failure of last datanode in the pipeline results in block recovery failure and subsequent NPE during fsck

2018-10-02 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13953:

Description: 
A user reported the following scenario:
 * An HBase region server created a WAL and attempted to write to it.
 * As part of the pipeline write, the following events happened:
 ** The last data node in the pipeline failed.
 ** The region server could not identify this last data node as the root cause 
of the write failure and instead reported to the NN the first data node in the 
pipeline as the cause.
 ** The NN created a new write pipeline by replacing the good data node and 
retaining the faulty one.
 ** This process continued for three iterations until the NN encountered an 
NPE.
 * Now fsck on the /hbase directory is also failing due to an NPE in the NN.

The following stack traces were found in the region server logs:
{noformat}
WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:3238)
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:3633)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:7374)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:7339)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:777)
  at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updatePipeline(AuthorizationProviderProxyClientProtocol.java:654){noformat}
 

AND

 
{noformat}
WARN org.apache.hadoop.hbase.util.FSHDFSUtils: attempt=0 on 
file=hdfs://nameservice1/hbase/genie/WALs/ABC,60020,1525325654855-splitting/abc%2C60020%2C1525325654855.null0.1536002440010
 after 6ms
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction$ReplicaUnderConstruction.isAlive(BlockInfoUnderConstruction.java:121)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.initializeBlockRecovery(BlockInfoUnderConstruction.java:288)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:4846)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3252)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:3196)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:630)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.recoverLease(AuthorizationProviderProxyClientProtocol.java:372)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:681)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073){noformat}
 

 

  was:
A user reported the following scenario:
 * An HBase region server created a WAL and attempted to write to it.
 * As part of the pipeline write, the following events happened:
 ** The last data node in the pipeline failed.
 ** The region server could not identify this last data node as the root cause 
of the write failure and instead reported to the NN the first data node in the 
pipeline as the cause.
 ** The NN created a new write pipeline by replacing the good data node and 
retaining the faulty one.
 ** This process continued for three iterations until the NN encountered an 
NPE.
 * Now fsck on the /hbase directory is also failing due to an NPE in the NN.

The following stack traces were found in the region server logs:
{noformat}
WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:3238)
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:3633)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:7374)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:7339)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:777)
  at 

[jira] [Created] (HDFS-13953) Failure of last datanode in the pipeline results in block recovery failure and subsequent NPE during fsck

2018-10-02 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HDFS-13953:
---

 Summary: Failure of last datanode in the pipeline results in block 
recovery failure and subsequent NPE during fsck
 Key: HDFS-13953
 URL: https://issues.apache.org/jira/browse/HDFS-13953
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Hrishikesh Gadre


A user reported the following scenario:
 * An HBase region server created a WAL and attempted to write to it.
 * As part of the pipeline write, the following events happened:
 ** The last data node in the pipeline failed.
 ** The region server could not identify this last data node as the root cause 
of the write failure and instead reported to the NN the first data node in the 
pipeline as the cause.
 ** The NN created a new write pipeline by replacing the good data node and 
retaining the faulty one.
 ** This process continued for three iterations until the NN encountered an 
NPE.
 * Now fsck on the /hbase directory is also failing due to an NPE in the NN.

The following stack traces were found in the region server logs:
{noformat}
WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:3238)
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:3633)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:7374)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:7339)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:777)
  at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updatePipeline(AuthorizationProviderProxyClientProtocol.java:654){noformat}
 

AND

 
{noformat}
WARN org.apache.hadoop.hbase.util.FSHDFSUtils: attempt=0 on 
file=hdfs://nameservice1/hbase/genie/WALs/hbasedn193.pv08.siri.apple.com,60020,1525325654855-splitting/hbasedn193.pv08.siri.apple.com%2C60020%2C1525325654855.null0.1536002440010
 after 6ms
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction$ReplicaUnderConstruction.isAlive(BlockInfoUnderConstruction.java:121)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.initializeBlockRecovery(BlockInfoUnderConstruction.java:288)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:4846)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3252)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:3196)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:630)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.recoverLease(AuthorizationProviderProxyClientProtocol.java:372)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:681)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073){noformat}
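For orientation, the second trace points at {{ReplicaUnderConstruction.isAlive}}. 
A hypothetical paraphrase of that check is below; the field and accessor names 
are assumptions condensed from the stack trace, not the actual Hadoop source.

{code}
// Hypothetical paraphrase: if the repeated bad pipeline updates dropped the
// expected storage for a replica, the descriptor lookup can return null and
// this check throws the NPE seen in the trace.
boolean isAlive() {
  return expectedLocation.getDatanodeDescriptor().isAlive;
}
// A defensive variant would null-check getDatanodeDescriptor() first.
{code}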
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635982#comment-16635982
 ] 

Wei-Chiu Chuang commented on HDFS-13952:


I'm not sure why, but I also had to add the following in order to compile:
{code}
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index cd38376..4c2c267 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1656,7 +1656,9 @@
           <artifactId>maven-javadoc-plugin</artifactId>
           <version>${maven-javadoc-plugin.version}</version>
           <configuration>
-            <additionalOptions>-Xmaxwarns 1</additionalOptions>
+            <additionalOptions>
+              <additionalOption>-Xmaxwarns 1</additionalOption>
+            </additionalOptions>
           </configuration>
         </plugin>
       </plugins>
{code}

Otherwise it gives me this error:
{quote}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
project hadoop-project: Unable to parse configuration of mojo 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
value '-Xmaxwarns 1' of type java.lang.String to property of type 
java.lang.String[] -> [Help 1]
{quote}

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> On trunk, compilation fails with:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-10-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635981#comment-16635981
 ] 

Hadoop QA commented on HDDS-10:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
23s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942146/HDDS-10-HDDS-4.05.patch
 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  shellcheck  shelldocs  |
| uname | Linux a1a909e5f3bf 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / fed478a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1264/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1264/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1264/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1264/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1264/console |
| Powered by 

[jira] [Updated] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13952:
---
Priority: Blocker  (was: Major)

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> On trunk, compilation fails with:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635973#comment-16635973
 ] 

Wei-Chiu Chuang commented on HDFS-13877:


Overall looks good to me. 
{code}
  } catch (Exception e) {
    // Expect non-NullPointerException
    Assert.assertFalse(e instanceof NullPointerException);
    return;
{code}
Is this still needed after HDFS-13868 was fixed?

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13952:
--
Status: Patch Available  (was: Open)

> Update hadoop.version in the trunk, which is causing compilation failure
> 
>
> Key: HDFS-13952
> URL: https://issues.apache.org/jira/browse/HDFS-13952
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13952.00.patch
>
>
> Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.
>  
> On trunk, compilation fails with:
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed 
> with message:
> The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


