[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909581#comment-16909581
 ] 

Hadoop QA commented on HDFS-8631:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 12s{color} | {color:orange} root: The patch generated 2 new + 603 unchanged 
- 1 fixed = 605 total (was 604) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
24s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
37s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 24s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}264m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | 

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-08-16 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Description: 
*Problem Description:*
 In the multi-NameNode scenario, when a Standby NameNode (SNN) uploads an 
FsImage, it puts the image to all other NNs, whether or not the peer NN is the 
Active NameNode (ANN). Even if the peer NN immediately replies with an error 
(such as TransferResult.NOT_ACTIVE_NAMENODE_FAILURE or 
TransferResult.OLD_TRANSACTION_ID_FAILURE), the local SNN does not terminate 
the put process immediately; it uploads the entire FsImage to the peer NN and 
does not read the peer NN's reply until the put has completed.

Depending on the Jetty version, this behavior leads to different consequences. 
I tested it under 2.7.2 and trunk.

*1. In Hadoop 2.7.2 (with Jetty 6.1.26)*
 After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
connection stays established, and the data the SNN sends is consumed by the 
Jetty framework itself on the peer NN side. The SNN therefore keeps 
pointlessly streaming the FsImage to the peer NN, wasting time and bandwidth. 
In a relatively large HDFS cluster, the FsImage can reach about 30 GB, so this 
is a significant waste.

*2. In trunk (with Jetty 9.3.27)*
 After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
connection is closed automatically, and the SNN then directly gets an "Error 
writing request body to server" exception, as below. Note that reproducing 
this requires a relatively large FsImage (e.g. at the 10 MB level):
{code:java}
2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
/tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
9864721. Sent total: 524288 bytes. Size of last segment intended to send: 4096 
bytes.
 java.io.IOException: Error writing request body to server
 at 
sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
 at 
sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
/tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
9864721. Sent total: 851968 bytes. Size of last segment intended to send: 4096 
bytes.
 java.io.IOException: Error writing request body to server
 at 
sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
 at 
sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
 at 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
  {code}
                  

*Solution:*
 A Standby NameNode should not upload an FsImage to an inappropriate NameNode. 
When it plans to put an FsImage to a peer NN, it needs to check whether the 
upload is really needed at this time.

In detail, the local SNN should establish an HTTP connection with the peer NN, 
send the put request, and then immediately read the response (this is the key 
point). If the peer NN does not reply with an HTTP_OK, the local SNN should 
not upload the image at this time.
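
A minimal sketch of that check, assuming a hypothetical uploadImage() helper. 
It leans on the standard HTTP Expect: 100-continue handshake, under which the 
JDK's HttpURLConnection fails fast in getOutputStream() when the server 
replies with an error instead of 100 Continue, so the body is never streamed 
(the actual patch may implement the early read differently):
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ImagePutSketch {
  /**
   * Hypothetical helper: PUT an FsImage only if the peer NN accepts it.
   * With Expect: 100-continue, getOutputStream() throws when the peer
   * rejects the request, so gigabytes of image data are never sent to a
   * NameNode that would discard them.
   */
  static void uploadImage(URL peerImageServlet, byte[] image)
      throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) peerImageServlet.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    // Ask the peer to judge the request headers before we stream the body.
    conn.setRequestProperty("Expect", "100-continue");
    conn.setFixedLengthStreamingMode(image.length);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(image); // reached only if the peer did not reject the PUT
    }
    if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
      throw new IOException("Peer NN rejected image upload: HTTP "
          + conn.getResponseCode());
    }
  }
}
{code}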


[jira] [Commented] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909565#comment-16909565
 ] 

Hadoop QA commented on HDFS-14723:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:b93746a |
| JIRA Issue | HDFS-14723 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977851/HDFS-14723.branch-2.8.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd47e813e851 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 

[jira] [Commented] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909563#comment-16909563
 ] 

Hadoop QA commented on HDFS-10606:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 6s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 133 unchanged - 0 fixed = 141 total (was 133) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:06eafeedf12 |
| JIRA Issue | HDFS-10606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817603/HDFS-10606-branch-2.7.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux d5e72b68fcc0 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.7 / 6079107 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_201 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27539/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27539/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27539/testReport/ |
| Max. process+thread count | 346 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27539/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TrashPolicyDefault supports time of auto clean up can 

[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296712&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296712
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 17/Aug/19 03:27
Start Date: 17/Aug/19 03:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#issuecomment-522200348
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 649 | trunk passed |
   | +1 | compile | 392 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 965 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 546 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 797 | trunk passed |
   | -0 | patch | 596 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 696 | the patch passed |
   | +1 | compile | 472 | the patch passed |
   | +1 | javac | 472 | the patch passed |
   | -0 | checkstyle | 52 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 914 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 205 | the patch passed |
   | -1 | findbugs | 597 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 413 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2493 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 9599 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:[line 101] |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At OMGetDelegationTokenRequest.java:of 
new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMGetDelegationTokenRequest.java:[line 140] |
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/2/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909560#comment-16909560
 ] 

Hadoop QA commented on HDFS-13541:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 7 new + 1583 unchanged 
- 6 fixed = 1590 total (was 1589) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}212m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
|   | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |

[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=296708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296708
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 17/Aug/19 03:19
Start Date: 17/Aug/19 03:19
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1259: HDDS-1105 
: Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r314932606
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
 ##
 @@ -187,5 +229,119 @@ protected DBCheckpoint getOzoneManagerDBSnapshot() {
     }
     return null;
   }
+
+  /**
+   * Update Local OM DB with new OM DB snapshot.
+   * @throws IOException
+   */
+  @VisibleForTesting
+  void updateReconOmDBWithNewSnapshot() throws IOException {
+    // Obtain the current DB snapshot from OM and
+    // update the in house OM metadata managed DB instance.
+    DBCheckpoint dbSnapshot = getOzoneManagerDBSnapshot();
+    if (dbSnapshot != null && dbSnapshot.getCheckpointLocation() != null) {
+      try {
+        omMetadataManager.updateOmDB(dbSnapshot.getCheckpointLocation()
+            .toFile());
+      } catch (IOException e) {
+        LOG.error("Unable to refresh Recon OM DB Snapshot. ", e);
+      }
+    } else {
+      LOG.error("Null snapshot location got from OM.");
+    }
+  }
+
+  /**
+   * Get Delta updates from OM through RPC call and apply to local OM DB as
+   * well as accumulate in a buffer.
+   * @param fromSequenceNumber from sequence number to request from.
+   * @param omdbUpdatesHandler OM DB updates handler to buffer updates.
+   * @throws IOException when OM RPC request fails.
+   * @throws RocksDBException when writing to RocksDB fails.
+   */
+  @VisibleForTesting
+  void getAndApplyDeltaUpdatesFromOM(
+      long fromSequenceNumber, OMDBUpdatesHandler omdbUpdatesHandler)
+      throws IOException, RocksDBException {
+    DBUpdatesRequest dbUpdatesRequest = DBUpdatesRequest.newBuilder()
+        .setSequenceNumber(fromSequenceNumber).build();
+    DBUpdatesWrapper dbUpdates = ozoneManagerClient.getDBUpdates(
+        dbUpdatesRequest);
+    if (null != dbUpdates) {
+      RDBStore rocksDBStore = (RDBStore) omMetadataManager.getStore();
+      RocksDB rocksDB = rocksDBStore.getDb();
+      LOG.debug("Number of updates received from OM : " +
+          dbUpdates.getData().size());
+      for (byte[] data : dbUpdates.getData()) {
+        WriteBatch writeBatch = new WriteBatch(data);
+        writeBatch.iterate(omdbUpdatesHandler);
+        RDBBatchOperation rdbBatchOperation =
+            new RDBBatchOperation(writeBatch);
+        rdbBatchOperation.commit(rocksDB, new WriteOptions());
+      }
+    }
+  }
+
+  /**
+   * Based on current state of Recon's OM DB, we either get delta updates or
+   * full snapshot from Ozone Manager.
+   */
+  @VisibleForTesting
+  void syncDataFromOM() {
+    long currentSequenceNumber = getCurrentOMDBSequenceNumber();
+    boolean fullSnapshot = false;
+
+    if (currentSequenceNumber <= 0) {
+      fullSnapshot = true;
+    } else {
 
 Review comment:
   Good question @hanishakoneru. This is handled on the OM side. Please check 
out org.apache.hadoop.utils.db.RDBStore#getUpdatesSince.
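
   For reference, a minimal sketch of what a getUpdatesSince-style method can 
look like on top of RocksDB's transaction-log API (the org.rocksdb classes are 
real; the wrapper class and method are illustrative, not the actual Ozone 
code):
{code:java}
import java.util.ArrayList;
import java.util.List;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionLogIterator;
import org.rocksdb.TransactionLogIterator.BatchResult;

public final class WalReader {
  /** Illustrative: collect serialized write batches newer than fromSequence. */
  public static List<byte[]> getUpdatesSince(RocksDB db, long fromSequence)
      throws RocksDBException {
    List<byte[]> batches = new ArrayList<>();
    // Reads from the WAL, so this only succeeds while the relevant log files
    // have not been flushed away; callers must be prepared for that failure.
    try (TransactionLogIterator itr = db.getUpdatesSince(fromSequence)) {
      while (itr.isValid()) {
        BatchResult result = itr.getBatch();
        batches.add(result.writeBatch().data());
        itr.next();
      }
    }
    return batches;
  }
}
{code}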
 



Issue Time Tracking
---

Worklog Id: (was: 296708)
Time Spent: 3h 10m  (was: 3h)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the 

[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=296707&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296707
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 17/Aug/19 03:18
Start Date: 17/Aug/19 03:18
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1259: HDDS-1105 
: Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r314932586
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
 ##
 @@ -90,7 +90,7 @@ public static File getReconDbDir(Configuration conf, String dirConfigKey) {
    * @param destPath destination path to untar to.
    * @throws IOException ioException
    */
-  public static void untarCheckpointFile(File tarFile, Path destPath)
+  public void untarCheckpointFile(File tarFile, Path destPath)
 
 Review comment:
   Yes, I agree. But we decided to make ReconUtils non-static so that we can 
inject a mock instance of it and unit test cleanly, removing the need for 
PowerMock.
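
   As a sketch of that benefit: with instance methods, a plain Mockito mock is 
enough (the test class below is hypothetical; only the Mockito/JUnit calls are 
real API):
{code:java}
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.apache.hadoop.ozone.recon.ReconUtils;
import org.junit.Test;

public class TestReconUtilsMocking {
  @Test
  public void untarCanBeStubbedWithoutPowerMock() throws Exception {
    // A plain mock suffices now that untarCheckpointFile is an instance
    // method; void methods are no-ops on a Mockito mock by default, so no
    // PowerMock static interception is required.
    ReconUtils reconUtils = mock(ReconUtils.class);

    // Code under test would receive this mock (e.g. via its constructor)
    // and call untarCheckpointFile on it; here we call it directly.
    reconUtils.untarCheckpointFile(null, null);

    verify(reconUtils).untarCheckpointFile(any(), any());
  }
}
{code}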
 



Issue Time Tracking
---

Worklog Id: (was: 296707)
Time Spent: 3h  (was: 2h 50m)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.
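
As an illustration of the WAL sizing point above, RocksDB exposes WAL 
retention knobs on its DBOptions (a sketch; the values are arbitrary and the 
OM's actual configuration is not shown in this thread):
{code:java}
import org.rocksdb.DBOptions;

public final class WalRetentionSketch {
  /** Illustrative: retain enough WAL for getUpdatesSince to be useful. */
  static DBOptions withWalRetention() {
    return new DBOptions()
        .setCreateIfMissing(true)
        // Keep WAL files around by age and total size so that
        // getUpdatesSince(seq) can still find older transactions.
        .setWalTtlSeconds(6 * 60 * 60)   // arbitrary: six hours
        .setWalSizeLimitMB(1024);        // arbitrary: 1 GB of WAL
  }
}
{code}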






[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296703
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 17/Aug/19 03:07
Start Date: 17/Aug/19 03:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#issuecomment-522199198
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 583 | trunk passed |
   | +1 | compile | 342 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 457 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 674 | trunk passed |
   | -0 | patch | 490 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 615 | the patch passed |
   | +1 | compile | 382 | the patch passed |
   | +1 | javac | 382 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | -1 | findbugs | 471 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 293 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2149 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7954 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:[line 101] |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At OMGetDelegationTokenRequest.java:of 
new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMGetDelegationTokenRequest.java:[line 140] |
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1308 

[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296702&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296702
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 17/Aug/19 03:05
Start Date: 17/Aug/19 03:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#issuecomment-522199074
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 105 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 606 | trunk passed |
   | +1 | compile | 367 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 911 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 469 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 689 | trunk passed |
   | -0 | patch | 508 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 592 | the patch passed |
   | +1 | compile | 428 | the patch passed |
   | +1 | javac | 428 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 815 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | the patch passed |
   | -1 | findbugs | 466 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 349 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2203 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8580 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:of new 
org.apache.hadoop.ozone.om.response.security.OMCancelDelegationTokenResponse(OzoneTokenIdentifier,
 OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMCancelDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMCancelDelegationTokenRequest.java:[line 101] |
   |  |  Null passed for non-null parameter of new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At OMGetDelegationTokenRequest.java:of 
new 
org.apache.hadoop.ozone.om.response.security.OMGetDelegationTokenResponse(OzoneTokenIdentifier,
 long, OzoneManagerProtocolProtos$OMResponse) in 
org.apache.hadoop.ozone.om.request.security.OMGetDelegationTokenRequest.validateAndUpdateCache(OzoneManager,
 long, OzoneManagerDoubleBufferHelper)  At 
OMGetDelegationTokenRequest.java:[line 140] |
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/1/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=296698&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296698
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 17/Aug/19 02:54
Start Date: 17/Aug/19 02:54
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1259: 
HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from 
Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r314915078
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
 ##
 @@ -90,7 +90,7 @@ public static File getReconDbDir(Configuration conf, String dirConfigKey) {
    * @param destPath destination path to untar to.
    * @throws IOException ioException
    */
-  public static void untarCheckpointFile(File tarFile, Path destPath)
+  public void untarCheckpointFile(File tarFile, Path destPath)
 
 Review comment:
   Any particular reason for removing the static functionality? Util classes 
are generally kept stateless.
 



Issue Time Tracking
---

Worklog Id: (was: 296698)
Time Spent: 2h 40m  (was: 2.5h)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.






[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=296699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296699
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 17/Aug/19 02:54
Start Date: 17/Aug/19 02:54
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1259: 
HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from 
Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#discussion_r314914818
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
 ##
 @@ -187,5 +229,119 @@ protected DBCheckpoint getOzoneManagerDBSnapshot() {
     }
     return null;
   }
+
+  /**
+   * Update Local OM DB with new OM DB snapshot.
+   * @throws IOException
+   */
+  @VisibleForTesting
+  void updateReconOmDBWithNewSnapshot() throws IOException {
+    // Obtain the current DB snapshot from OM and
+    // update the in house OM metadata managed DB instance.
+    DBCheckpoint dbSnapshot = getOzoneManagerDBSnapshot();
+    if (dbSnapshot != null && dbSnapshot.getCheckpointLocation() != null) {
+      try {
+        omMetadataManager.updateOmDB(dbSnapshot.getCheckpointLocation()
+            .toFile());
+      } catch (IOException e) {
+        LOG.error("Unable to refresh Recon OM DB Snapshot. ", e);
+      }
+    } else {
+      LOG.error("Null snapshot location got from OM.");
+    }
+  }
+
+  /**
+   * Get Delta updates from OM through RPC call and apply to local OM DB as
+   * well as accumulate in a buffer.
+   * @param fromSequenceNumber from sequence number to request from.
+   * @param omdbUpdatesHandler OM DB updates handler to buffer updates.
+   * @throws IOException when OM RPC request fails.
+   * @throws RocksDBException when writing to RocksDB fails.
+   */
+  @VisibleForTesting
+  void getAndApplyDeltaUpdatesFromOM(
+      long fromSequenceNumber, OMDBUpdatesHandler omdbUpdatesHandler)
+      throws IOException, RocksDBException {
+    DBUpdatesRequest dbUpdatesRequest = DBUpdatesRequest.newBuilder()
+        .setSequenceNumber(fromSequenceNumber).build();
+    DBUpdatesWrapper dbUpdates = ozoneManagerClient.getDBUpdates(
+        dbUpdatesRequest);
+    if (null != dbUpdates) {
+      RDBStore rocksDBStore = (RDBStore) omMetadataManager.getStore();
+      RocksDB rocksDB = rocksDBStore.getDb();
+      LOG.debug("Number of updates received from OM : " +
+          dbUpdates.getData().size());
+      for (byte[] data : dbUpdates.getData()) {
+        WriteBatch writeBatch = new WriteBatch(data);
+        writeBatch.iterate(omdbUpdatesHandler);
+        RDBBatchOperation rdbBatchOperation =
+            new RDBBatchOperation(writeBatch);
+        rdbBatchOperation.commit(rocksDB, new WriteOptions());
+      }
+    }
+  }
+
+  /**
+   * Based on current state of Recon's OM DB, we either get delta updates or
+   * full snapshot from Ozone Manager.
+   */
+  @VisibleForTesting
+  void syncDataFromOM() {
+    long currentSequenceNumber = getCurrentOMDBSequenceNumber();
+    boolean fullSnapshot = false;
+
+    if (currentSequenceNumber <= 0) {
+      fullSnapshot = true;
+    } else {
 
 Review comment:
   Should we check the difference between currentSequenceNum and the latest 
sequence number of the OM DB before deciding whether to get a full snapshot or 
delta updates?
   Let's say currentSequenceNum = 100 and the latest sequence number is 1M. 
Would it not be better to just get the full snapshot and replace the old one? 
(Not sure if this is applicable to Recon.)
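
   A sketch of that heuristic (the threshold name and value are invented for 
illustration; nothing like this is confirmed to exist in Recon):
{code:java}
public final class SnapshotDecision {
  // Invented threshold: how far behind the local DB may lag before a full
  // snapshot becomes cheaper than replaying delta updates.
  static final long FULL_SNAPSHOT_LAG_THRESHOLD = 100_000L;

  /** Decide between a full snapshot and delta updates based on lag. */
  static boolean shouldTakeFullSnapshot(long localSeq, long omLatestSeq) {
    if (localSeq <= 0) {
      return true; // empty local DB: bootstrap from a full snapshot
    }
    return (omLatestSeq - localSeq) > FULL_SNAPSHOT_LAG_THRESHOLD;
  }
}
{code}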
 



Issue Time Tracking
---

Worklog Id: (was: 296699)
Time Spent: 2h 50m  (was: 2h 40m)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only 

[jira] [Commented] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909551#comment-16909551
 ] 

Hadoop QA commented on HDFS-13977:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 171 unchanged - 0 fixed = 172 total (was 171) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13977 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977834/HDFS-13977.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7cd090bb7fa7 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a46ba03 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27535/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296693&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296693
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 17/Aug/19 02:28
Start Date: 17/Aug/19 02:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#issuecomment-522196993
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 199 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for branch |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 38 | hadoop-hdds in trunk failed. |
   | -1 | compile | 33 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 27 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 24 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | shadedclient | 136 | branch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 36 | hadoop-hdds in trunk failed. |
   | 0 | spotbugs | 1110 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 90 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 658 | hadoop-ozone in trunk failed. |
   | -0 | patch | 1210 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for patch |
   | -1 | mvninstall | 39 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 49 | hadoop-ozone in the patch failed. |
   | -1 | compile | 161 | hadoop-hdds in the patch failed. |
   | -1 | compile | 46 | hadoop-ozone in the patch failed. |
   | -1 | javac | 161 | hadoop-hdds in the patch failed. |
   | -1 | javac | 46 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 44 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 50 | patch has no errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 129 | hadoop-hdds generated 16 new + 0 unchanged - 0 fixed 
= 16 total (was 0) |
   | +1 | findbugs | 1047 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 495 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1554 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 74 | The patch does not generate ASF License warnings. |
   | | | 5996 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOMRatisSnapshots |
   |   | hadoop.ozone.om.TestOzoneManagerRestart |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.om.TestOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1308 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 55283ee71c29 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8d754c2 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1308/3/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1308/out/maven-branch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 

[jira] [Work logged] (HDDS-1903) Use dynamic ports for SCM in TestSCMClientProtocolServer and TestSCMSecurityProtocolServer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1903?focusedWorklogId=296690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296690
 ]

ASF GitHub Bot logged work on HDDS-1903:


Author: ASF GitHub Bot
Created on: 17/Aug/19 02:00
Start Date: 17/Aug/19 02:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1303: HDDS-1903 : Use 
dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-522195135
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 633 | trunk passed |
   | +1 | compile | 398 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 470 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 680 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 633 | the patch passed |
   | +1 | compile | 425 | the patch passed |
   | +1 | javac | 425 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | -1 | shadedclient | 754 | patch has errors when building and testing our 
client artifacts. |
   | +1 | javadoc | 190 | the patch passed |
   | +1 | findbugs | 692 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 297 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1817 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 7910 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1303 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 44abeb34e540 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a46ba03 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/4/testReport/ |
   | Max. process+thread count | 5264 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296690)
Time Spent: 1h 50m  (was: 1h 40m)

> Use dynamic ports for SCM in TestSCMClientProtocolServer and 
> TestSCMSecurityProtocolServer
> --
>
> Key: HDDS-1903
> URL: https://issues.apache.org/jira/browse/HDDS-1903
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We should use dynamic ports for SCM in the following test cases:
> * TestSCMClientProtocolServer
> * TestSCMSecurityProtocolServer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

[jira] [Assigned] (HDFS-10606) TrashPolicyDefault should support a configurable auto clean-up time

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-10606:
--

Assignee: He Xiaoqiao

> TrashPolicyDefault should support a configurable auto clean-up time
> 
>
> Key: HDFS-10606
> URL: https://issues.apache.org/jira/browse/HDFS-10606
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-10606-branch-2.7.001.patch
>
>
> TrashPolicyDefault currently cleans up Trash based on 
> [UTC|http://www.worldtimeserver.com/current_time_in_UTC.aspx], and the 
> clean-up time is fixed at 00:00 UTC. When a large amount of trash data has to 
> be auto-cleaned, the NameNode is blocked for a long time because of the 
> global lock; in the most serious situations this can make some cron job 
> submissions fail. Adding a configuration for the clean-up time would avoid 
> the impact on cron jobs that run at that default time.
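> A minimal sketch of the idea with a hypothetical configuration key (neither 
> the key name nor the helper exists in HDFS today):
> {code:java}
> import java.util.concurrent.TimeUnit;
> import org.apache.hadoop.conf.Configuration;
>
> final class TrashSchedule { // illustrative only
>   // Hypothetical key: hour of day (0-23, UTC) at which the emptier runs.
>   static final String TRASH_CLEANUP_HOUR_KEY = "fs.trash.checkpoint.hour";
>
>   static long millisUntilNextCleanup(Configuration conf, long nowUtc) {
>     int hour = conf.getInt(TRASH_CLEANUP_HOUR_KEY, 0); // 0 keeps 00:00 UTC
>     long day = TimeUnit.DAYS.toMillis(1);
>     long todayStart = nowUtc - (nowUtc % day);
>     long next = todayStart + TimeUnit.HOURS.toMillis(hour);
>     return next > nowUtc ? next - nowUtc : next + day - nowUtc;
>   }
> }
> {code}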



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14723:
---
Attachment: HDFS-14723.branch-2.8.001.patch

> Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2
> --
>
> Key: HDFS-14723
> URL: https://issues.apache.org/jira/browse/HDFS-14723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14723.branch-2.001.patch, 
> HDFS-14723.branch-2.8.001.patch
>
>
> The revert of HDFS-12914 from branch-2 broke the build.
> See HDFS-12914 and HDFS-13898 for the details. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14725:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14723:
---
Status: Patch Available  (was: Reopened)

> Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2
> --
>
> Key: HDFS-14723
> URL: https://issues.apache.org/jira/browse/HDFS-14723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14723.branch-2.001.patch, 
> HDFS-14723.branch-2.8.001.patch
>
>
> The revert of HDFS-12914 from branch-2 broke the build.
> See HDFS-12914 and HDFS-13898 for the details. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14744) RBF: Non-secured routers should not log at error level when the UGI is the default.

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909536#comment-16909536
 ] 

Hadoop QA commented on HDFS-14744:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  7m 
37s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  4s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977840/HDFS-14744.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e2022f240523 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d754c2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27537/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27537/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14723:
---
Fix Version/s: 2.9.3

> Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2
> --
>
> Key: HDFS-14723
> URL: https://issues.apache.org/jira/browse/HDFS-14723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14723.branch-2.001.patch
>
>
> The revert of HDFS-12914 from branch-2 broke the build.
> See HDFS-12914 and HDFS-13898 for the details. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14725:
---
Fix Version/s: 2.9.3
   2.10.0

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14723) Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-14723:


Reopening: in order to get HDFS-14725 into branch-2.9 and even branch-2.8, I 
need this fix in branch-2.9/2.8.

> Add helper method FSNamesystem#setBlockManagerForTesting() in branch-2
> --
>
> Key: HDFS-14723
> URL: https://issues.apache.org/jira/browse/HDFS-14723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.10.0
>
> Attachments: HDFS-14723.branch-2.001.patch
>
>
> The revert of HDFS-12914 from branch-2 broke the build.
> See HDFS-12914 and HDFS-13898 for the details. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296686
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 17/Aug/19 01:09
Start Date: 17/Aug/19 01:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522191368
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 631 | trunk passed |
   | +1 | compile | 428 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 911 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 638 | the patch passed |
   | +1 | compile | 415 | the patch passed |
   | +1 | javac | 415 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 373 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1947 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 6796 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1304 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux ee7c84075a91 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c8675ec |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/5/testReport/ |
   | Max. process+thread count | 4459 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/5/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296686)
Time Spent: 2.5h  (was: 2h 20m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files 

[jira] [Commented] (HDFS-10782) Decrease frequent memory exchange of Centralized Cache Management when running the balancer

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909524#comment-16909524
 ] 

Hadoop QA commented on HDFS-10782:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 138 unchanged - 0 fixed = 141 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame 
|
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-10782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825264/HDFS-10782-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909523#comment-16909523
 ] 

Wei-Chiu Chuang commented on HDFS-14725:


+1 Other than TestDirectoryScanner, which appears to be very flaky prior to the 
patch, tests passed locally.

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14523) Remove excess read lock for NetworkTopology

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909521#comment-16909521
 ] 

Hudson commented on HDFS-14523:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17138 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17138/])
HDFS-14523. Remove excess read lock for NetworkTopology. Contributed by 
(weichiu: rev 971a4c8e8328a4bdea65de4a0e84c82b5b2de24b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Remove excess read lock for NetworkTopology
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two frequently called methods in 
> BlockPlacementPolicy. Both need to take the NetworkTopology read lock, and 
> taking a lock in frequently called methods may impact NameNode performance. 
> The two methods return the number of racks and the number of leaves only for 
> the chooseTarget calculation, and the lock inside them cannot guarantee that 
> these values will not change in subsequent calculations.
> I think it's safe to remove the read lock from these two methods.
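> A minimal sketch of the kind of change described (illustrative; the actual 
> edit is in the attached patch to NetworkTopology):
> {code:java}
> /** @return the total number of racks */
> public int getNumOfRacks() {
>   // Read lock removed: callers such as chooseTarget() can only treat this
>   // as a snapshot value anyway, since it may change as soon as any lock
>   // taken here is released.
>   return numOfRacks;
> }
> {code}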



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14456) HAState#prepareToEnterState needn't a lock

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909518#comment-16909518
 ] 

Hudson commented on HDFS-14456:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17138 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17138/])
HDFS-14456:HAState#prepareToEnterState needn't a lock (#770) Contributed 
(weichiu: rev a38b9e137e67571d2df83a7a9505b66cffefa7c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


> HAState#prepareToEnterState needn't a lock
> --
>
> Key: HDFS-14456
> URL: https://issues.apache.org/jira/browse/HDFS-14456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.3.0
>
>
> prepareToEnterState in HAState is supposed to be called without the context 
> being locked, but in NameNode#NameNode, prepareToEnterState comes after 
> haContext.writeLock():
>  
> {code:java}
> try {
>   haContext.writeLock();
>   state.prepareToEnterState(haContext);
>   state.enterState(haContext);
> } finally {
>   haContext.writeUnlock();
> }
> {code}
>  
> Is it OK?
>  
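> A minimal sketch of the change the title suggests, hoisting the call out of 
> the write lock (illustrative; the actual fix is the edit to NameNode.java 
> referenced above):
> {code:java}
> state.prepareToEnterState(haContext); // per the title, needs no lock
> try {
>   haContext.writeLock();
>   state.enterState(haContext);
> } finally {
>   haContext.writeUnlock();
> }
> {code}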



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909519#comment-16909519
 ] 

Hudson commented on HDDS-1911:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17138 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17138/])
HDDS-1911. Support Prefix ACL operations for OM HA. (#1275) (github: rev 
c8675ec42ea6002ce517855b3514643c5afb6086)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixRemoveAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixSetAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/acl/prefix/OMPrefixAclResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/acl/prefix/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAddAclRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java


> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HDDS-1608 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.
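> For reference, a sketch of the ACL API shape HDDS-1608 introduced (the 
> interface name is hypothetical and the exact signatures are hedged from 
> memory; OzoneObj/OzoneAcl imports elided):
> {code:java}
> import java.io.IOException;
> import java.util.List;
>
> public interface OzoneAclApi { // illustrative name only
>   boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
>   boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
>   boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;
>   List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;
> }
> {code}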



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909520#comment-16909520
 ] 

Hudson commented on HDDS-1913:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17138 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17138/])
HDDS-1913. Fix OzoneBucket and RpcClient APIS for acl. (#1257) (github: rev 
a46ba03d150f9376a13ef91c818a5779ce9cfb4e)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/StorageHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestMultipleContainerReadWrite.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/handlers/BucketArgs.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/BucketHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmBlockVersioning.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/BucketProcessTemplate.java


> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Fix addAcl and removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; we should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?
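> A minimal sketch of the intended call pattern, assuming an existing 
> OzoneClient named client (the OzoneAcl construction and exact signatures are 
> hedged, for illustration only):
> {code:java}
> OzoneBucket bucket = client.getObjectStore()
>     .getVolume("vol1").getBucket("bucket1");
> OzoneAcl acl = new OzoneAcl(ACLIdentityType.USER, "testuser",
>     ACLType.READ, OzoneAcl.AclScope.ACCESS);
> bucket.addAcl(acl);    // replaces RpcClient#addBucketAcls
> bucket.removeAcl(acl); // replaces RpcClient#removeBucketAcls
> {code}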



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reopened HDDS-1347:
--

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1347) Implement GetS3Secret to use double buffer and cache.

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1347:
-
Summary: Implement GetS3Secret to use double buffer and cache.  (was: In OM 
HA getS3Secret call Should happen only leader OM)

> Implement GetS3Secret to use double buffer and cache.
> -
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1347:
-
Comment: was deleted

(was: Fixed as part of HDDS-1969.)

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1359) In OM HA getDelegation call Should happen only leader OM

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1359.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

Fixed as part of HDDS-1969

>  In OM HA getDelegation call Should happen only leader OM
> -
>
> Key: HDDS-1359
> URL: https://issues.apache.org/jira/browse/HDDS-1359
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>
> In OM HA, getDelegationToken should happen only on the leader OM.
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1971) Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not specified

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1971?focusedWorklogId=296681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296681
 ]

ASF GitHub Bot logged work on HDDS-1971:


Author: ASF GitHub Bot
Created on: 17/Aug/19 00:24
Start Date: 17/Aug/19 00:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1306: HDDS-1971. 
Update document for HDDS-1891: Ozone fs shell command should work with default 
port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#issuecomment-522187445
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296681)
Time Spent: 1h 20m  (was: 1h 10m)

> Update document for HDDS-1891: Ozone fs shell command should work with 
> default port when port number is not specified
> -
>
> Key: HDDS-1971
> URL: https://issues.apache.org/jira/browse/HDDS-1971
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This should've been part of HDDS-1891.
> Now that the fs shell command works without specifying the OM port number, we 
> should update the doc on 
> https://hadoop.apache.org/ozone/docs/0.4.0-alpha/ozonefs.html:
> {code}
> ... Moreover, the filesystem URI can take a fully qualified form with the OM 
> host and port as a part of the path following the volume name.
> {code}
> CC [~bharatviswa]
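> A minimal illustration of the two URI forms the doc should cover (the host 
> and port values here are examples only):
> {code:java}
> // Fully qualified: OM host and port spelled out.
> Path withPort = new Path("o3fs://bucket.volume.om-host:9862/key");
> // Port omitted: the client falls back to the default OM port.
> Path withoutPort = new Path("o3fs://bucket.volume.om-host/key");
> {code}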



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14523) Remove excess read lock for NetworkTopology

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14523:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed v1 patch to trunk. Thanks [~wuweiwei] and [~vagarychen]!

> Remove excess read lock for NetworkTopology
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two frequently called methods in 
> BlockPlacementPolicy. Both need to take the NetworkTopology read lock, and 
> taking a lock in frequently called methods may impact NameNode performance. 
> The two methods return the number of racks and the number of leaves only for 
> the chooseTarget calculation, and the lock inside them cannot guarantee that 
> these values will not change in subsequent calculations.
> I think it's safe to remove the read lock from these two methods.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1913:
-
Fix Version/s: 0.4.1

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Fix addAcl and removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; we should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, now that it requires a 
> different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14523) Remove excess read lock for NetworkTopology

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909503#comment-16909503
 ] 

Wei-Chiu Chuang commented on HDFS-14523:


+1 from me.

> Remove excess read lock for NetworkTopology
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two frequently called methods in 
> BlockPlacementPolicy. Both need to take the NetworkTopology read lock, and 
> taking a lock in frequently called methods may impact NameNode performance. 
> The two methods return the number of racks and the number of leaves only for 
> the chooseTarget calculation, and the lock inside them cannot guarantee that 
> these values will not change in subsequent calculations.
> I think it's safe to remove the read lock from these two methods.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13321) Inadequate information for handling catch clauses

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909502#comment-16909502
 ] 

Hadoop QA commented on HDFS-13321:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 2 new + 47 unchanged - 
0 fixed = 49 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
31s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Updated] (HDDS-505) OzoneManager HA

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-505:

Status: Reopened  (was: Reopened)

> OzoneManager HA
> ---
>
> Key: HDDS-505
> URL: https://issues.apache.org/jira/browse/HDDS-505
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: Handling Write Requests with OM HA.pdf, OzoneManager 
> HA.pdf
>
>
> OzoneManager can be a single point of failure in an Ozone cluster. We propose 
> an HA implementation for OM using Ratis (Raft protocol).
> Attached the design document for the proposed implementation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1347.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

Fixed as part of HDDS-1969.

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should be handled only by the leader OM.
>  
>  
> The reasoning is the same as for initiateMultipartUpload; for more details 
> refer to HDDS-1319.
>  
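
The idea, sketched with assumed names (the real change routes the request 
through the Ratis path rather than an explicit check like this):

{code:java}
// Serve getS3Secret only on the leader so that a newly created secret is
// replicated through Ratis, mirroring the initiateMultipartUpload
// handling from HDDS-1319. All names here are illustrative.
public S3SecretValue getS3Secret(String kerberosID) throws IOException {
  if (isRatisEnabled && !omRatisServer.isLeader()) {
    throw new IOException("Not the leader OM; retry against the leader.");
  }
  return s3SecretManager.getS3Secret(kerberosID);
}
{code}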



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12859) Admin command resetBalancerBandwidth

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909497#comment-16909497
 ] 

Hadoop QA commented on HDFS-12859:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-12859 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899783/0004-HDFS-12859-Admin-command-resetBalancerBandwidth.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27536/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Admin command resetBalancerBandwidth
> 
>
> Key: HDFS-12859
> URL: https://issues.apache.org/jira/browse/HDFS-12859
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Major
> Attachments: 
> 0003-HDFS-12859-Admin-command-resetBalancerBandwidth.patch, 
> 0004-HDFS-12859-Admin-command-resetBalancerBandwidth.patch, HDFS-12859.patch
>
>
> We can already set the balancer bandwidth dynamically with the 
> setBalancerBandwidth command, but the value is not persistent and is not 
> stored in the configuration file, so different datanodes may keep different 
> default or previously applied settings in their configurations.
> We wanted to develop a scheduled balancer task that runs at midnight every 
> day: set a larger bandwidth for the run and restore the value after it 
> finishes. However, we found it difficult to restore per-datanode settings 
> because the setBalancerBandwidth command can only push the same value to 
> all datanodes; to get a unique setting on every datanode we would have to 
> restart them.
> So it would be useful to have a command that re-synchronizes the runtime 
> setting with the configuration file. 
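
On the datanode side, the reset could be as simple as re-reading the 
configured value (a sketch with assumed plumbing; only the config keys are 
real):

{code:java}
// Each datanode restores its own configured bandwidth, so per-node values
// from hdfs-site.xml win again over the last value broadcast via
// setBalancerBandwidth.
long configured = conf.getLong(
    DFSConfigKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_KEY,
    DFSConfigKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_DEFAULT);
setBalancerBandwidth(configured);  // assumed hook into the DataXceiver throttler
{code}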



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296669=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296669
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:53
Start Date: 16/Aug/19 23:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#issuecomment-522183837
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296669)
Time Spent: 40m  (was: 0.5h)

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement the OM CancelDelegationToken request to use the OM cache and 
> double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1913:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl introduced by HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; addAcl/removeAcl 
> should be used instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, since these operations 
> now require a different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=296662=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296662
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:39
Start Date: 16/Aug/19 23:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1257: 
HDDS-1913. Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296662)
Time Spent: 3h 20m  (was: 3h 10m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl introduced by HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; addAcl/removeAcl 
> should be used instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, since these operations 
> now require a different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=296660=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296660
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:39
Start Date: 16/Aug/19 23:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-522181903
 
 
   Test failures are not related to this patch.
   I will commit this to the trunk. Thank You @xiaoyuyao for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296660)
Time Spent: 3h 10m  (was: 3h)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl introduced by HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; addAcl/removeAcl 
> should be used instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, since these operations 
> now require a different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=296659=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296659
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:39
Start Date: 16/Aug/19 23:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-522181903
 
 
   Test failures are not related to this patch.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296659)
Time Spent: 3h  (was: 2h 50m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl introduced by HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; addAcl/removeAcl 
> should be used instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, since these operations 
> now require a different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909485#comment-16909485
 ] 

Hadoop QA commented on HDFS-14646:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 81 unchanged - 1 fixed = 81 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.namenode.ImageServlet.getParamsForPutImage(Storage,
 long, long, NNStorage$NameNodeFile, boolean) invokes inefficient Boolean 
constructor; use Boolean.valueOf(...) instead  At ImageServlet.java:invokes 
inefficient Boolean constructor; use Boolean.valueOf(...) instead  At 
ImageServlet.java:[line 510] |
|  |  Primitive boxed just to call toString in 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.getParamsForPutImage(Storage,
 long, long, NNStorage$NameNodeFile, boolean)  At ImageServlet.java:toString in 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.getParamsForPutImage(Storage,
 long, long, NNStorage$NameNodeFile, boolean)  At ImageServlet.java:[line 510] |
|  |  new 
org.apache.hadoop.hdfs.server.namenode.ImageServlet$PutImageParams(HttpServletRequest,
 HttpServletResponse, Configuration) invokes inefficient Boolean constructor; 
use Boolean.valueOf(...) instead  At ImageServlet.java:inefficient Boolean 
constructor; use Boolean.valueOf(...) instead  At ImageServlet.java:[line 696] |
| Failed junit tests | 
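
(For readers skimming the report: the two Boolean warnings above reduce to 
the pattern below, sketched with a placeholder variable name.)

{code:java}
// FindBugs flags `new Boolean(...)` because it always allocates, while
// Boolean.valueOf(...) returns the cached Boolean.TRUE/FALSE instances.
boolean isBootstrap = true;                               // placeholder
String param = Boolean.valueOf(isBootstrap).toString();   // preferred
// instead of:  String param = new Boolean(isBootstrap).toString();
// When boxing only to call toString(), String.valueOf(isBootstrap)
// avoids creating a Boolean at all.
{code}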

[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1913:

Status: Patch Available  (was: Open)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl introduced by HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient; addAcl/removeAcl 
> should be used instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() change as well, since these operations 
> now require a different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296644
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:15
Start Date: 16/Aug/19 23:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522178545
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 637 | trunk passed |
   | +1 | compile | 400 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 228 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 647 | the patch passed |
   | +1 | compile | 429 | the patch passed |
   | +1 | javac | 429 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 810 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 205 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 372 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2048 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 6954 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1304 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux 6a557ccbbeb0 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8943e13 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/4/testReport/ |
   | Max. process+thread count | 4499 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/4/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296644)
Time Spent: 2h 20m  (was: 2h 10m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we provide docker-compose files that start 3 S3 gateway 
> servers, with ha-proxy used to load-balance these S3 Gateway Servers.
>  
> For now, all the proxy configurations are hardcoded, 

[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296643
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:14
Start Date: 16/Aug/19 23:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522178410
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 656 | trunk passed |
   | +1 | compile | 399 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 578 | the patch passed |
   | +1 | compile | 391 | the patch passed |
   | +1 | javac | 391 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 358 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2236 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6933 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1304 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux 55bc14b20cf9 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8943e13 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/3/testReport/ |
   | Max. process+thread count | 4573 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296643)
Time Spent: 2h 10m  (was: 2h)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we 

[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2019-08-16 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909470#comment-16909470
 ] 

Siyao Meng commented on HDFS-13101:
---

[~shashikant] I found that the updated unit test `testDoubleRename` (renamed 
from `testFSImageCorruption` in patch v3) no longer reproduces the corruption. 
Even with [L742-747 and 
L754|https://github.com/apache/hadoop/blob/0a85af959ce505f0659e5c69d0ca83a5dce0a7c2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java#L742-L747]
 commented out, the new unit test won't catch the corruption, but the old 
unit test in v2 WILL. I might file another jira to improve the unit test.

> Yet another fsimage corruption related to snapshot
> --
>
> Key: HDFS-13101
> URL: https://issues.apache.org/jira/browse/HDFS-13101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Yongjun Zhang
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13101.001.patch, HDFS-13101.002.patch, 
> HDFS-13101.003.patch, HDFS-13101.004.patch, 
> HDFS-13101.corruption_repro.patch, 
> HDFS-13101.corruption_repro_simplified.patch
>
>
> Lately we saw case similar to HDFS-9406, even though HDFS-9406 fix is 
> present, so it's likely another case not covered by the fix. We are currently 
> trying to collect good fsimage + editlogs to replay to reproduce it and 
> investigate. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296639
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 23:04
Start Date: 16/Aug/19 23:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522176717
 
 
   Not sure why the test is failing in the Jenkins run but passing locally.
   I will look into it; if I cannot figure out the issue, I plan to disable 
the S3 test suite run with the proxy.
   @adoroszlai any thoughts?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296639)
Time Spent: 2h  (was: 1h 50m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In this Jira, we provide docker-compose files that start 3 S3 gateway 
> servers, with ha-proxy used to load-balance these S3 Gateway Servers.
>  
> For now, all the proxy configurations are hardcoded; scaling and automatic 
> configuration via environment variables can come as a future improvement. 
> This is just a starter example.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1976) Ozone manager init fails when certificate is missing in a kerberized cluster

2019-08-16 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1976:


 Summary: Ozone manager init fails when certificate is missing in a 
kerberized cluster
 Key: HDDS-1976
 URL: https://issues.apache.org/jira/browse/HDDS-1976
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Security
Reporter: Vivek Ratnavel Subramanian
Assignee: Anu Engineer


When Ozone Manager gets into a state where its certificate is missing, it does 
not try to recover by creating a new certificate.


{code:java}
3:30:48.620 PM INFO OzoneManager Initializing secure OzoneManager. 
3:30:49.788 PM INFO OMCertificateClient Loading certificate from 
location:/var/lib/hadoop-ozone/om/data/certs. 
3:30:49.896 PM INFO OMCertificateClient Added certificate from 
file:/var/lib/hadoop-ozone/om/data/certs/8136899895890.crt. 
3:30:49.904 PM INFO OMCertificateClient Added certificate from 
file:/var/lib/hadoop-ozone/om/data/certs/CA-1.crt. 
3:30:49.930 PM ERROR OMCertificateClient Default certificate serial id is not 
set. Can't locate the default certificate for this client. 
3:30:49.930 PM INFO OMCertificateClient Certificate client init case: 6 
3:30:49.932 PM INFO OMCertificateClient Found private and public key but 
certificate is missing. 
3:30:50.194 PM INFO OzoneManager Init response: RECOVER 
3:30:50.230 PM ERROR OzoneManager OM security initialization failed. OM 
certificate is missing.
{code}
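
A hypothetical sketch of the recovery branch being asked for (method and 
field names assumed; not the actual OzoneManager code):

{code:java}
// Init case 6: private/public keys exist but the certificate is gone.
// Instead of aborting, re-request a certificate from the SCM CA and
// continue startup.
private void recoverCertificate() throws IOException {
  if (certClient.getPrivateKey() != null
      && certClient.getCertificate() == null) {
    getSCMSignedCert(configuration);  // assumed helper: submits a CSR to SCM
  } else {
    throw new IOException(
        "OM security initialization failed. OM certificate is missing.");
  }
}
{code}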



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-16 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909456#comment-16909456
 ] 

Chao Sun commented on HDFS-8631:


Attached patch v9 to address checkstyle issues.

> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, 
> HDFS-8631-009.patch
>
>
> Users can already manage quotas through the filesystem object; the same 
> operation should be allowed through the REST API.
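
For comparison, the client-side API the new REST op mirrors (a sketch; the 
host and quota values are illustrative, and the WebHDFS parameter names are 
defined in the patch):

{code:java}
// Quota management as it already works from the Java client; HDFS-8631
// exposes the equivalent operation over WebHDFS.
Configuration conf = new Configuration();
DistributedFileSystem dfs = (DistributedFileSystem)
    FileSystem.get(URI.create("hdfs://nn:8020"), conf);
dfs.setQuota(new Path("/user/alice"),
    100000L,                     // namespace quota: max files + directories
    10L * 1024 * 1024 * 1024);   // storage space quota in bytes
{code}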



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8631) WebHDFS : Support setQuota

2019-08-16 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-8631:
---
Attachment: HDFS-8631-009.patch

> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, 
> HDFS-8631-009.patch
>
>
> Users can already manage quotas through the filesystem object; the same 
> operation should be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12510) RBF: Add security to UI

2019-08-16 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909449#comment-16909449
 ] 

CR Hota commented on HDFS-12510:


[~elgoiri] [~brahmareddy] Should we mark this done? We can revisit if any 
issues are reported in the future.

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>  Labels: RBF
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks

2019-08-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14450:
-
Component/s: ec

> Erasure Coding: decommissioning datanodes cause replicate a large number of 
> duplicate EC internal blocks
> 
>
> Key: HDFS-14450
> URL: https://issues.apache.org/jira/browse/HDFS-14450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ec
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> //  [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in 
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning many datanodes causes EC block 
> groups to replicate a large number of duplicate internal blocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-08-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14350:
-
Component/s: ec

> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>
> In ErasureCodingWorker, stripedReconstructionPool is created by 
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>   numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>   numThreads, 60, new LinkedBlockingQueue<>(),
>   "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }{code}
> so stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an 
> unbounded LinkedBlockingQueue. The pool therefore never has more than its 2 
> core threads active, and dfs.datanode.ec.reconstruction.threads takes no 
> effect.
>  
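
A minimal sketch of the fix implied by this analysis (not necessarily the 
committed patch): since an unbounded queue prevents the pool from ever 
growing past its core size, the configured value must become the core size.

{code:java}
private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
  LOG.debug("Using striped block reconstruction; pool threads={}", numThreads);
  // ThreadPoolExecutor only adds threads beyond corePoolSize when the
  // queue is full, which never happens with an unbounded
  // LinkedBlockingQueue, so the configured count must be the core size.
  stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(
      numThreads, numThreads, 60,
      new LinkedBlockingQueue<>(),
      "StripedBlockReconstruction-", false);
  stripedReconstructionPool.allowCoreThreadTimeOut(true);
}
{code}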



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296630=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296630
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:19
Start Date: 16/Aug/19 22:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1308: 
HDDS-1974. Implement OM CancelDelegationToken request to use Cache and 
DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#discussion_r314908066
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -287,18 +287,40 @@ public OzoneTokenIdentifier 
cancelToken(Token token,
   throw new AccessControlException(canceller
   + " is not authorized to cancel the token " + formatTokenId(id));
 }
-try {
-  store.removeToken(id);
-} catch (IOException e) {
-  LOG.error("Unable to remove token " + id.getSequenceNumber(), e);
-}
-TokenInfo info = currentTokens.remove(id);
-if (info == null) {
-  throw new InvalidToken("Token not found " + formatTokenId(id));
+
+// For HA ratis will take care of removal.
+// This check will be removed, when HA/Non-HA code is merged.
+if (!isRatisEnabled) {
+  try {
+store.removeToken(id);
+  } catch (IOException e) {
+LOG.error("Unable to remove token " + id.getSequenceNumber(), e);
+  }
+  TokenInfo info = currentTokens.remove(id);
+  if (info == null) {
+throw new InvalidToken("Token not found " + formatTokenId(id));
+  }
+} else {
+  // Check whether token is there in-memory map of tokens or not on the
+  // OM leader.
+  TokenInfo info = currentTokens.get(id);
 
 Review comment:
   Yes, a complete map.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296630)
Time Spent: 0.5h  (was: 20m)

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Implement the OM CancelDelegationToken request to use the OM cache and 
> double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1975:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1975:


 Summary: Implement default acls for bucket/volume/key for OM HA 
code
 Key: HDDS-1975
 URL: https://issues.apache.org/jira/browse/HDDS-1975
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?focusedWorklogId=296628=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296628
 ]

ASF GitHub Bot logged work on HDDS-1911:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:11
Start Date: 16/Aug/19 22:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1275: 
HDDS-1911. Support Prefix ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1275
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296628)
Time Spent: 1h  (was: 50m)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone rpc client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13101) Yet another fsimage corruption related to snapshot

2019-08-16 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13101:
--
  Resolution: Fixed
Target Version/s: 3.0.4, 3.2.1, 3.1.3
  Status: Resolved  (was: Patch Available)

Thanks [~shashikant] [~szetszwo] [~jojochuang]. Marking this as resolved.

We would also like to backport this to all earlier branches.

> Yet another fsimage corruption related to snapshot
> --
>
> Key: HDFS-13101
> URL: https://issues.apache.org/jira/browse/HDFS-13101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Yongjun Zhang
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13101.001.patch, HDFS-13101.002.patch, 
> HDFS-13101.003.patch, HDFS-13101.004.patch, 
> HDFS-13101.corruption_repro.patch, 
> HDFS-13101.corruption_repro_simplified.patch
>
>
> Lately we saw case similar to HDFS-9406, even though HDFS-9406 fix is 
> present, so it's likely another case not covered by the fix. We are currently 
> trying to collect good fsimage + editlogs to replay to reproduce it and 
> investigate. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?focusedWorklogId=296627=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296627
 ]

ASF GitHub Bot logged work on HDDS-1911:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:11
Start Date: 16/Aug/19 22:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1275: HDDS-1911. 
Support Prefix ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1275#issuecomment-522166896
 
 
   Thank You @hanishakoneru for the review.
   I will commit this to the trunk.
   
   Test failures are not related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296627)
Time Spent: 50m  (was: 40m)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone rpc client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1911:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone rpc client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296626=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296626
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:09
Start Date: 16/Aug/19 22:09
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1308: HDDS-1974. 
Implement OM CancelDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308#discussion_r314906065
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -287,18 +287,40 @@ public OzoneTokenIdentifier 
cancelToken(Token<OzoneTokenIdentifier> token,
   throw new AccessControlException(canceller
   + " is not authorized to cancel the token " + formatTokenId(id));
 }
-try {
-  store.removeToken(id);
-} catch (IOException e) {
-  LOG.error("Unable to remove token " + id.getSequenceNumber(), e);
-}
-TokenInfo info = currentTokens.remove(id);
-if (info == null) {
-  throw new InvalidToken("Token not found " + formatTokenId(id));
+
+// For HA ratis will take care of removal.
+// This check will be removed, when HA/Non-HA code is merged.
+if (!isRatisEnabled) {
+  try {
+store.removeToken(id);
+  } catch (IOException e) {
+LOG.error("Unable to remove token " + id.getSequenceNumber(), e);
+  }
+  TokenInfo info = currentTokens.remove(id);
+  if (info == null) {
+throw new InvalidToken("Token not found " + formatTokenId(id));
+  }
+} else {
+  // Check whether token is there in-memory map of tokens or not on the
+  // OM leader.
+  TokenInfo info = currentTokens.get(id);
 
 Review comment:
   Is the in-memory map a complete map?
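
   For readers skimming the thread, the control flow this hunk introduces reduces to roughly the following (a simplified sketch with names borrowed from the diff above; not the actual OzoneDelegationTokenSecretManager source):

{code:java}
// Simplified sketch of the patched cancelToken flow (hypothetical shape).
private OzoneTokenIdentifier doCancel(OzoneTokenIdentifier id)
    throws IOException {
  if (!isRatisEnabled) {
    // Non-HA path: remove the token from the persistent store and the map.
    try {
      store.removeToken(id);
    } catch (IOException e) {
      LOG.error("Unable to remove token " + id.getSequenceNumber(), e);
    }
    if (currentTokens.remove(id) == null) {
      throw new InvalidToken("Token not found " + formatTokenId(id));
    }
  } else {
    // HA path: the leader only validates presence in the in-memory map;
    // Ratis replication performs the actual removal on every OM replica.
    if (currentTokens.get(id) == null) {
      throw new InvalidToken("Token not found " + formatTokenId(id));
    }
  }
  return id;
}
{code}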
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296626)
Time Spent: 20m  (was: 10m)

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement OM CancelDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?focusedWorklogId=296625=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296625
 ]

ASF GitHub Bot logged work on HDDS-1974:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:08
Start Date: 16/Aug/19 22:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1308: 
HDDS-1974. Implement OM CancelDelegationToken request to use Cache and 
DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1308
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296625)
Time Spent: 10m
Remaining Estimate: 0h

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement OM CancelDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1974:
-
Labels: pull-request-available  (was: )

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement OM CancelDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1971) Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not specified

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1971?focusedWorklogId=296622=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296622
 ]

ASF GitHub Bot logged work on HDDS-1971:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:07
Start Date: 16/Aug/19 22:07
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1306: HDDS-1971. 
Update document for HDDS-1891: Ozone fs shell command should work with default 
port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#discussion_r314905504
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.md
 ##
 @@ -77,13 +77,39 @@ Or put command etc. In other words, all programs like 
Hive, Spark, and Distcp wi
 Please note that any keys created/deleted in the bucket using methods apart 
from OzoneFileSystem will show up as directories and files in the Ozone File 
System.
 
 Note: Bucket and volume names are not allowed to have a period in them.
-Moreover, the filesystem URI can take a fully qualified form with the OM host 
and port as a part of the path following the volume name.
-For example,
+Moreover, the filesystem URI can take a fully qualified form with the OM host 
and an optional port as a part of the path following the volume name.
+For example, you can specify both host and port:
 
 {{< highlight bash>}}
 hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
 {{< /highlight >}}
 
+When the port number is not specified, it will be retrieved from config key 
`ozone.om.address`.
 
 Review comment:
   done. still keeping L109 but rephrased.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296622)
Time Spent: 1h 10m  (was: 1h)

> Update document for HDDS-1891: Ozone fs shell command should work with 
> default port when port number is not specified
> -
>
> Key: HDDS-1971
> URL: https://issues.apache.org/jira/browse/HDDS-1971
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This should've been part of HDDS-1891.
> Now that the fs shell command works without specifying a default OM port 
> number, we should update the doc at 
> https://hadoop.apache.org/ozone/docs/0.4.0-alpha/ozonefs.html:
> {code}
> ... Moreover, the filesystem URI can take a fully qualified form with the OM 
> host and port as a part of the path following the volume name.
> {code}
> CC [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1974) Implement OM CancelDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1974:
-
Status: Patch Available  (was: Open)

> Implement OM CancelDelegationToken request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1974
> URL: https://issues.apache.org/jira/browse/HDDS-1974
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement OM CancelDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1971) Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not specified

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1971?focusedWorklogId=296621=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296621
 ]

ASF GitHub Bot logged work on HDDS-1971:


Author: ASF GitHub Bot
Created on: 16/Aug/19 22:06
Start Date: 16/Aug/19 22:06
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1306: HDDS-1971. 
Update document for HDDS-1891: Ozone fs shell command should work with default 
port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#discussion_r314905504
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.md
 ##
 @@ -77,13 +77,39 @@ Or put command etc. In other words, all programs like 
Hive, Spark, and Distcp wi
 Please note that any keys created/deleted in the bucket using methods apart 
from OzoneFileSystem will show up as directories and files in the Ozone File 
System.
 
 Note: Bucket and volume names are not allowed to have a period in them.
-Moreover, the filesystem URI can take a fully qualified form with the OM host 
and port as a part of the path following the volume name.
-For example,
+Moreover, the filesystem URI can take a fully qualified form with the OM host 
and an optional port as a part of the path following the volume name.
+For example, you can specify both host and port:
 
 {{< highlight bash>}}
 hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
 {{< /highlight >}}
 
+When the port number is not specified, it will be retrieved from config key 
`ozone.om.address`.
 
 Review comment:
   done. still keeping L109 but rephrased.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296621)
Time Spent: 1h  (was: 50m)

> Update document for HDDS-1891: Ozone fs shell command should work with 
> default port when port number is not specified
> -
>
> Key: HDDS-1971
> URL: https://issues.apache.org/jira/browse/HDDS-1971
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This should've been part of HDDS-1891.
> Now that the fs shell command works without specifying a default OM port 
> number, we should update the doc at 
> https://hadoop.apache.org/ozone/docs/0.4.0-alpha/ozonefs.html:
> {code}
> ... Moreover, the filesystem URI can take a fully qualified form with the OM 
> host and port as a part of the path following the volume name.
> {code}
> CC [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14456) HAState#prepareToEnterState needn't a lock

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14456:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

PR was merged. Resolve this jira. Thanks [~hunhun]!

> HAState#prepareToEnterState needn't a lock
> --
>
> Key: HDFS-14456
> URL: https://issues.apache.org/jira/browse/HDFS-14456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.3.0
>
>
> prepareToEnterState in HAState is called without the context being locked.
> But in NameNode#NameNode, prepareToEnterState is called after haContext.writeLock()
>  
> {code:java}
> try {
>   haContext.writeLock();
>   state.prepareToEnterState(haContext);
>   state.enterState(haContext);
> } finally {
>   haContext.writeUnlock();
> }
> {code}
>  
> Is it OK?
>  
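
Given the resolution, presumably the preparation step can simply run outside the critical section, along these lines (a sketch of the intent only, not the committed change):

{code:java}
// Sketch (assumed intent): prepareToEnterState needs no shared state, so it
// can run lock-free; only the actual state transition stays under the lock.
state.prepareToEnterState(haContext);
try {
  haContext.writeLock();
  state.enterState(haContext);
} finally {
  haContext.writeUnlock();
}
{code}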



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?focusedWorklogId=296606=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296606
 ]

ASF GitHub Bot logged work on HDDS-1911:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:43
Start Date: 16/Aug/19 21:43
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1275: HDDS-1911. 
Support Prefix ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1275#issuecomment-522160832
 
 
   Thanks @bharatviswa504 . LGTM. +1.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296606)
Time Spent: 40m  (was: 0.5h)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1971) Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not specified

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1971?focusedWorklogId=296596=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296596
 ]

ASF GitHub Bot logged work on HDDS-1971:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:26
Start Date: 16/Aug/19 21:26
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1306: HDDS-1971. 
Update document for HDDS-1891: Ozone fs shell command should work with default 
port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#discussion_r314895527
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.md
 ##
 @@ -77,13 +77,39 @@ Or put command etc. In other words, all programs like 
Hive, Spark, and Distcp wi
 Please note that any keys created/deleted in the bucket using methods apart 
from OzoneFileSystem will show up as directories and files in the Ozone File 
System.
 
 Note: Bucket and volume names are not allowed to have a period in them.
-Moreover, the filesystem URI can take a fully qualified form with the OM host 
and port as a part of the path following the volume name.
-For example,
+Moreover, the filesystem URI can take a fully qualified form with the OM host 
and an optional port as a part of the path following the volume name.
+For example, you can specify both host and port:
 
 {{< highlight bash>}}
 hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
 {{< /highlight >}}
 
+When the port number is not specified, it will be retrieved from config key 
`ozone.om.address`.
+For example, we have `ozone.om.address` configured as following in 
`ozone-site.xml`:
+
+{{< highlight xml >}}
+  
+ozone.om.address
+0.0.0.0:6789
+  
+{{< /highlight >}}
+
+When we run command:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com/key
+{{< /highlight >}}
+
+The above command is essentially equivalent to:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:6789/key
+{{< /highlight >}}
+
+Note only the port number in the config is relevant. The host name in config 
`ozone.om.address` is ignored in this case.
 
 Review comment:
   done
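
   As a rough illustration of the defaulting behavior the doc change describes, port resolution could look like this (a sketch with hypothetical names; the 9862 fallback and helper are assumptions, not the actual OzoneFS code):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

final class OmPortResolver {
  // Resolve the OM port for an o3fs URI; fall back to the port component
  // of ozone.om.address when the URI itself carries no port.
  static int resolve(URI fsUri, Configuration conf) {
    int port = fsUri.getPort();            // -1 when the URI has no port
    if (port == -1) {
      // Only the port part of the config value matters here; the host
      // part is ignored, matching the doc text above.
      String omAddress = conf.get("ozone.om.address", "0.0.0.0:9862");
      port = NetUtils.createSocketAddr(omAddress).getPort();
    }
    return port;
  }
}
{code}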
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296596)
Time Spent: 50m  (was: 40m)

> Update document for HDDS-1891: Ozone fs shell command should work with 
> default port when port number is not specified
> -
>
> Key: HDDS-1971
> URL: https://issues.apache.org/jira/browse/HDDS-1971
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This should've been part of HDDS-1891.
> Now that the fs shell command works without specifying a default OM port 
> number, we should update the doc at 
> https://hadoop.apache.org/ozone/docs/0.4.0-alpha/ozonefs.html:
> {code}
> ... Moreover, the filesystem URI can take a fully qualified form with the OM 
> host and port as a part of the path following the volume name.
> {code}
> CC [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296590=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296590
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:21
Start Date: 16/Aug/19 21:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1305: 
HDDS-1938. Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#discussion_r314894101
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,7 +113,7 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String remaining = matcher.groupCount() == 3 ? matcher.group(3) : null;
 
 String omHost = null;
-String omPort = String.valueOf(-1);
+int omPort = -1;
 
 Review comment:
   Thanks @smengcl for offline explanation why we need to set this.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296590)
Time Spent: 1h 40m  (was: 1.5h)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296587=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296587
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:20
Start Date: 16/Aug/19 21:20
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1305: HDDS-1938. Change 
omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#issuecomment-522154964
 
 
   Comment addressed. Pending CI. Thanks for the review @bharatviswa504 !
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296587)
Time Spent: 1.5h  (was: 1h 20m)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14678) Allow triggerBlockReport to a specific namenode

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909403#comment-16909403
 ] 

Hudson commented on HDFS-14678:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17137 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17137/])
HDFS-14678. Allow triggerBlockReport to a specific namenode. (#1252). (weichiu: 
rev 9a1d8cfaf50ec29ffb2d8522ba2f4bc6605d8b8b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/BlockReportOptions.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


> Allow triggerBlockReport to a specific namenode
> ---
>
> Key: HDFS-14678
> URL: https://issues.apache.org/jira/browse/HDFS-14678
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.2
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
> Fix For: 3.3.0
>
>
> In our largest prod cluster (running 2.8.2) we have >3k hosts. Every time we 
> rolling-restart the NNs we need to wait for block reports, which takes 
> >2.5 hours for each NN.
> One way to make it faster is to manually trigger a full block report from all 
> datanodes ([HDFS-7278|https://issues.apache.org/jira/browse/HDFS-7278]). 
> However, the current triggerBlockReport command triggers a block report 
> on all NNs, which floods the active NN as well.
> A quick solution would be adding an option to specify the NN that the manually 
> triggered block report goes to, something like:
> *_hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port>] 
> [-namenode <namenode_host:ipc_port>] _*
> So when restarting a standby or observer NN we can trigger an aggressive 
> block report to that specific NN to exit safemode faster without risking 
> active NN performance.
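
For illustration, an invocation with the proposed -namenode option might look like this (host names and ports are hypothetical placeholders):

{code}
# Hypothetical example: trigger an incremental block report from one datanode
# and direct it only at the restarting standby NameNode.
hdfs dfsadmin -triggerBlockReport -incremental dn01.example.com:9867 -namenode nn2.example.com:8020
{code}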



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909405#comment-16909405
 ] 

Hudson commented on HDDS-1969:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17137 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17137/])
HDDS-1969. Implement OM GetDelegationToken request to use Cache and (github: 
rev 8943e1340da4b3423a677d02bcac75ea26c6de38)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/security/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/package-info.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/security/OMDelegationTokenResponse.java


> Implement OM GetDelegationToken request to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1969
> URL: https://issues.apache.org/jira/browse/HDDS-1969
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement OM GetDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909400#comment-16909400
 ] 

Hadoop QA commented on HDFS-14564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 28s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
4s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}270m 49s{color} | {color:black} {color} |

[jira] [Updated] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-16 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14744:
---
Attachment: HDFS-14744.001.patch
Status: Patch Available  (was: Open)

> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not 
> found for the default web user dr.who. The line should be logged at "error" 
> level on secured clusters; on unsecured clusters we may want to log at 
> "debug" instead, or else the logs fill up with this non-critical line
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-16 Thread CR Hota (JIRA)
CR Hota created HDFS-14744:
--

 Summary: RBF: Non secured routers should not log in error mode 
when UGI is default.
 Key: HDFS-14744
 URL: https://issues.apache.org/jira/browse/HDFS-14744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota
Assignee: CR Hota


RouterClientProtocol#getMountPointStatus logs an error when groups are not found 
for the default web user dr.who. The line should be logged at "error" level on 
secured clusters; on unsecured clusters we may want to log at "debug" instead, or 
else the logs fill up with this non-critical line

{{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: Cannot 
get the remote user: There is no primary group for UGI dr.who (auth:SIMPLE)}}
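
A minimal sketch of the proposed behavior (an assumed shape; the actual patch may differ):

{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;

final class MountPointStatusLogging {
  // Log at error on secured clusters, at debug on unsecured ones
  // (hypothetical helper illustrating the requested change).
  static void logNoPrimaryGroup(Logger log, Exception e) {
    if (UserGroupInformation.isSecurityEnabled()) {
      log.error("Cannot get the remote user: {}", e.getMessage());
    } else {
      // dr.who with no primary group is expected noise on unsecured clusters.
      log.debug("Cannot get the remote user: {}", e.getMessage());
    }
  }
}
{code}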

 

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296574=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296574
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:07
Start Date: 16/Aug/19 21:07
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1305: HDDS-1938. 
Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#discussion_r314890276
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,7 +113,7 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String remaining = matcher.groupCount() == 3 ? matcher.group(3) : null;
 
 String omHost = null;
-String omPort = String.valueOf(-1);
+int omPort = -1;
 
 Review comment:
   I think we should initialize it, because in a corner case where `remaining` 
is empty, `omPort` would be left uninitialized when reaching L154 
`createAdapter()`. I think that might be an issue.
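
   A minimal illustration of the corner case (hypothetical helper, simplified from the initialize() parsing logic; not the actual code):

{code:java}
final class OmPortParser {
  // Without the "= -1" initializer, omPort would be definitely-unassigned on
  // the no-port path and javac would reject its later use in createAdapter().
  static int parseOmPort(String remaining) {
    int omPort = -1;                                  // default: no port given
    if (remaining != null && remaining.contains(":")) {
      omPort = Integer.parseInt(
          remaining.substring(remaining.indexOf(':') + 1));
    }
    return omPort;                                    // defined on every path
  }
}
{code}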
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296574)
Time Spent: 1h 20m  (was: 1h 10m)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296572=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296572
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 16/Aug/19 21:07
Start Date: 16/Aug/19 21:07
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1305: HDDS-1938. 
Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#discussion_r314890276
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,7 +113,7 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String remaining = matcher.groupCount() == 3 ? matcher.group(3) : null;
 
 String omHost = null;
-String omPort = String.valueOf(-1);
+int omPort = -1;
 
 Review comment:
   I think we should initialize it, because in a corner case where `remaining` 
is empty, omPort would be left uninitialized when reaching line 154 
(createAdapter()). I think that might be an issue.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296572)
Time Spent: 1h 10m  (was: 1h)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296567=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296567
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 16/Aug/19 20:51
Start Date: 16/Aug/19 20:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1305: HDDS-1938. 
Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#issuecomment-522147475
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296567)
Time Spent: 1h  (was: 50m)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1969?focusedWorklogId=296541=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296541
 ]

ASF GitHub Bot logged work on HDDS-1969:


Author: ASF GitHub Bot
Created on: 16/Aug/19 20:22
Start Date: 16/Aug/19 20:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1296: 
HDDS-1969. Implement OM GetDelegationToken request to use Cache and 
DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1296
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296541)
Time Spent: 1.5h  (was: 1h 20m)

> Implement OM GetDelegationToken request to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1969
> URL: https://issues.apache.org/jira/browse/HDDS-1969
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement OM GetDelegationToken request to use OM Cache, double buffer.
>  
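
For background, the "cache + double buffer" pattern named here batches OM DB writes roughly as follows (a generic sketch, not the actual OzoneManagerDoubleBuffer):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class DoubleBuffer<T> {
  private List<T> current = new ArrayList<>();

  // Request handlers append their responses to the current buffer.
  synchronized void add(T response) {
    current.add(response);
  }

  // A flush thread swaps buffers and persists the previous batch in one
  // DB write, so handlers never block on the store.
  void flushLoop(Consumer<List<T>> batchWriter) throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      List<T> toFlush;
      synchronized (this) {
        if (current.isEmpty()) { wait(10); continue; }
        toFlush = current;                 // swap: writers fill a fresh list
        current = new ArrayList<>();
      }
      batchWriter.accept(toFlush);         // single batched write (e.g. RocksDB)
    }
  }
}
{code}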



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1969:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Implement OM GetDelegationToken request to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1969
> URL: https://issues.apache.org/jira/browse/HDDS-1969
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement OM GetDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909370#comment-16909370
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
39s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 2f8024efd289 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e356e4f |
| Default Java | 1.8.0_222 |
| unit | 

[jira] [Work logged] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1969?focusedWorklogId=296539=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296539
 ]

ASF GitHub Bot logged work on HDDS-1969:


Author: ASF GitHub Bot
Created on: 16/Aug/19 20:21
Start Date: 16/Aug/19 20:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1296: HDDS-1969. 
Implement OM GetDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1296#issuecomment-522139465
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk. 2nd PR changes are only code comment 
changes. Committing this, as previous CI test failures are not related to this 
PR.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296539)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement OM GetDelegationToken request to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1969
> URL: https://issues.apache.org/jira/browse/HDDS-1969
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement OM GetDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1969?focusedWorklogId=296534=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296534
 ]

ASF GitHub Bot logged work on HDDS-1969:


Author: ASF GitHub Bot
Created on: 16/Aug/19 20:13
Start Date: 16/Aug/19 20:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1296: HDDS-1969. 
Implement OM GetDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1296#issuecomment-522137075
 
 
   Thank You @arp7 for the review.
   Fixed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296534)
Time Spent: 1h 10m  (was: 1h)

> Implement OM GetDelegationToken request to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1969
> URL: https://issues.apache.org/jira/browse/HDDS-1969
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Implement OM GetDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=296533=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296533
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 16/Aug/19 20:06
Start Date: 16/Aug/19 20:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-522135167
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296533)
Time Spent: 2h 50m  (was: 2h 40m)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Fix addAcl/removeAcl in OzoneBucket to use the newly added ACL APIs 
> addAcl/removeAcl from HDDS-1739.
> Remove addBucketAcls, removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl.
>  
> Also fix @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() now require a different 
> permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909361#comment-16909361
 ] 

Íñigo Goiri commented on HDFS-8631:
---

Can we fix some of the checkstyle issues?

> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch
>
>
> Users can already do quota management through the filesystem object. The same 
> operation can be allowed through the REST API.
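For reference, the existing filesystem-side call that the proposed WebHDFS
endpoint would mirror looks like this (a minimal sketch; the REST parameter
names are whatever the attached patches define and are not shown here):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

class SetQuotaExample {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Limit /data to 1000 names and 10 GB of storage space.
    dfs.setQuota(new Path("/data"), 1000, 10L * 1024 * 1024 * 1024);
    // QUOTA_RESET clears a quota; QUOTA_DONT_SET leaves one unchanged.
    dfs.setQuota(new Path("/data"), HdfsConstants.QUOTA_RESET,
        HdfsConstants.QUOTA_DONT_SET);
  }
}
{code}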



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909357#comment-16909357
 ] 

Íñigo Goiri commented on HDFS-14728:


In the tests, we should not use sleeps.
We need to wait for particular conditions; see the sketch below.
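For example, using the test utility already common in HDFS tests. This is a
sketch with a hypothetical condition and helper; the real test would poll for
whatever state it actually needs:

{code:java}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

class WaitExample {
  // Instead of Thread.sleep(...), poll for the condition the test needs.
  static void waitForLiveDataNodes(MiniDFSCluster cluster, int expected)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(
        () -> cluster.getNamesystem().getNumLiveDataNodes() == expected,
        100, 10_000);  // check every 100 ms, give up after 10 s
  }
}
{code}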

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch, 
> HDFS-14728-trunk-003.patch, HDFS-14728-trunk-004.patch, 
> HDFS-14728-trunk-005.patch, HDFS-14728-trunk-006.patch, 
> HDFS-14728-trunk-007.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive and causes large GC pressure on the NameNode.
> When multiple NSs share those millions of DNs through federation and the 
> router listens to all of the NSs, the problem becomes more serious: all the 
> NSs go through GC at the same time.
> RBF should cache the datanode report information and provide an option to 
> disable the cache.
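A rough sketch of the kind of router-side cache being proposed (hypothetical
code using the Guava cache already on the classpath; the actual patch wires
this into the router's RPC server):

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

class DatanodeReportCache {
  private final LoadingCache<DatanodeReportType, DatanodeInfo[]> cache;

  DatanodeReportCache(long ttlMs, boolean enabled) {
    // With the cache disabled, a zero TTL forwards every call downstream.
    cache = CacheBuilder.newBuilder()
        .expireAfterWrite(enabled ? ttlMs : 0, TimeUnit.MILLISECONDS)
        .build(new CacheLoader<DatanodeReportType, DatanodeInfo[]>() {
          @Override
          public DatanodeInfo[] load(DatanodeReportType type) {
            return fetchFromNamenodes(type);  // the expensive fan-out call
          }
        });
  }

  DatanodeInfo[] getDatanodeReport(DatanodeReportType type) throws Exception {
    return cache.get(type);
  }

  // Placeholder for the real per-nameservice getDatanodeReport fan-out.
  private DatanodeInfo[] fetchFromNamenodes(DatanodeReportType type) {
    return new DatanodeInfo[0];
  }
}
{code}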



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14742) RBF:TestRouterFaultTolerant tests are flaky

2019-08-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909354#comment-16909354
 ] 

Íñigo Goiri commented on HDFS-14742:


The problem with this test is that there are a couple of random variables that 
in some cases end up placing all the files in one subcluster.

> RBF:TestRouterFaultTolerant tests are flaky
> ---
>
> Key: HDFS-14742
> URL: https://issues.apache.org/jira/browse/HDFS-14742
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/27516/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt]
> {code:java}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.665 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
> [ERROR] 
> testWriteWithFailedSubcluster(org.apache.hadoop.hdfs.server.federation.router.TestRouterFaultTolerant)
>   Time elapsed: 3.516 s  <<< FAILURE!
> java.lang.AssertionError: 
> Failed to run "Full tests": 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot find 
> locations for /HASH_ALL-failsubcluster, because the default nameservice is 
> disabled to read or write
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.lookupLocation(MountTableResolver.java:425)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver$1.call(MountTableResolver.java:391)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver$1.call(MountTableResolver.java:388)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4876)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3952)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4871)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:394)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.getDestinationForPath(MultipleDestinationMountTableResolver.java:87)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1498)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getListing(RouterClientProtocol.java:734)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:827)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:732)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy35.getListing(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:678)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (HDFS-14456) HAState#prepareToEnterState doesn't need a lock

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909349#comment-16909349
 ] 

Hadoop QA commented on HDFS-14456:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
38s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-770/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/770 |
| JIRA Issue | HDFS-14456 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 2631d0011390 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e356e4f |
| Default Java | 1.8.0_222 |
| unit | 

[jira] [Commented] (HDFS-14318) DN cannot recognize the repaired disk and must be restarted to recognize it

2019-08-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909340#comment-16909340
 ] 

Hadoop QA commented on HDFS-14318:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
46s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 154 unchanged - 0 fixed = 155 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in 
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()  At 
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
  At DataNode.java:[lines 2212-2214] |
|  |  Null pointer dereference of DataNode.errorDisk in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:[line 3493] |
| Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.TestQuota |
|   
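The first FindBugs item above is the classic unsafe double-checked-locking
idiom; the usual fix is to make the checked field volatile. A generic sketch
of the corrected pattern (not the actual DataNode code):

{code:java}
/** Generic fix for the "possible doublecheck" FindBugs pattern: the field
 *  must be volatile for double-checked locking to be safe. */
class CheckDiskHolder {
  private volatile Thread checkDiskThread;

  void startCheckDiskThread(Runnable checker) {
    if (checkDiskThread == null) {           // first, unlocked check
      synchronized (this) {
        if (checkDiskThread == null) {       // re-check under the lock
          checkDiskThread = new Thread(checker, "checkDiskThread");
          checkDiskThread.setDaemon(true);
          checkDiskThread.start();
        }
      }
    }
  }
}
{code}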

[jira] [Work logged] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1596?focusedWorklogId=296509=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296509
 ]

ASF GitHub Bot logged work on HDDS-1596:


Author: ASF GitHub Bot
Created on: 16/Aug/19 19:08
Start Date: 16/Aug/19 19:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #861: HDDS-1596. Create 
service endpoint to download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#issuecomment-522119175
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 562 | trunk passed |
   | +1 | compile | 367 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 451 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 662 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 589 | the patch passed |
   | +1 | compile | 388 | the patch passed |
   | +1 | javac | 388 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 655 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 681 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 305 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1720 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7584 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml yamllint shellcheck shelldocs 
|
   | uname | Linux e68565849e6b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e356e4f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/testReport/ |
   | Max. process+thread count | 5267 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone/dist 
hadoop-ozone/ozone-manager hadoop-ozone/ozonefs hadoop-ozone/s3gateway U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/9/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was 

[jira] [Commented] (HDFS-14741) RBF: RecoverLease should return false when the file is open in multiple destinations

2019-08-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909335#comment-16909335
 ] 

Íñigo Goiri commented on HDFS-14741:


 [^HDFS-14741-trunk-001.patch] LGTM.
Just one question: why the change in testSubclusterDown()?

> RBF: RecoverLease should return false when the file is open in multiple 
> destinations
> --
>
> Key: HDFS-14741
> URL: https://issues.apache.org/jira/browse/HDFS-14741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14741-trunk-001.patch
>
>
> RecoverLease should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 and is being written, while ns1 doesn't have this file.
> In this case *recoverLease* should return false instead of throwing 
> FileNotFoundException.
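A sketch of the behavior being requested (hypothetical code, not the attached
patch): try each destination, treat FileNotFoundException as "not in this
subcluster", and only propagate it when no destination has the file.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

/** Hypothetical sketch of the desired RBF recoverLease semantics. */
class RecoverLeaseSketch {
  interface NsClient {                       // stand-in for one downstream NS
    boolean recoverLease(String path) throws IOException;
  }

  static boolean recoverLease(List<NsClient> destinations, String path)
      throws IOException {
    boolean foundAnywhere = false;
    for (NsClient ns : destinations) {
      try {
        if (ns.recoverLease(path)) {
          return true;                       // lease recovered, file closed
        }
        foundAnywhere = true;                // file exists but is still open
      } catch (FileNotFoundException e) {
        // Not an error: the file may live in another destination.
      }
    }
    if (!foundAnywhere) {
      throw new FileNotFoundException(path); // missing from every destination
    }
    return false;                            // open somewhere: report false
  }
}
{code}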



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14743) Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2019-08-16 Thread Ramesh Mani (JIRA)
Ramesh Mani created HDFS-14743:
--

 Summary: Enhance INodeAttributeProvider/ AccessControlEnforcer 
Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move 
etc...
 Key: HDFS-14743
 URL: https://issues.apache.org/jira/browse/HDFS-14743
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0
Reporter: Ramesh Mani


Enhance the INodeAttributeProvider / AccessControlEnforcer interface in HDFS to 
support authorization of mkdir, rm, rmdir, copy, move, etc. This should help 
implementors of the interface, like Apache Ranger's HDFS authorization plugin, 
authorize and audit those command sets.
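Conceptually, the ask is to pass the operation being performed down to the
enforcer so plugins can authorize and audit per command. A toy illustration of
that idea (entirely hypothetical; it does not reproduce the real
AccessControlEnforcer#checkPermission signature):

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

/** Toy illustration only: the real interface is
 *  INodeAttributeProvider.AccessControlEnforcer, whose checkPermission
 *  signature is much larger and is not reproduced here. */
interface OperationAwareEnforcer {
  // "operationName" (mkdir, delete, rename, ...) is the extra context this
  // JIRA asks for, so plugins like Ranger can authorize and audit per
  // command rather than per low-level path-permission check.
  void checkPermissionWithContext(String operationName, String path,
      UserGroupInformation caller, FsAction access)
      throws AccessControlException;
}
{code}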



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12859) Admin command resetBalancerBandwidth

2019-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-12859:
--

Assignee: Jianfei Jiang

> Admin command resetBalancerBandwidth
> 
>
> Key: HDFS-12859
> URL: https://issues.apache.org/jira/browse/HDFS-12859
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Major
> Attachments: 
> 0003-HDFS-12859-Admin-command-resetBalancerBandwidth.patch, 
> 0004-HDFS-12859-Admin-command-resetBalancerBandwidth.patch, HDFS-12859.patch
>
>
> We can already set the balancer bandwidth dynamically using the 
> setBalancerBandwidth command. The value is not persistent and is not stored 
> in the configuration file, so different datanodes may each have a different 
> default or previous setting in their configuration.
> We wanted to develop a scheduled balancer task that runs at midnight every 
> day: set a larger bandwidth for it and reset the value after it finishes. 
> However, we found it difficult to restore the different settings on 
> different datanodes, as the setBalancerBandwidth command can only push the 
> same value to all datanodes. If we want a unique setting for every datanode, 
> we have to restart the datanodes.
> So it would be useful to have a command to synchronize the setting with the 
> configuration file. 
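For context, a sketch of how the existing knob behaves and what a reset would
need to restore (illustrative only; the proposed command itself is in the
attached patches):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class BalancerBandwidthExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(conf);

    // Existing behavior: pushes ONE transient value to every datanode.
    dfs.setBalancerBandwidth(100L * 1024 * 1024);  // 100 MB/s everywhere

    // What a reset would restore on each DN: its own configured
    // dfs.datanode.balance.bandwidthPerSec, which may differ per node.
    long configured = conf.getLong(
        DFSConfigKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_KEY,
        DFSConfigKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_DEFAULT);
    System.out.println("Configured default: " + configured);
  }
}
{code}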



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


