[jira] [Commented] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831417#comment-16831417
 ] 

Hudson commented on HDDS-1483:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16485 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16485/])
HDDS-1483. Fix getMultipartKey javadoc. (#790) (bharat: rev 
f682a171f59e61ea6bd1b1bed911820789d150f0)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java


> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.
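>  
> For reference, a sketch of the cleaned-up declaration after removing the stray 
> marker (javadoc only, no behavior change):
> {code:java}
> /**
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);
> {code}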



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1475) Fix OzoneContainer start method

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1475?focusedWorklogId=236110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236110
 ]

ASF GitHub Bot logged work on HDDS-1475:


Author: ASF GitHub Bot
Created on: 02/May/19 05:01
Start Date: 02/May/19 05:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #788: 
HDDS-1475 : Fix OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r280289110
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
 ##
 @@ -172,7 +176,9 @@ private void stopContainerScrub() {
 if (scrubber == null) {
   return;
 }
-scrubber.down();
+if (scrubber.isHalted()) {
 
 Review comment:
   Should this be if (!scrubber.isHalted())?
   
   As halt is initially set to false, and it is set to true only when the scrubber is stopped.
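   
   A sketch of the suggested inversion (the stop call inside the guard is the 
   existing logic from the patch, elided in the truncated diff above):
   
   ```java
   // stop the scrubber only when it has NOT already been halted;
   // halt starts out false and flips to true once the scrubber is stopped
   if (!scrubber.isHalted()) {
     // ... existing stop/halt logic from the patch
   }
   ```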
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236110)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix OzoneContainer start method
> ---
>
> Key: HDDS-1475
> URL: https://issues.apache.org/jira/browse/HDDS-1475
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In OzoneContainer start() we have 
> {code:java}
> startContainerScrub();
> writeChannel.start();
> readChannel.start();
> hddsDispatcher.init();
> hddsDispatcher.setScmId(scmId);{code}
>  
> Suppose readChannel.start() fails here for some reason; VersionEndPointTask 
> will then try to start OzoneContainer again. This causes a problem for 
> writeChannel.start() if it is already started. 
>  
> Fix the logic so that if a service is already started, we don't attempt to 
> start it again (see the sketch below). A similar change is needed for stop().
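>  
> A minimal sketch of that guard ({{isStarted}} is a hypothetical field name, 
> not necessarily what the patch uses):
> {code:java}
> // uses java.util.concurrent.atomic.AtomicBoolean
> private final AtomicBoolean isStarted = new AtomicBoolean(false);
>
> public void start(String scmId) throws IOException {
>   // skip if a retry (e.g. from VersionEndPointTask) already started us
>   if (!isStarted.compareAndSet(false, true)) {
>     LOG.info("Attempt to start an already started OzoneContainer, skipping.");
>     return;
>   }
>   startContainerScrub();
>   writeChannel.start();
>   readChannel.start();
>   hddsDispatcher.init();
>   hddsDispatcher.setScmId(scmId);
> }
>
> public void stop() {
>   // make stop() idempotent the same way
>   if (!isStarted.compareAndSet(true, false)) {
>     return;
>   }
>   // ... stop the channels and the scrubber in reverse order
> }
> {code}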



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1395) Key write fails with BlockOutputStream has been closed exception

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1395?focusedWorklogId=236108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236108
 ]

ASF GitHub Bot logged work on HDDS-1395:


Author: ASF GitHub Bot
Created on: 02/May/19 04:52
Start Date: 02/May/19 04:52
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #749: HDDS-1395. Key 
write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-488552588
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236108)
Time Spent: 4h 10m  (was: 4h)

> Key write fails with BlockOutputStream has been closed exception
> 
>
> Key: HDDS-1395
> URL: https://issues.apache.org/jira/browse/HDDS-1395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Attachments: HDDS-1395.000.patch, HDDS-1395.001.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Key write fails with BlockOutputStream has been closed
> {code}
> 2019-04-05 11:24:47,770 ERROR ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:load(102)) - LOADGEN: Create 
> key:pool-431-thread-9-2092651262 failed with exception, but skipping
> java.io.IOException: BlockOutputStream has been closed.
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.checkOpen(BlockOutputStream.java:662)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:245)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:325)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:287)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:100)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:143)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
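>  
> A hedged sketch of the guard that throws above (the real check lives at 
> BlockOutputStream.java:662 in the trace; the field name here is assumed):
> {code:java}
> private void checkOpen() throws IOException {
>   // once the stream is closed, any further write() fails as in the trace
>   if (closed) {
>     throw new IOException("BlockOutputStream has been closed.");
>   }
> }
> {code}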



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?focusedWorklogId=236107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236107
 ]

ASF GitHub Bot logged work on HDDS-1483:


Author: ASF GitHub Bot
Created on: 02/May/19 04:51
Start Date: 02/May/19 04:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #790: 
HDDS-1483:Fix javadoc
URL: https://github.com/apache/hadoop/pull/790
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236107)
Time Spent: 50m  (was: 40m)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1483.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1483:
-
Affects Version/s: 0.4.0

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?focusedWorklogId=236106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236106
 ]

ASF GitHub Bot logged work on HDDS-1483:


Author: ASF GitHub Bot
Created on: 02/May/19 04:50
Start Date: 02/May/19 04:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #790: HDDS-1483:Fix 
javadoc
URL: https://github.com/apache/hadoop/pull/790#issuecomment-488552435
 
 
   Thank you @dineshchitlangia for fixing this issue.
   +1 LGTM.
   I will commit this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236106)
Time Spent: 40m  (was: 0.5h)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831401#comment-16831401
 ] 

Hadoop QA commented on HDDS-1483:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/790 |
| JIRA Issue | HDDS-1483 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36a7d8c5f381 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 7cb46f0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Fix getMultipartKey javadoc
> ---
>
> Key: 

[jira] [Work logged] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?focusedWorklogId=236102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236102
 ]

ASF GitHub Bot logged work on HDDS-1483:


Author: ASF GitHub Bot
Created on: 02/May/19 03:54
Start Date: 02/May/19 03:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #790: HDDS-1483:Fix 
javadoc
URL: https://github.com/apache/hadoop/pull/790#issuecomment-488545720
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1074 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 665 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 57 | trunk passed |
   | +1 | javadoc | 34 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 12 | the patch passed |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 65 | the patch passed |
   | +1 | javadoc | 30 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | common in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3018 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/790 |
   | JIRA Issue | HDDS-1483 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 36a7d8c5f381 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7cb46f0 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-790/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236102)
Time Spent: 0.5h  (was: 20m)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1483 started by Dinesh Chitlangia.
---
> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?focusedWorklogId=236100&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236100
 ]

ASF GitHub Bot logged work on HDDS-1483:


Author: ASF GitHub Bot
Created on: 02/May/19 03:03
Start Date: 02/May/19 03:03
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #790: HDDS-1483:Fix 
javadoc
URL: https://github.com/apache/hadoop/pull/790#issuecomment-488539749
 
 
   @bharatviswa504 Thanks for filing the JIRA. Please help review/commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236100)
Time Spent: 20m  (was: 10m)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?focusedWorklogId=236099&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236099
 ]

ASF GitHub Bot logged work on HDDS-1483:


Author: ASF GitHub Bot
Created on: 02/May/19 03:02
Start Date: 02/May/19 03:02
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #790: 
HDDS-1483:Fix javadoc
URL: https://github.com/apache/hadoop/pull/790
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236099)
Time Spent: 10m
Remaining Estimate: 0h

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1483:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14453) Improve Bad Sequence Number Error Message

2019-05-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831342#comment-16831342
 ] 

Wei-Chiu Chuang commented on HDFS-14453:


+1 from me.

> Improve Bad Sequence Number Error Message
> -
>
> Key: HDFS-14453
> URL: https://issues.apache.org/jira/browse/HDFS-14453
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14453.001.patch
>
>
> {code:java|title=DataStreamer.java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno" +
> " for block " + block +
> one.getSeqno() + " but received " + seqno);
>   }
> {code}
> https://github.com/apache/hadoop/blob/685cb83e4c3f433c5147e35217ce79ea520a0da5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1154-L1158
> There is no space between the {{block}} and the {{one.getSeqno()}}.  Please 
> change to:
> {code:java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno " + 
> one.getSeqno()
> + " for block " + block + " but received " + seqno);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14451) Incorrect header or version mismatch log message

2019-05-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831340#comment-16831340
 ] 

Wei-Chiu Chuang commented on HDFS-14451:


Hi Shweta,

I think you should also call {{setupBadVersionResponse(version)}} in the first 
case, so that the client knows what went wrong.

> Incorrect header or version mismatch log message
> 
>
> Key: HDFS-14451
> URL: https://issues.apache.org/jira/browse/HDFS-14451
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14451.001.patch
>
>
> {code:java|title=Server.java}
>   if (!RpcConstants.HEADER.equals(dataLengthBuffer)
>   || version != CURRENT_VERSION) {
> //Warning is ok since this is not supposed to happen.
> LOG.warn("Incorrect header or version mismatch from " + 
>  hostAddress + ":" + remotePort +
>  " got version " + version + 
>  " expected version " + CURRENT_VERSION);
> setupBadVersionResponse(version);
> return -1;
> {code}
> This message should include the value of {{RpcConstants.HEADER}} and 
> {{dataLengthBuffer}} in addition to just the version information or else that 
> data is lost.
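>  
> A sketch of the expanded message (exact wording is up to the patch):
> {code:java}
>   if (!RpcConstants.HEADER.equals(dataLengthBuffer)
>       || version != CURRENT_VERSION) {
>     //Warning is ok since this is not supposed to happen.
>     LOG.warn("Incorrect header or version mismatch from " + 
>              hostAddress + ":" + remotePort +
>              " got version " + version + 
>              " expected version " + CURRENT_VERSION +
>              " got header " + dataLengthBuffer +
>              " expected header " + RpcConstants.HEADER);
>     setupBadVersionResponse(version);
>     return -1;
> {code}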



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14463) Add Log Level link under NameNode and DataNode Web UI Utilities dropdown

2019-05-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831332#comment-16831332
 ] 

Hudson commented on HDFS-14463:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16484 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16484/])
HDFS-14463. Add Log Level link under NameNode and DataNode Web UI (weichiu: rev 
7cb46f035a92056783bad23a9abc6a264d71285d)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown
> 
>
> Key: HDFS-14463
> URL: https://issues.apache.org/jira/browse/HDFS-14463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14463.001.patch, dn_postpatch.png, nn_postpatch.png
>
>
> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown:
>  !nn_postpatch.png! 
>  !dn_postpatch.png! 
> CC [~arpitagarwal] [~jojochuang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14312) KMS-o-meter: Scale test KMS using kms audit log

2019-05-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831331#comment-16831331
 ] 

Wei-Chiu Chuang commented on HDFS-14312:


Update: I took the first approach, replaying at the component level (approach 1) 
rather than the system level (approach 2).

I think that's reasonable. The second one has more things attached to it: it 
would require the NN fsimage, HDFS audit logs, and a dump of the key provider. 
The tool I'm working on requires only the KMS audit log, so it is much more 
lightweight.

> KMS-o-meter: Scale test KMS using kms audit log
> ---
>
> Key: HDFS-14312
> URL: https://issues.apache.org/jira/browse/HDFS-14312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> It appears to me that Dynamometer's architecture allows KMS scale tests too.
> I imagine there are two ways to scale test a KMS.
> # Take KMS audit logs, and replay the logs against a KMS.
> # Configure Dynamometer to start KMS in addition to NameNode. Assuming the 
> fsimage comes from an encrypted cluster, replaying HDFS audit log also tests 
> KMS.
> It would be even more interesting to have a tool that converts an unencrypted 
> cluster fsimage to an encrypted one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14463) Add Log Level link under NameNode and DataNode Web UI Utilities dropdown

2019-05-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14463:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-3.2 and branch-3.1

 

Thanks [~smeng]!

> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown
> 
>
> Key: HDFS-14463
> URL: https://issues.apache.org/jira/browse/HDFS-14463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14463.001.patch, dn_postpatch.png, nn_postpatch.png
>
>
> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown:
>  !nn_postpatch.png! 
>  !dn_postpatch.png! 
> CC [~arpitagarwal] [~jojochuang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1132) Ozone serialization codec for Ozone S3 secret table

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1132.
--
Resolution: Duplicate

> Ozone serialization codec for Ozone S3 secret table
> ---
>
> Key: HDDS-1132
> URL: https://issues.apache.org/jira/browse/HDDS-1132
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, S3
>Reporter: Elek, Marton
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
>
> HDDS-748/HDDS-864 introduced an option to use strongly typed metadata tables 
> and separated the serialization/deserialization logic into dedicated codec 
> implementations.
> HDDS-937 introduced a new S3 secret table which is not codec based.
> I propose to use codecs for this table.
> In OzoneMetadataManager the return value of getS3SecretTable() should be 
> changed from Table<byte[], byte[]> to Table<String, S3SecretValue>. 
> The encoding/decoding logic of S3SecretValue should be registered in 
> ~OzoneMetadataManagerImpl:L204
> As the codecs are type based we may need a wrapper class to encode the String 
> kerberos id with md5: class S3SecretKey(String name = kerberosId). Long term 
> we can modify the S3SecretKey to support multiple keys for the same kerberos 
> id.
>  
>  
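> A hedged sketch of such a codec, assuming the HDDS-748 Codec interface 
> exposes toPersistedFormat/fromPersistedFormat and throws IOException (check 
> the actual interface), and that S3SecretValue has protobuf conversion 
> helpers (also an assumption):
> {code:java}
> public class S3SecretValueCodec implements Codec<S3SecretValue> {
>
>   @Override
>   public byte[] toPersistedFormat(S3SecretValue value) throws IOException {
>     // hypothetical protobuf helper on S3SecretValue
>     return value.getProtobuf().toByteArray();
>   }
>
>   @Override
>   public S3SecretValue fromPersistedFormat(byte[] rawData) throws IOException {
>     // the S3Secret proto message name is assumed
>     return S3SecretValue.fromProtobuf(S3Secret.parseFrom(rawData));
>   }
> }
> // registered roughly where the description points, e.g.
> // addCodec(S3SecretValue.class, new S3SecretValueCodec());
> {code}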



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1132) Ozone serialization codec for Ozone S3 secret table

2019-05-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831321#comment-16831321
 ] 

Bharat Viswanadham edited comment on HDDS-1132 at 5/1/19 11:42 PM:
---

While working on another change I found this was not implemented, and I have 
implemented it as part of HDDS-1482. That Jira also changes s3Table along with 
s3SecretTable.

[~zvenczel] I am closing this as a duplicate of HDDS-1482.


was (Author: bharatviswa):
While working on another change I found this was not implemented, and I have 
implemented it as part of HDDS-1482.

[~zvenczel] I am closing this as a duplicate of HDDS-1482.

> Ozone serialization codec for Ozone S3 secret table
> ---
>
> Key: HDDS-1132
> URL: https://issues.apache.org/jira/browse/HDDS-1132
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, S3
>Reporter: Elek, Marton
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
>
> HDDS-748/HDDS-864 introduced an option to use strongly typed metadata tables 
> and separated the serialization/deserialization logic into dedicated codec 
> implementations.
> HDDS-937 introduced a new S3 secret table which is not codec based.
> I propose to use codecs for this table.
> In OzoneMetadataManager the return value of getS3SecretTable() should be 
> changed from Table<byte[], byte[]> to Table<String, S3SecretValue>. 
> The encoding/decoding logic of S3SecretValue should be registered in 
> ~OzoneMetadataManagerImpl:L204
> As the codecs are type based we may need a wrapper class to encode the String 
> kerberos id with md5: class S3SecretKey(String name = kerberosId). Long term 
> we can modify the S3SecretKey to support multiple keys for the same kerberos 
> id.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1132) Ozone serialization codec for Ozone S3 secret table

2019-05-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831321#comment-16831321
 ] 

Bharat Viswanadham commented on HDDS-1132:
--

While working on another change I found this was not implemented, and I have 
implemented it as part of HDDS-1482.

[~zvenczel] I am closing this as a duplicate of HDDS-1482.

> Ozone serialization codec for Ozone S3 secret table
> ---
>
> Key: HDDS-1132
> URL: https://issues.apache.org/jira/browse/HDDS-1132
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, S3
>Reporter: Elek, Marton
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
>
> HDDS-748/HDDS-864 introduced an option to use strongly typed metadata tables 
> and separated the serialization/deserialization logic into dedicated codec 
> implementations.
> HDDS-937 introduced a new S3 secret table which is not codec based.
> I propose to use codecs for this table.
> In OzoneMetadataManager the return value of getS3SecretTable() should be 
> changed from Table<byte[], byte[]> to Table<String, S3SecretValue>. 
> The encoding/decoding logic of S3SecretValue should be registered in 
> ~OzoneMetadataManagerImpl:L204
> As the codecs are type based we may need a wrapper class to encode the String 
> kerberos id with md5: class S3SecretKey(String name = kerberosId). Long term 
> we can modify the S3SecretKey to support multiple keys for the same kerberos 
> id.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831318#comment-16831318
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 24m 
25s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
13s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 29m  
8s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
19s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 6 new + 0 unchanged - 0 fixed = 6 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
23s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
24s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
35s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  7m  
2s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  6s{color} 
| 

[jira] [Commented] (HDDS-1175) Serve read requests directly from RocksDB

2019-05-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831257#comment-16831257
 ] 

Anu Engineer commented on HDDS-1175:


[~hanishakoneru] Sorry for commenting so late. I have not been looking at the HA 
patches. I have a concern here.

bq. On OM leader, we run a periodic role check to verify its leader status.
This means that, at the end of the day, it is possible that we do not "know" 
for sure whether we are the leader. This suffers from a time-of-check vs. 
time-of-use issue: one OM might think that it is the leader when it really is 
not.

Many other systems have used the notion of a "Leader Lease" to avoid this 
problem. Another way to solve this issue, which I have been thinking about, is 
to read from any 2 nodes, and if the values of the key do not agree, use the 
later version of the key (see the sketch below).

Without one of these approaches, OM HA will weaken the current set of strict 
serializability guarantees of OM (that is, OM without HA). I thought I would 
flag this here for your consideration.
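
A rough sketch of the two-node read idea (all type and method names are 
hypothetical; this illustrates the approach, not a patch):

{code:java}
// read from any two OMs; on disagreement prefer the later version of the key
OmKeyInfo readKey(String key, OmClient om1, OmClient om2) throws IOException {
  OmKeyInfo a = om1.lookupKey(key);
  OmKeyInfo b = om2.lookupKey(key);
  if (a.equals(b)) {
    return a;
  }
  // one OM is behind; take the value with the newer update id
  return a.getUpdateID() >= b.getUpdateID() ? a : b;
}
{code}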


> Serve read requests directly from RocksDB
> -
>
> Key: HDDS-1175
> URL: https://issues.apache.org/jira/browse/HDDS-1175
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1175.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We can serve read requests directly from the OM's RocksDB instead of going 
> through the Ratis server. OM should first check its role, and only if it is 
> the leader can it serve read requests. 
> There can be a scenario where an OM loses its Leader status but does not know 
> about the new election in the ring. This OM could serve stale reads for the 
> duration of the heartbeat timeout, but this should be acceptable (similar to 
> how the Standby Namenode could possibly serve stale reads till it figures out 
> the new status).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1483:
-
Priority: Minor  (was: Major)

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-1483:
---

Assignee: Dinesh Chitlangia

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1483:
-
Labels: newbie  (was: )

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1483:


 Summary: Fix getMultipartKey javadoc
 Key: HDDS-1483
 URL: https://issues.apache.org/jira/browse/HDDS-1483
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham


{code:java}
/**
<<< HEAD
 * Returns the DB key name of a multipart upload key in OM metadata store.
 *
 * @param volume - volume name
 * @param bucket - bucket name
 * @param key - key name
 * @param uploadId - the upload id for this key
 * @return bytes of DB key.
 */
 String getMultipartKey(String volume, String bucket, String key, String
 uploadId);{code}
 

Remove the unwanted *<<< HEAD* change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-01 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831226#comment-16831226
 ] 

Íñigo Goiri commented on HDFS-14460:


The failed unit tests are not related to this change (we even get different 
errors for the same code change).
+1 on  [^HDFS-14460.004.patch].

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr looks up the HTTP address irrespective of the 
> policy configured. It should instead look at the configured policy and return 
> the appropriate web address.
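>  
> A sketch of the policy-aware lookup ({{HttpConfig.Policy}} is the existing 
> Hadoop helper; the surrounding address-lookup helper is assumed):
> {code:java}
> // pick the config key based on the configured dfs.http.policy
> HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
> String addrKey = policy.isHttpsEnabled()
>     ? DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY
>     : DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY;
> // resolve the web address with the chosen key (helper name assumed)
> String webAddr = getNamenodeWebAddr(conf, nsId, nnId, addrKey);
> {code}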



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831213#comment-16831213
 ] 

Eric Yang edited comment on HDDS-1458 at 5/1/19 7:59 PM:
-

[~elek] Patch 002 moves the blockade tests from dist to 
fault-injection-test/network-tests.  It also creates a setup for disk-based 
tests.  One problem with the Ozone docker image is that it mounts the ozone 
tarball in expanded form from the dist/target directory.  This prevents 
integration-test from iterating on the same ozone binaries.  I made a fix for 
the docker image to build as a separate submodule using maven-assembly instead 
of dist-tar-stitching.  Dist-tar-stitching was invented to fix symlink support 
in the Hadoop tarball.  However, it introduced a regression in maven 
dependencies: Maven is unable to reference the ozone tarball for the docker 
image build.  I verified that the ozone tarball does not require symlinks; 
therefore, switching to maven-assembly can produce the Docker Ozone image more 
efficiently in the maven build. 

In YARN-9523, I suggested using the dist profile to build docker images.  For 
consistency, the Ozone docker image will be built if the -Pdist flag is given.  
There was opposition in the YARN community to making the docker build part of 
the default build.  [~jeagles] [~ebadger], please speak up if you have concerns 
about renaming the docker profile to the dist profile.  As a compromise, we 
made the docker build optional with the -Pdocker profile.  However, it might be 
worthwhile to review whether we can use the dist profile to build the 
distribution, if we intend to release docker images as part of the release.  
Without an inline docker image build, the fault-injection test will try to use 
a docker image with the same tag name as apache/ozone:[version] from either the 
local cache or docker hub.


was (Author: eyang):
[~elek] Patch 002 moves the blockade tests from dist to 
fault-injection-test/network-tests. It also sets up disk-based tests. One 
problem with the Ozone docker image is that it mounts the ozone tarball in 
expanded form from the dist/target directory, which prevents integration-test 
from iterating on the same ozone binaries. I changed the docker image to build 
as a separate submodule using maven-assembly instead of dist-tar-stitching. 
Dist-tar-stitching was invented to fix symlink support in the tarball, but it 
introduced a regression in Maven dependency handling: Maven is unable to 
reference the ozone tarball for the docker image build. I verified that the 
ozone tarball does not require symlinks, so switching to maven-assembly can 
produce the Docker Ozone image more efficiently in the Maven build.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831217#comment-16831217
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2670/console in case of 
problems.


> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Status: Patch Available  (was: Open)

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831213#comment-16831213
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] Patch 002 moves the blockade tests from dist to 
fault-injection-test/network-tests. It also sets up disk-based tests. One 
problem with the Ozone docker image is that it mounts the ozone tarball in 
expanded form from the dist/target directory, which prevents integration-test 
from iterating on the same ozone binaries. I changed the docker image to build 
as a separate submodule using maven-assembly instead of dist-tar-stitching. 
Dist-tar-stitching was invented to fix symlink support in the tarball, but it 
introduced a regression in Maven dependency handling: Maven is unable to 
reference the ozone tarball for the docker image build. I verified that the 
ozone tarball does not require symlinks, so switching to maven-assembly can 
produce the Docker Ozone image more efficiently in the Maven build.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?focusedWorklogId=235957=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-235957
 ]

ASF GitHub Bot logged work on HDDS-1482:


Author: ASF GitHub Bot
Created on: 01/May/19 19:40
Start Date: 01/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #789: HDDS-1482. Use 
strongly typed codec implementations for the S3Table.
URL: https://github.com/apache/hadoop/pull/789#issuecomment-488391699
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 525 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1072 | trunk passed |
   | +1 | compile | 117 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 76 | trunk passed |
   | +1 | shadedclient | 806 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 108 | trunk passed |
   | +1 | javadoc | 67 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 72 | the patch passed |
   | +1 | compile | 109 | the patch passed |
   | +1 | javac | 109 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 58 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 51 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | common in the patch passed. |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 4184 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-789/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/789 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 2cc0c2693e0e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4877f0a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-789/1/testReport/ |
   | Max. process+thread count | 442 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-789/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 235957)
Time Spent: 20m  (was: 10m)

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-01 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831210#comment-16831210
 ] 

CR Hota commented on HDFS-14460:


[~elgoiri] Thanks for the clarifications and the previous review. Uploaded 004 
with the comments addressed.

 

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of the HTTP address irrespective of 
> the configured policy. It should instead look at the configured policy and 
> return the appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831205#comment-16831205
 ] 

Hadoop QA commented on HDFS-14460:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14460 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967570/HDFS-14460.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 38e278795812 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4877f0a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26739/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26739/testReport/ |
| Max. process+thread count | 2911 (vs. ulimit of 1) |
| modules | C: 

[jira] [Assigned] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-05-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDDS-1200:


Assignee: Shweta

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Shweta
>Priority: Major
>
> Background scrubber should read each chunk and verify the checksum.
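
As a rough sketch of what that per-chunk verification involves (standalone and 
simplified; Ozone's actual chunk layout and checksum metadata live in the 
container code, and the CRC32 choice here is only an assumption):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

final class ChunkChecksumSketch {
  /** Recompute the chunk's checksum and compare it with the stored value. */
  static boolean verifyChunk(Path chunkFile, long storedChecksum)
      throws IOException {
    byte[] data = Files.readAllBytes(chunkFile); // read the chunk bytes
    CRC32 crc = new CRC32();
    crc.update(data, 0, data.length);            // recompute over the data
    return crc.getValue() == storedChecksum;     // mismatch => corruption
  }
}
{code}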



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Attachment: HDDS-1458.002.patch

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1482:
-
Status: Patch Available  (was: In Progress)

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?focusedWorklogId=235938=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-235938
 ]

ASF GitHub Bot logged work on HDDS-1482:


Author: ASF GitHub Bot
Created on: 01/May/19 18:29
Start Date: 01/May/19 18:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #789: 
HDDS-1482. Use strongly typed codec implementations for the S3Table.
URL: https://github.com/apache/hadoop/pull/789
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 235938)
Time Spent: 10m
Remaining Estimate: 0h

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1482:
-
Labels: pull-request-available  (was: )

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1478) Provide k8s resources files for prometheus and performance tests

2019-05-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831156#comment-16831156
 ] 

Anu Engineer commented on HDDS-1478:


+1, thank you for getting this done. Please feel free to commit.

> Provide k8s resources files for prometheus and performance tests
> 
>
> Key: HDDS-1478
> URL: https://issues.apache.org/jira/browse/HDDS-1478
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Similar to HDDS-1412, we can further improve the available k8s resources by 
> providing example resources to:
> 1) install prometheus
> 2) execute a freon test and check the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1478) Provide k8s resources files for prometheus and performance tests

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1478?focusedWorklogId=235924=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-235924
 ]

ASF GitHub Bot logged work on HDDS-1478:


Author: ASF GitHub Bot
Created on: 01/May/19 18:13
Start Date: 01/May/19 18:13
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #783: HDDS-1478. Provide 
k8s resources files for prometheus and performance tests
URL: https://github.com/apache/hadoop/pull/783#issuecomment-488364360
 
 
   +1, thank you for getting this done. Please feel free to commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 235924)
Time Spent: 50m  (was: 40m)

> Provide k8s resources files for prometheus and performance tests
> 
>
> Key: HDDS-1478
> URL: https://issues.apache.org/jira/browse/HDDS-1478
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Similar to HDDS-1412, we can further improve the available k8s resources by 
> providing example resources to:
> 1) install prometheus
> 2) execute a freon test and check the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1482:


 Summary: Use strongly typed codec implementations for the S3Table
 Key: HDDS-1482
 URL: https://issues.apache.org/jira/browse/HDDS-1482
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


HDDS-864 added the strongly typed codec implementation for the tables of 
OmMetadataManager.

The tables added as part of the S3 implementation are not using it yet; this 
Jira addresses that.
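
To sketch the pattern (the interface and method names below are recalled from 
HDDS-864 and may not match the tree exactly): a strongly typed table pairs each 
value type with a codec, so callers never touch raw byte arrays.
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

/** Sketch of the codec contract introduced by HDDS-864 (names approximate). */
interface Codec<T> {
  byte[] toPersistedFormat(T object) throws IOException;
  T fromPersistedFormat(byte[] rawData) throws IOException;
}

/** Example codec that an S3 table mapping strings to strings could register. */
final class StringCodec implements Codec<String> {
  @Override
  public byte[] toPersistedFormat(String object) {
    return object.getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public String fromPersistedFormat(byte[] rawData) {
    return new String(rawData, StandardCharsets.UTF_8);
  }
}
{code}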



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1482:
-
Affects Version/s: 0.4.0

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1482 started by Bharat Viswanadham.

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet; this 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13933) [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification problems for "localhost"

2019-05-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDFS-13933.
---
Resolution: Won't Fix

> [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification 
> problems for "localhost"
> --
>
> Key: HDFS-13933
> URL: https://issues.apache.org/jira/browse/HDFS-13933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Priority: Minor
>
> Tests with issues:
> * TestHttpFSFWithSWebhdfsFileSystem
> * TestWebHdfsTokens
> * TestSWebHdfsFileContextMainOperations
> Possibly others. Failure looks like 
> {noformat}
> java.io.IOException: localhost:50260: HTTPS hostname wrong:  should be 
> 
> {noformat}
> These tests set up a trust store and use HTTPS connections, and with Java 11 
> the client validation of the server name in the generated self-signed 
> certificate is failing. Exceptions originate in the JRE's HTTP client 
> library. How everything hooks together uses static initializers, static 
> methods, JUnit MethodRules... There's a lot to unpack, not sure how to fix. 
> This is Java 11+28



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13189) Standby NameNode should roll active edit log when checkpointing

2019-05-01 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-13189.
-
Resolution: Duplicate

> Standby NameNode should roll active edit log when checkpointing
> ---
>
> Key: HDFS-13189
> URL: https://issues.apache.org/jira/browse/HDFS-13189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chao Sun
>Priority: Minor
>
> When the SBN is doing checkpointing, it will hold the {{cpLock}}. In the 
> current implementation of the edit log tailer thread, it will first check and 
> roll the active edit log, and then tail and apply edits. In the case of 
> checkpointing, it will be blocked on the {{cpLock}} and will not roll the 
> edit log.
> It seems there is no dependency between the edit log roll and tailing edits, 
> so a better approach may be to do these in separate threads. This will be 
> helpful for people who use the observer feature without in-progress edit log 
> tailing. 
> An alternative is to configure 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} and 
> {{dfs.namenode.edit.log.autoroll.check.interval.ms}} to let the ANN roll its 
> own log more frequently in case the SBN is stuck on the lock.
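
A minimal sketch of the separate-threads idea from the description (names are 
hypothetical; the real logic lives in the EditLogTailer thread): schedule the 
roll on its own executor so a long checkpoint holding the lock cannot block it.
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class EditLogRollerSketch {
  private final ScheduledExecutorService roller =
      Executors.newSingleThreadScheduledExecutor();

  /** Roll the active NN's edit log independently of tailing/checkpointing. */
  void start(Runnable rollActiveEditLog, long intervalMs) {
    roller.scheduleAtFixedRate(
        rollActiveEditLog, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }

  void stop() {
    roller.shutdownNow();
  }
}
{code}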



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13189) Standby NameNode should roll active edit log when checkpointing

2019-05-01 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831131#comment-16831131
 ] 

Chao Sun commented on HDFS-13189:
-

[~xuzq_zander] this JIRA is duplicated by HDFS-14349. You are welcome to track 
the progress there.

> Standby NameNode should roll active edit log when checkpointing
> ---
>
> Key: HDFS-13189
> URL: https://issues.apache.org/jira/browse/HDFS-13189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chao Sun
>Priority: Minor
>
> When the SBN is doing checkpointing, it will hold the {{cpLock}}. In the 
> current implementation of the edit log tailer thread, it will first check and 
> roll the active edit log, and then tail and apply edits. In the case of 
> checkpointing, it will be blocked on the {{cpLock}} and will not roll the 
> edit log.
> It seems there is no dependency between the edit log roll and tailing edits, 
> so a better approach may be to do these in separate threads. This will be 
> helpful for people who use the observer feature without in-progress edit log 
> tailing. 
> An alternative is to configure 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} and 
> {{dfs.namenode.edit.log.autoroll.check.interval.ms}} to let the ANN roll its 
> own log more frequently in case the SBN is stuck on the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13933) [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification problems for "localhost"

2019-05-01 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831129#comment-16831129
 ] 

Siyao Meng commented on HDFS-13933:
---

I've confirmed that the exceptions go away with OpenJDK 11.0.3u7 on Ubuntu 
19.04. It does seem it was a bug in OpenJDK 11. Thanks for digging into this! 
[~knanasi]
{code}
[INFO] Running org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.428 
s - in org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
{code}
{code}
[INFO] Running org.apache.hadoop.crypto.key.kms.server.TestKMS
[INFO] Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
134.355s - in org.apache.hadoop.crypto.key.kms.server.TestKMS
{code}

> [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification 
> problems for "localhost"
> --
>
> Key: HDFS-13933
> URL: https://issues.apache.org/jira/browse/HDFS-13933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Priority: Minor
>
> Tests with issues:
> * TestHttpFSFWithSWebhdfsFileSystem
> * TestWebHdfsTokens
> * TestSWebHdfsFileContextMainOperations
> Possibly others. Failure looks like 
> {noformat}
> java.io.IOException: localhost:50260: HTTPS hostname wrong:  should be 
> 
> {noformat}
> These tests set up a trust store and use HTTPS connections, and with Java 11 
> the client validation of the server name in the generated self-signed 
> certificate is failing. Exceptions originate in the JRE's HTTP client 
> library. How everything hooks together uses static initializers, static 
> methods, JUnit MethodRules... There's a lot to unpack, not sure how to fix. 
> This is Java 11+28



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13995) RBF: Security documentation

2019-05-01 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831122#comment-16831122
 ] 

CR Hota commented on HDFS-13995:


[~elgoiri] Thanks for the review.

[~brahmareddy] [~ajisakaa] [~surendrasingh]

Could you also help take a look and share your thoughts?

> RBF: Security documentation
> ---
>
> Key: HDFS-13995
> URL: https://issues.apache.org/jira/browse/HDFS-13995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13995-HDFS-13891.001.patch, 
> HDFS-13995-HDFS-13891.002.patch, HDFS-13995-HDFS-13891.003.patch
>
>
> Documentation for users under the section relating to security needs to be 
> updated once the security work is complete. 
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html#Security]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-01 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14460:
---
Attachment: HDFS-14460.004.patch

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of the HTTP address irrespective of 
> the configured policy. It should instead look at the configured policy and 
> return the appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831121#comment-16831121
 ] 

Íñigo Goiri commented on HDFS-14440:


One minor style comment: align the javadoc (not sure why checkstyle lets this 
one go) and add a line break before it.

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch
>
>
> In case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the common scenario where the file needs to be created, each subcluster is 
> currently checked sequentially; this can be done concurrently to save time.
> In the other case, where the file is found and its last block is null, we 
> need to call getFileInfo on all the locations to find where the file exists. 
> This can also be avoided by using a ConcurrentCall, since we already have the 
> remoteLocation for which getBlockLocation returned a non-null entry.
>  
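
A hedged, standalone sketch of the concurrent variant (RBF's RouterRpcClient 
has its own concurrent invocation machinery; this only shows the shape, with 
hypothetical names):
{code:java}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.function.Function;
import java.util.stream.Collectors;

final class ConcurrentLookupSketch {
  /** Fire the same call at every subcluster location in parallel. */
  static <L, R> List<Future<R>> invokeConcurrent(
      ExecutorService pool, List<L> locations, Function<L, R> call) {
    return locations.stream()
        .map(loc -> pool.submit(() -> call.apply(loc))) // one task per location
        .collect(Collectors.toList());
  }
}
{code}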



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13995) RBF: Security documentation

2019-05-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831114#comment-16831114
 ] 

Íñigo Goiri commented on HDFS-13995:


Thanks [~crh] for the changes.
+1 from my side.
Let's hold a little to see if others have more feedback.

> RBF: Security documentation
> ---
>
> Key: HDFS-13995
> URL: https://issues.apache.org/jira/browse/HDFS-13995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13995-HDFS-13891.001.patch, 
> HDFS-13995-HDFS-13891.002.patch, HDFS-13995-HDFS-13891.003.patch
>
>
> Documentation for users under the section relating to security needs to be 
> updated once the security work is complete. 
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html#Security]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1430) NPE in secure ozone if KMS uri is not defined

2019-05-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1430:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NPE in secure ozone if KMS uri is not defined
> --
>
> Key: HDDS-1430
> URL: https://issues.apache.org/jira/browse/HDDS-1430
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> OzoneKMSUtil.getKeyProvider throws NPE if KMS uri is not defined. 
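
A minimal sketch of the guard the fix implies (the signature below is 
illustrative, not the actual OzoneKMSUtil API):
{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

final class KeyProviderGuardSketch {
  /** Return null instead of dereferencing a missing KMS provider URI. */
  static KeyProvider getKeyProvider(Configuration conf, URI kmsUri)
      throws IOException {
    if (kmsUri == null) {
      return null; // no KMS configured: caller must handle the null
    }
    return KeyProviderFactory.get(kmsUri, conf);
  }
}
{code}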



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1478) Provide k8s resources files for prometheus and performance tests

2019-05-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1478?focusedWorklogId=235796=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-235796
 ]

ASF GitHub Bot logged work on HDDS-1478:


Author: ASF GitHub Bot
Created on: 01/May/19 13:30
Start Date: 01/May/19 13:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #783: HDDS-1478. 
Provide k8s resources files for prometheus and performance tests
URL: https://github.com/apache/hadoop/pull/783#issuecomment-488282484
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1130 | trunk passed |
   | +1 | compile | 117 | trunk passed |
   | +1 | mvnsite | 91 | trunk passed |
   | +1 | shadedclient | 739 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 64 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 114 | the patch passed |
   | +1 | javac | 114 | the patch passed |
   | +1 | hadolint | 2 | There were no new hadolint issues. |
   | +1 | mvnsite | 57 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 14 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 818 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 55 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 25 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3561 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-783/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/783 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  hadolint  shellcheck  shelldocs  yamllint  |
   | uname | Linux f85d3129398c 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4877f0a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-783/3/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-783/3/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-783/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 235796)
Time Spent: 40m  (was: 0.5h)

> Provide k8s resources files for prometheus and performance tests
> 
>
> Key: HDDS-1478
> URL: https://issues.apache.org/jira/browse/HDDS-1478
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Similar to HDDS-1412, we can further improve the available k8s resources by 
> providing example resources to:

[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830895#comment-16830895
 ] 

Hadoop QA commented on HDFS-14460:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 125 unchanged - 0 fixed = 128 total (was 125) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14460 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967548/HDFS-14460.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1f819f6e04be 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4877f0a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26737/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Created] (HDDS-1481) Cleanup BasicOzoneFileSystem#mkdir

2019-05-01 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1481:
-

 Summary: Cleanup BasicOzoneFileSystem#mkdir
 Key: HDDS-1481
 URL: https://issues.apache.org/jira/browse/HDDS-1481
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently BasicOzoneFileSystem#mkdir does not have the optimizations made in 
HDDS-1300. The changes for this function were missed in HDDS-1460.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13995) RBF: Security documentation

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830883#comment-16830883
 ] 

Hadoop QA commented on HDFS-13995:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
40m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967551/HDFS-13995-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux e4d8d40fe8b3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / aeb3b61 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26738/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Security documentation
> ---
>
> Key: HDFS-13995
> URL: https://issues.apache.org/jira/browse/HDFS-13995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13995-HDFS-13891.001.patch, 
> HDFS-13995-HDFS-13891.002.patch, HDFS-13995-HDFS-13891.003.patch
>
>
> Documentation for users under the section relating to security needs to be 
> updated once the security work is complete. 
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html#Security]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org