[jira] [Updated] (HDDS-2185) createmrenv failure not reflected in acceptance test result

2019-09-26 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2185:
---
Status: Patch Available  (was: In Progress)

> createmrenv failure not reflected in acceptance test result
> ---
>
> Key: HDDS-2185
> URL: https://issues.apache.org/jira/browse/HDDS-2185
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Some of the MR tests fail, but this is not reflected in the test report, 
> which shows all green.
> {noformat:title=https://github.com/elek/ozone-ci/blob/679228c146628cd4d1a416e1ffc9c513d19fb43d/pr/pr-hdds-2179-9bnxk/acceptance/output.log#L718-L730}
> ==============================================================================
> hadoop31-createmrenv :: Create directories required for MR test
> ==============================================================================
> Create test volume, bucket and key                                    | PASS |
> ------------------------------------------------------------------------------
> Create user dir for hadoop                                            | FAIL |
> 1 != 0
> ------------------------------------------------------------------------------
> hadoop31-createmrenv :: Create directories required for MR test      | FAIL |
> 2 critical tests, 1 passed, 1 failed
> 2 tests total, 1 passed, 1 failed
> ==============================================================================
> Output:  /tmp/smoketest/hadoop31/result/robot-hadoop31-hadoop31-createmrenv-scm.xml
> {noformat}






[jira] [Work logged] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2187?focusedWorklogId=319370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319370
 ]

ASF GitHub Bot logged work on HDDS-2187:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:48
Start Date: 27/Sep/19 06:48
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1537: HDDS-2187. 
ozone-mr test fails with No FileSystem for scheme o3fs
URL: https://github.com/apache/hadoop/pull/1537#issuecomment-535813139
 
 
   /label ozone
 



Issue Time Tracking
---

Worklog Id: (was: 319370)
Time Spent: 20m  (was: 10m)

> ozone-mr test fails with No FileSystem for scheme "o3fs"
> 
>
> Key: HDDS-2187
> URL: https://issues.apache.org/jira/browse/HDDS-2187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-2101 changed how the Ozone filesystem provider is configured.  {{ozone-mr}} 
> tests [started 
> failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263],
>  but it [wasn't 
> noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result]
>  due to HDDS-2185.
> {code}
> Running command 'ozone fs -mkdir /user'
> ${output} = mkdir: No FileSystem for scheme "o3fs"
> {code}






[jira] [Work logged] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2187?focusedWorklogId=319369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319369
 ]

ASF GitHub Bot logged work on HDDS-2187:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:47
Start Date: 27/Sep/19 06:47
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1537: HDDS-2187. 
ozone-mr test fails with No FileSystem for scheme o3fs
URL: https://github.com/apache/hadoop/pull/1537
 
 
   ## What changes were proposed in this pull request?
   
   MR acceptance tests are [failing](https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/trunk/trunk-nightly-20190925-htsvn/acceptance/output.log#L724) at the `Create user dir for hadoop` step of the `Create directories required for MR test` suite.  This happens because [HDDS-2101](https://issues.apache.org/jira/browse/HDDS-2101) removed the `fs.o3fs.impl` definition from `core-site.xml`; the filesystem provider is now defined in the META-INF of `hadoop-ozone-filesystem-lib-*.jar`.  The problem is that the filesystem jars are not on the classpath for the `ozone fs` commands used to create the hadoop user dir in the `ozone-mr` env, so the explicit config is still needed there.  This change adds the config back, but only for scm, not for Hadoop.  It also applies both changes (this fix and the removal per HDDS-2101) to the `ozonesecure-mr` env.  It is based on top of #1533 (which should be merged first).
   
   https://issues.apache.org/jira/browse/HDDS-2187
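   For reference, a minimal sketch (not taken from this PR) of what the explicit config amounts to programmatically; the property key `fs.o3fs.impl` is named above, while the implementation class used below is an assumption based on the `o3fs` scheme:
   
   ```java
   // Hedged sketch: registering o3fs explicitly when the META-INF services
   // entry of hadoop-ozone-filesystem-lib-*.jar is not on the classpath.
   // The OzoneFileSystem class name is an assumption, not from this PR.
   import org.apache.hadoop.conf.Configuration;
   
   public class ExplicitO3fsConfig {
     public static Configuration withO3fs() {
       Configuration conf = new Configuration();
       // Programmatic equivalent of the core-site.xml <property> entry
       // that HDDS-2101 removed.
       conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
       return conf;
     }
   }
   ```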
   
   ## How was this patch tested?
   
   Ran all affected acceptance tests locally.
   
   ```
   0 datanode is up and healthy (until now)
   3 datanodes are up and registered to the scm
   ==============================================================================
   hadoop32-createmrenv :: Create directories required for MR test
   ==============================================================================
   Create test volume, bucket and key                                    | PASS |
   ------------------------------------------------------------------------------
   Create user dir for hadoop                                            | PASS |
   ------------------------------------------------------------------------------
   hadoop32-createmrenv :: Create directories required for MR test      | PASS |
   2 critical tests, 2 passed, 0 failed
   2 tests total, 2 passed, 0 failed
   ==============================================================================
   Output:  /tmp/smoketest/hadoop32/result/robot-hadoop32-hadoop32-createmrenv-scm.xml
   ==============================================================================
   hadoop32-hadoopo3fs :: Test ozone fs with hadoopfs
   ==============================================================================
   Test hadoop dfs                                                       | PASS |
   ------------------------------------------------------------------------------
   hadoop32-hadoopo3fs :: Test ozone fs with hadoopfs                    | PASS |
   1 critical test, 1 passed, 0 failed
   1 test total, 1 passed, 0 failed
   ==============================================================================
   Output:  /tmp/smoketest/hadoop32/result/robot-hadoop32-hadoop32-hadoopo3fs-rm.xml
   ==============================================================================
   hadoop32-mapreduce :: Execute MR jobs
   ==============================================================================
   Execute PI calculation                                                | PASS |
   ------------------------------------------------------------------------------
   Execute WordCount                                                     | PASS |
   ------------------------------------------------------------------------------
   hadoop32-mapreduce :: Execute MR jobs                                 | PASS |
   2 critical tests, 2 passed, 0 failed
   2 tests total, 2 passed, 0 failed
   ==============================================================================
   Output:  /tmp/smoketest/hadoop32/result/robot-hadoop32-hadoop32-mapreduce-rm.xml
   Robot framework is not installed, the reports can be generated (sudo pip install robotframework).
   0 datanode is up and healthy (until now)
   3 datanodes are up and registered to the scm
   ==============================================================================
   hadoop31-createmrenv :: Create directories required for MR test
   ==============================================================================
   Create test volume, bucket and key                                    | PASS |
   ------------------------------------------------------------------------------
   Create user dir for hado

[jira] [Updated] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2187:
-
Labels: pull-request-available  (was: )

> ozone-mr test fails with No FileSystem for scheme "o3fs"
> 
>
> Key: HDDS-2187
> URL: https://issues.apache.org/jira/browse/HDDS-2187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-2101 changed how the Ozone filesystem provider is configured.  {{ozone-mr}} 
> tests [started 
> failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263],
>  but it [wasn't 
> noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result]
>  due to HDDS-2185.
> {code}
> Running command 'ozone fs -mkdir /user'
> ${output} = mkdir: No FileSystem for scheme "o3fs"
> {code}






[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319367&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319367
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:41
Start Date: 27/Sep/19 06:41
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1469: HDDS-2034. Async 
RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-535777599
 
 
   @anuengineer and @xiaoyuyao, should I provide a new patch on trunk now, or 
wait until the whole communication channel design comes out next week? 
 



Issue Time Tracking
---

Worklog Id: (was: 319367)
Time Spent: 7h 20m  (was: 7h 10m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and instead send pipeline creation 
> and destroy actions to each datanode through heartbeat commands.






[jira] [Created] (HDDS-2195) Apply spotbugs check to test code

2019-09-26 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2195:
--

 Summary: Apply spotbugs check to test code
 Key: HDDS-2195
 URL: https://issues.apache.org/jira/browse/HDDS-2195
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The goal of this task is to [enable Spotbugs to run on test 
code|https://spotbugs.github.io/spotbugs-maven-plugin/spotbugs-mojo.html#includeTests],
 and fix all issues it reports (both to improve code and to avoid breaking CI).






[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319361&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319361
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:13
Start Date: 27/Sep/19 06:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-535803107
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 9 | https://github.com/apache/hadoop/pull/1536 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1536 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 319361)
Time Spent: 40m  (was: 0.5h)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean this up as well.






[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939160#comment-16939160
 ] 

Hadoop QA commented on HDDS-2169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} | {color:red} https://github.com/apache/hadoop/pull/1517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
| JIRA Issue | HDDS-2169 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o2169_20190923.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, a protobuf 
> object containing the data is encoded, and the resulting protobuf is then 
> converted to a byteString, which internally copies the buffer embedded 
> inside the protobuf yet again so that it can be submitted to the Ratis 
> client. Likewise, while building up the appendRequestProto for the 
> appendRequest, the data may be copied once more. The idea here is to let 
> the client pass the raw data (stateMachine data) separately to the Ratis 
> client, without the copying overhead.
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
>     ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>       .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>       .startActive(true)) {
>     ContainerCommandRequestProto finalPayload =
>         ContainerCommandRequestProto.newBuilder(request)
>             .setTraceID(TracingUtil.exportCurrentSpan())
>             .build();
>     boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>     // finalPayload already has the byteString data embedded;
>     // toByteString() involves yet another copy.
>     ByteString byteString = finalPayload.toByteString();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>           sanitizeForDebug(finalPayload));
>     }
>     return isReadOnlyRequest ?
>         getClient().sendReadOnlyAsync(() -> byteString) :
>         getClient().sendAsync(() -> byteString);
>   }
> }
> {code}
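One known protobuf mechanism for avoiding such a copy is wrapping an existing buffer instead of copying it; the sketch below only illustrates that mechanism and does not claim it is what the HDDS-2169 patch does:

{code:java}
// Hedged sketch: UnsafeByteOperations.unsafeWrap shares the underlying
// buffer instead of copying it, so the caller must not mutate the buffer
// afterwards. Whether the actual patch uses this API is not stated here.
import java.nio.ByteBuffer;
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

public class ZeroCopyWrapExample {
  public static ByteString wrap(ByteBuffer data) {
    // No copy: the returned ByteString is a view over 'data'.
    return UnsafeByteOperations.unsafeWrap(data);
  }
}
{code}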






[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=319360&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319360
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:13
Start Date: 27/Sep/19 06:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1517: HDDS-2169
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-535803080
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/1517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | JIRA Issue | HDDS-2169 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 319360)
Time Spent: 40m  (was: 0.5h)

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o2169_20190923.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, a protobuf 
> object containing the data is encoded, and the resulting protobuf is then 
> converted to a byteString, which internally copies the buffer embedded 
> inside the protobuf yet again so that it can be submitted to the Ratis 
> client. Likewise, while building up the appendRequestProto for the 
> appendRequest, the data may be copied once more. The idea here is to let 
> the client pass the raw data (stateMachine data) separately to the Ratis 
> client, without the copying overhead.
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
>     ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>       .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>       .startActive(true)) {
>     ContainerCommandRequestProto finalPayload =
>         ContainerCommandRequestProto.newBuilder(request)
>             .setTraceID(TracingUtil.exportCurrentSpan())
>             .build();
>     boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>     // finalPayload already has the byteString data embedded;
>     // toByteString() involves yet another copy.
>     ByteString byteString = finalPayload.toByteString();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>           sanitizeForDebug(finalPayload));
>     }
>     return isReadOnlyRequest ?
>         getClient().sendReadOnlyAsync(() -> byteString) :
>         getClient().sendAsync(() -> byteString);
>   }
> }
> {code}






[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939157#comment-16939157
 ] 

Hudson commented on HDDS-2149:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17402 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17402/])
HDDS-2149. Replace findbugs with spotbugs (aengineer: rev 
9bf7a6e5b26a361fd08552793852208d817fdfbd)
* (edit) hadoop-ozone/common/pom.xml
* (edit) hadoop-ozone/insight/pom.xml
* (edit) hadoop-ozone/csi/pom.xml
* (edit) hadoop-ozone/ozonefs-lib-current/pom.xml
* (edit) hadoop-ozone/tools/pom.xml
* (edit) hadoop-ozone/upgrade/pom.xml
* (edit) hadoop-ozone/ozone-manager/pom.xml
* (edit) hadoop-ozone/ozonefs/pom.xml
* (edit) hadoop-ozone/recon/pom.xml
* (edit) hadoop-hdds/container-service/pom.xml
* (edit) hadoop-ozone/s3gateway/pom.xml
* (edit) hadoop-hdds/server-scm/pom.xml
* (edit) hadoop-hdds/common/pom.xml
* (edit) pom.ozone.xml
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-ozone/dev-support/checks/findbugs.sh
* (edit) hadoop-ozone/ozonefs-lib-legacy/pom.xml
* (edit) hadoop-ozone/pom.xml


> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Assigned] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-09-26 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2187:
--

Assignee: Attila Doroszlai

> ozone-mr test fails with No FileSystem for scheme "o3fs"
> 
>
> Key: HDDS-2187
> URL: https://issues.apache.org/jira/browse/HDDS-2187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> HDDS-2101 changed how the Ozone filesystem provider is configured.  {{ozone-mr}} 
> tests [started 
> failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263],
>  but it [wasn't 
> noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result]
>  due to HDDS-2185.
> {code}
> Running command 'ozone fs -mkdir /user'
> ${output} = mkdir: No FileSystem for scheme "o3fs"
> {code}






[jira] [Work started] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"

2019-09-26 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2187 started by Attila Doroszlai.
--
> ozone-mr test fails with No FileSystem for scheme "o3fs"
> 
>
> Key: HDDS-2187
> URL: https://issues.apache.org/jira/browse/HDDS-2187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> HDDS-2101 changed how the Ozone filesystem provider is configured.  {{ozone-mr}} 
> tests [started 
> failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263],
>  but it [wasn't 
> noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result]
>  due to HDDS-2185.
> {code}
> Running command 'ozone fs -mkdir /user'
> ${output} = mkdir: No FileSystem for scheme "o3fs"
> {code}






[jira] [Work logged] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?focusedWorklogId=319359&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319359
 ]

ASF GitHub Bot logged work on HDDS-2149:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:02
Start Date: 27/Sep/19 06:02
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1513: HDDS-2149. Replace 
FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513#issuecomment-535800110
 
 
   Thanks @elek for the review.  Thank you @anuengineer for reviewing, testing 
and committing it.
 



Issue Time Tracking
---

Worklog Id: (was: 319359)
Time Spent: 4h 10m  (was: 4h)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319357
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 06:00
Start Date: 27/Sep/19 06:00
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-535799724
 
 
   @swagle / @vivekratnavel 
 



Issue Time Tracking
---

Worklog Id: (was: 319357)
Time Spent: 0.5h  (was: 20m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean this up as well.






[jira] [Updated] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2164:
-
Labels: pull-request-available  (was: )

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>
> {{om.db.checkpoints}} is filling up fast; we should clean this up as well.






[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319356&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319356
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:59
Start Date: 27/Sep/19 05:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-535799489
 
 
   /label ozone
 



Issue Time Tracking
---

Worklog Id: (was: 319356)
Time Spent: 20m  (was: 10m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean this up as well.






[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319355
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:59
Start Date: 27/Sep/19 05:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1536: HDDS-2164 
: om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536
 
 
   Fixed the issue where the checkpoint clean-up does not happen.
   Changed the two-step process in the OM DB checkpoint servlet (creating a tar file for the OM DB, then writing it to the output stream) to a single step (writing the compressed tar directly to the output stream).
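   As an illustration of the single-step idea, a minimal sketch (not the actual OMDBCheckpointServlet code) that streams a directory as a tar.gz directly to a caller-supplied output stream, leaving no intermediate file behind; it assumes commons-compress is available:
   
   ```java
   // Hedged sketch: tar the checkpoint directory straight into the caller's
   // output stream, so nothing accumulates under om.db.checkpoints.
   import java.io.IOException;
   import java.io.OutputStream;
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.util.stream.Stream;
   import java.util.zip.GZIPOutputStream;
   import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
   import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
   
   public class StreamingTarSketch {
     public static void writeDirAsTarGz(Path dir, OutputStream out) throws IOException {
       try (TarArchiveOutputStream tar =
                new TarArchiveOutputStream(new GZIPOutputStream(out));
            Stream<Path> files = Files.list(dir)) {
         for (Path file : (Iterable<Path>) files::iterator) {
           tar.putArchiveEntry(
               new TarArchiveEntry(file.toFile(), file.getFileName().toString()));
           Files.copy(file, tar);  // bytes go straight to the caller's stream
           tar.closeArchiveEntry();
         }
         tar.finish();
       }
     }
   }
   ```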
   
 



Issue Time Tracking
---

Worklog Id: (was: 319355)
Remaining Estimate: 0h
Time Spent: 10m

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean this up as well.






[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939152#comment-16939152
 ] 

Anu Engineer commented on HDDS-2149:


Thank you for the contribution. I have committed this to the trunk. [~elek] 
Thank you for the review.

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Updated] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2149:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Work logged] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?focusedWorklogId=319353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319353
 ]

ASF GitHub Bot logged work on HDDS-2149:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:57
Start Date: 27/Sep/19 05:57
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1513: HDDS-2149. 
Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 319353)
Time Spent: 4h  (was: 3h 50m)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Work logged] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?focusedWorklogId=319352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319352
 ]

ASF GitHub Bot logged work on HDDS-2149:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:57
Start Date: 27/Sep/19 05:57
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1513: HDDS-2149. 
Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513#issuecomment-535799019
 
 
   Thank you for the contribution. I have committed this to the trunk. @elek 
Thank you for the review.
 



Issue Time Tracking
---

Worklog Id: (was: 319352)
Time Spent: 3h 50m  (was: 3h 40m)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]






[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=319351&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319351
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:48
Start Date: 27/Sep/19 05:48
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535796885
 
 
   Thanks @anuengineer for reviewing and committing it.
 



Issue Time Tracking
---

Worklog Id: (was: 319351)
Time Spent: 1h 10m  (was: 1h)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}
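A minimal sketch of the portable pattern this implies: treat both subclasses the same, since {{FileNotFoundException}} and {{NoSuchFileException}} are siblings under {{IOException}}. This illustrates the JDK behavior, not the exact ConfigFileGenerator change:

{code:java}
// Hedged sketch: newer JDKs surface a missing file as NoSuchFileException,
// older code paths as FileNotFoundException; catching both keeps the
// behavior identical across Java versions.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;

public class MissingFileExample {
  public static boolean canOpen(String path) throws IOException {
    try (InputStream in = Files.newInputStream(Paths.get(path))) {
      return true;
    } catch (FileNotFoundException | NoSuchFileException e) {
      // Same handling regardless of which subclass the JDK throws.
      return false;
    }
  }
}
{code}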






[jira] [Updated] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2179:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}






[jira] [Commented] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939143#comment-16939143
 ] 

Anu Engineer commented on HDDS-2179:


I have committed this patch to the trunk. Thank you for the contribution.

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}






[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939138#comment-16939138
 ] 

Hadoop QA commented on HDDS-2169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} | {color:red} https://github.com/apache/hadoop/pull/1517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
| JIRA Issue | HDDS-2169 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/2/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o2169_20190923.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, a protobuf 
> object containing the data is encoded, and the resulting protobuf is then 
> converted to a byteString, which internally copies the buffer embedded 
> inside the protobuf yet again so that it can be submitted to the Ratis 
> client. Likewise, while building up the appendRequestProto for the 
> appendRequest, the data may be copied once more. The idea here is to let 
> the client pass the raw data (stateMachine data) separately to the Ratis 
> client, without the copying overhead.
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
>     ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>       .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>       .startActive(true)) {
>     ContainerCommandRequestProto finalPayload =
>         ContainerCommandRequestProto.newBuilder(request)
>             .setTraceID(TracingUtil.exportCurrentSpan())
>             .build();
>     boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>     // finalPayload already has the byteString data embedded;
>     // toByteString() involves yet another copy.
>     ByteString byteString = finalPayload.toByteString();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>           sanitizeForDebug(finalPayload));
>     }
>     return isReadOnlyRequest ?
>         getClient().sendReadOnlyAsync(() -> byteString) :
>         getClient().sendAsync(() -> byteString);
>   }
> }
> {code}






[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=319349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319349
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:30
Start Date: 27/Sep/19 05:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1517: HDDS-2169
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-535792469
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/1517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | JIRA Issue | HDDS-2169 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 319349)
Time Spent: 0.5h  (was: 20m)

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o2169_20190923.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, a protobuf 
> object containing the data is encoded, and the resulting protobuf is then 
> converted to a byteString, which internally copies the buffer embedded 
> inside the protobuf yet again so that it can be submitted to the Ratis 
> client. Likewise, while building up the appendRequestProto for the 
> appendRequest, the data may be copied once more. The idea here is to let 
> the client pass the raw data (stateMachine data) separately to the Ratis 
> client, without the copying overhead.
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
>     ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>       .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>       .startActive(true)) {
>     ContainerCommandRequestProto finalPayload =
>         ContainerCommandRequestProto.newBuilder(request)
>             .setTraceID(TracingUtil.exportCurrentSpan())
>             .build();
>     boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>     // finalPayload already has the byteString data embedded;
>     // toByteString() involves yet another copy.
>     ByteString byteString = finalPayload.toByteString();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>           sanitizeForDebug(finalPayload));
>     }
>     return isReadOnlyRequest ?
>         getClient().sendReadOnlyAsync(() -> byteString) :
>         getClient().sendAsync(() -> byteString);
>   }
> }
> {code}






[jira] [Commented] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939132#comment-16939132
 ] 

Hudson commented on HDDS-2179:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17401/])
HDDS-2179. ConfigFileGenerator fails with Java 10 or newer (aengineer: rev 
0371e953ac51d991f2bfed9ffd1724ff80733752)
* (edit) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java


> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that newer Java versions (presumably 9+, though I only 
> tried 10+) throw a different {{IOException}} subclass: {{NoSuchFileException}} 
> instead of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}
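A minimal standalone reproduction of just the exception difference (the file
name below is illustrative): Files.newInputStream reports a missing file as
NoSuchFileException, so code that catches only FileNotFoundException misses
it, while catching the common parent IOException covers both.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;

public class MissingFileSketch {
  public static void main(String[] args) {
    try {
      Files.newInputStream(Paths.get("ozone-default-generated.xml")).close();
    } catch (NoSuchFileException e) {
      // NIO-style report of a missing file.
      System.out.println("NIO style: " + e.getClass().getSimpleName());
    } catch (IOException e) {
      // Covers FileNotFoundException and any other I/O failure.
      System.out.println("Other I/O failure: " + e);
    }
  }
}
{code}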



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=319348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319348
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:16
Start Date: 27/Sep/19 05:16
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319348)
Time Spent: 1h  (was: 50m)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that newer Java versions (presumably 9+, though I only 
> tried 10+) throw a different {{IOException}} subclass: {{NoSuchFileException}} 
> instead of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=319347&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319347
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 27/Sep/19 05:16
Start Date: 27/Sep/19 05:16
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535789540
 
 
   The failures are not related to this patch. I have committed this patch to 
the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319347)
Time Spent: 50m  (was: 40m)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that newer Java versions (presumably 9+, though I only 
> tried 10+) throw a different {{IOException}} subclass: {{NoSuchFileException}} 
> instead of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939119#comment-16939119
 ] 

Anu Engineer commented on HDDS-2193:


Thank you for the contribution. I have committed this patch to the trunk branch.

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.
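For illustration only, a sketch of how such counters are typically exposed
through Hadoop's metrics2 annotations; the class and metric names below are
invented, not the ones from the patch.

{code:java}
// Hedged sketch using Hadoop's metrics2 API; all names are illustrative.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM container metrics sketch", context = "ozone")
public final class ExampleScmContainerMetrics {
  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;
  @Metric private MutableCounterLong numListContainerOps;

  public static ExampleScmContainerMetrics register() {
    return DefaultMetricsSystem.instance().register(
        "ExampleScmContainerMetrics", "SCM container metrics sketch",
        new ExampleScmContainerMetrics());
  }

  // Call sites would bump the counter on each successful create.
  public void incNumSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }
}
{code}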



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2193.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319342&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319342
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:49
Start Date: 27/Sep/19 04:49
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1534: HDDS-2193. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319342)
Time Spent: 1h  (was: 50m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319341
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:49
Start Date: 27/Sep/19 04:49
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1534: HDDS-2193. Adding 
container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535784216
 
 
   Thank you for the contribution. I have committed this patch to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319341)
Time Spent: 50m  (was: 40m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939115#comment-16939115
 ] 

Hudson commented on HDDS-2174:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17399/])
HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is (aengineer: 
rev c55ac6a1c7d1dc65a0d2e735b315bbf6898f6ff1)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java


> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> it to the deletedTable.
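A toy sketch of the idea, with invented metadata key names: stripping the
encryption material before the record moves means the copy in the deleted
table can no longer decrypt the data (crypto-shredding).

{code:java}
// Toy sketch only; "gdprSecret"/"gdprEnabled" are illustrative names,
// not necessarily the real metadata keys.
import java.util.HashMap;
import java.util.Map;

public class GdprDeleteSketch {
  public static void main(String[] args) {
    Map<String, String> keyMetadata = new HashMap<>();
    keyMetadata.put("gdprEnabled", "true");
    keyMetadata.put("gdprSecret", "base64-key-material");

    // On deleteKey: drop encryption details before the record moves
    // to the deleted table.
    keyMetadata.remove("gdprSecret");
    keyMetadata.remove("gdprEnabled");
    System.out.println(keyMetadata); // {}
  }
}
{code}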



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939112#comment-16939112
 ] 

Dinesh Chitlangia commented on HDDS-2174:
-

Thanks [~bharat] for review, [~aengineer] for review/commit.

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> it to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=319337&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319337
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:33
Start Date: 27/Sep/19 04:33
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535781468
 
 
   Thank you @anuengineer  for review/commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319337)
Time Spent: 3h 10m  (was: 3h)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> it to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2174:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~bharat] Thanks for the reviews. [~dineshchitlangia] Thank you for the 
contribution. I have committed this patch to the trunk.

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> it to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=319336&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319336
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:26
Start Date: 27/Sep/19 04:26
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1519: HDDS-2174. Delete 
GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535780292
 
 
   Thank you for the contribution. I have committed this patch to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319336)
Time Spent: 3h  (was: 2h 50m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving 
> it to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=319335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319335
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:26
Start Date: 27/Sep/19 04:26
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319335)
Time Spent: 3h  (was: 2h 50m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> As advised by [~arp]  & [~aengineer], when a deleteKey command is executed, 
> delete the gdpr encryption key details from key metadata before moving it to 
> deletedTable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319334
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 04:08
Start Date: 27/Sep/19 04:08
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1469: HDDS-2034. Async 
RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-535777599
 
 
   @anuengineer and @xiaoyuyao, should I provide a new patch on trunk now, or 
wait until the whole communication channel design comes out next week? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319334)
Time Spent: 7h 10m  (was: 7h)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions to each datanode through heartbeat commands.
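A purely illustrative sketch of the piggybacking pattern described above;
none of these names are the actual Ozone protocol classes.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class HeartbeatCommandSketch {
  enum PipelineCmd { CREATE_PIPELINE, CLOSE_PIPELINE }

  private final Queue<PipelineCmd> pending = new ConcurrentLinkedQueue<>();

  // SCM side: enqueue instead of opening a gRPC channel to the datanode.
  void schedule(PipelineCmd cmd) { pending.add(cmd); }

  // Heartbeat handler: drain one queued command into the response.
  PipelineCmd onHeartbeat() { return pending.poll(); }

  public static void main(String[] args) {
    HeartbeatCommandSketch scm = new HeartbeatCommandSketch();
    scm.schedule(PipelineCmd.CREATE_PIPELINE);
    System.out.println("heartbeat carries: " + scm.onHeartbeat());
  }
}
{code}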



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-26 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939098#comment-16939098
 ] 

Tsz-wo Sze commented on HDDS-2169:
--

The failure does not seem related to the patch; see 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/1/artifact/out/branch-mvninstall-hadoop-hdds.txt

bq. [ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.2.0:protoc (compile-protoc) on project 
hadoop-hdds-common: org.apache.maven.plugin.MojoExecutionException: protoc 
version is 'libprotoc 3.7.1', expected version is '2.5.0' -> [Help 1]



> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o2169_20190923.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, while sending write requests from Ozone to Ratis, a protobuf 
> object containing the encoded data is built, and the resultant protobuf is 
> then converted to a ByteString, which internally copies the buffer embedded 
> inside the protobuf once more so that it can be submitted to the Ratis client. 
> Later, while building up the appendRequestProto for the appendRequest, the 
> data may be copied yet again. The idea here is to let the client pass the raw 
> (stateMachine) data separately to the Ratis client, avoiding the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> // finalPayload already has the byteString data embedded;
> // toByteString() below involves yet another copy.
> ByteString byteString = finalPayload.toByteString();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319331
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 03:42
Start Date: 27/Sep/19 03:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328903326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   here configuration = new OzoneConfiguration(conf);
   And later I am changing the configuration property values in this new object 
so that it will not affect the original configuration.
   
   I just checked with below:
   ```
   
   OzoneConfiguration configuration = new OzoneConfiguration();
   configuration.set(OZONE_METADATA_DIRS,
   folder.newFolder().getAbsolutePath());
   
   OzoneConfiguration configuration1 = new 
OzoneConfiguration(configuration);
   configuration1.set(OZONE_METADATA_DIRS, "bharat");
   
   System.out.println(configuration.get(OZONE_METADATA_DIRS));
   System.out.println(configuration1.get(OZONE_METADATA_DIRS));
   ```
   
/var/folders/g5/fk451xl14vdf891pq7b6m6v0gp/T/junit852875409842836/junit4506171024308775995
   bharat
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319331)
Time Spent: 5h 20m  (was: 5h 10m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  
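As a hedged sketch of the suffixed-key pattern (the key and id strings are
illustrative, and the real resolution logic in the patch may differ): look up
the most specific key first, then fall back to the base key.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SuffixedKeySketch {
  // Resolve base.serviceId.nodeId if present, else fall back to base.
  static String resolve(Configuration conf, String base,
      String serviceId, String nodeId) {
    String suffixed = base + "." + serviceId + "." + nodeId;
    String value = conf.get(suffixed);
    return value != null ? value : conf.get(base);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("ozone.om.kerberos.principal.service1.omNode1",
        "om/_HOST@EXAMPLE.COM");
    System.out.println(resolve(conf, "ozone.om.kerberos.principal",
        "service1", "omNode1")); // om/_HOST@EXAMPLE.COM
  }
}
{code}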



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-26 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939084#comment-16939084
 ] 

Fei Hui commented on HDFS-14849:


+1 from me.
This fix is similar to how the function countReplicasForStripedBlock is implemented:
{code}
  /**
   * For a striped block, it is possible it contains full number of internal
   * blocks (i.e., 9 by default), but with duplicated replicas of the same
   * internal block. E.g., for the following list of internal blocks
   * b0, b0, b1, b2, b3, b4, b5, b6, b7
   * we have 9 internal blocks but we actually miss b8.
   * We should use this method to detect the above scenario and schedule
   * necessary reconstruction.
   */
  private void countReplicasForStripedBlock(NumberReplicas counters,
  BlockInfoStriped block, Collection<DatanodeDescriptor> nodesCorrupt,
  boolean inStartupSafeMode) {
BitSet bitSet = new BitSet(block.getTotalBlockNum());
for (StorageAndBlockIndex si : block.getStorageAndIndexInfos()) {
  StoredReplicaState state = checkReplicaOnStorage(counters, block,
  si.getStorage(), nodesCorrupt, inStartupSafeMode);
  if (state == StoredReplicaState.LIVE) {
if (!bitSet.get(si.getBlockIndex())) {
  bitSet.set(si.getBlockIndex());
} else {
  counters.subtract(StoredReplicaState.LIVE, 1);
  counters.add(StoredReplicaState.REDUNDANT, 1);
}
  }
}
  }
{code}
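To make the duplicate counting concrete, a toy run with the indices from the
comment above (b0 reported twice, b8 absent):

{code:java}
// Toy illustration of the BitSet-based duplicate detection above.
import java.util.BitSet;

public class StripedCountSketch {
  public static void main(String[] args) {
    int totalBlockNum = 9;
    int[] reported = {0, 0, 1, 2, 3, 4, 5, 6, 7}; // b0 twice, b8 absent
    BitSet seen = new BitSet(totalBlockNum);
    int live = 0, redundant = 0;
    for (int idx : reported) {
      if (!seen.get(idx)) { seen.set(idx); live++; } else { redundant++; }
    }
    System.out.println("live=" + live + ", redundant=" + redundant
        + ", missing=" + (totalBlockNum - seen.cardinality()));
    // live=8, redundant=1, missing=1
  }
}
{code}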
[~ayushtkn] Could you please take a look?

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> While the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
> block on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster while decommissioning 100 
> nodes simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319324&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319324
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 02:57
Start Date: 27/Sep/19 02:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328897127
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   Discussed offline; from my understanding, this is being done so as to share 
config across all OMs. And this PR is not changing any config loading code of 
OM HA; it just adds Kerberos/DB config as described in the Jira description. 
   
   Anu said we don't require it, and we shall continue the discussion later to 
see how we can do it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319324)
Time Spent: 5h  (was: 4h 50m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319325&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319325
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 02:57
Start Date: 27/Sep/19 02:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328897127
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   Discussed offline; from my understanding, this is being done so as to share 
config across all OMs. And this PR is not changing any config loading code of 
OM HA; it just adds Kerberos/DB config as described in the Jira description. 
   
   Anu said we don't require the current way, and we shall continue the 
discussion later to see how we can do it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319325)
Time Spent: 5h 10m  (was: 5h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-26 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14814:
---
Attachment: HDFS-14814.009.patch

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of the remote location is set the same as the corresponding 
> MountTable, and if the MountTable has no quota then the quota is set to 
> that of the nearest parent MountTable with a quota.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.
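A self-contained toy version of that resolution rule, reusing the mount table
from the description (everything beyond the quoted paths and values is
illustrative):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class InheritedQuotaSketch {
  public static void main(String[] args) {
    // {nquota, squota}; -1 means "not set", as in the example above.
    Map<String, long[]> mounts = new LinkedHashMap<>();
    mounts.put("/dir-a", new long[]{10, 20});
    mounts.put("/dir-a/dir-b", new long[]{-1, 30});
    mounts.put("/dir-a/dir-b/dir-c", new long[]{-1, -1});
    mounts.put("/dir-d", new long[]{-1, -1});

    for (String path : mounts.keySet()) {
      System.out.println(path + " {nquota=" + resolve(mounts, path, 0)
          + ",squota=" + resolve(mounts, path, 1) + "}");
    }
    // /dir-a {10,20}, /dir-a/dir-b {10,30}, /dir-a/dir-b/dir-c {10,30},
    // /dir-d {-1,-1}
  }

  // Walk up toward the root until a mount entry with a set quota is found.
  static long resolve(Map<String, long[]> mounts, String path, int i) {
    for (String p = path; !p.isEmpty(); p = parent(p)) {
      long[] q = mounts.get(p);
      if (q != null && q[i] != -1) {
        return q[i];
      }
    }
    return -1;
  }

  static String parent(String p) {
    int idx = p.lastIndexOf('/');
    return idx <= 0 ? "" : p.substring(0, idx);
  }
}
{code}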



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-26 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939076#comment-16939076
 ] 

Jinglun commented on HDFS-14814:


Thanks [~ayushtkn] and [~elgoiri] for your nice comments and suggestions!
{quote}I didn't understand this comment.
{quote}
My bad, I missed it when I did the refactor.
{quote}RouterQuotaManager#getParentsContainingQuota() is using Map#Entry while 
when we use that, we only use the values and not the Key.
Returning a list of entries looks weird.
{quote}
I don't really like removing the key part, because the key tells me which path 
the RouterQuotaUsage belongs to. Since it is a public method, I think keeping 
the key makes the method more complete. Another way is to create a new class 
with 2 members: path and RouterQuotaUsage. I'm OK with either 
Entry or a wrapper class for path & RouterQuotaUsage. 
What do you think?

 

Upload v09 and pending jenkins.

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of the remote location is set the same as the corresponding 
> MountTable, and if the MountTable has no quota then the quota is set to 
> that of the nearest parent MountTable with a quota.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319317&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319317
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 02:08
Start Date: 27/Sep/19 02:08
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328890013
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   But you just lost the reference to the original object. I am slightly 
confused here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319317)
Time Spent: 4h 50m  (was: 4h 40m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319316&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319316
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 02:06
Start Date: 27/Sep/19 02:06
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328889881
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > Not got your last part what is proposed.
   
   
https://docs.microsoft.com/en-us/windows/win32/ad/name-formats-for-unique-spns
   
   OzoneManager/host1.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   OzoneManager/host2.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   OzoneManager/host3.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   
   This is all we need, is what I am trying to say. 
   > Suppose the user wants to use different keytab file location/principal 
name it will also help in this situation.
   
   Why would you want separate identities to communicate with the same service? 
Can you give me an example of why this would be needed? Moreover, why support 
that identity via naming tricks in Ozone instead of creating a new SPN in the 
Kerberos domain?
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319316)
Time Spent: 4h 40m  (was: 4.5h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=319315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319315
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 27/Sep/19 02:04
Start Date: 27/Sep/19 02:04
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on issue #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-535753019
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319315)
Time Spent: 5h 10m  (was: 5h)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal data structures



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-09-26 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1737 started by YiSheng Lien.
--
> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
>
> This is to address a TODO to add volume existence checks when performing 
> Key/File operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1984) Fix listBucket API

2019-09-26 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-1984:
--

Assignee: YiSheng Lien

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>
> This Jira is to fix the listBucket API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double buffer thread and 
> flushed to disk. So now, when listBuckets is called, it should use both the 
> in-memory cache and the RocksDB bucket table to list the buckets in a volume.
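A minimal, generic sketch of the cache-over-table merge described above 
(plain JDK types for illustration; the real OM cache and RocksDB table types 
differ):

{code}
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

/** Merges pending (cached) entries over the persisted table for a list call. */
public class MergedBucketLister {
  /**
   * @param dbBuckets    buckets already flushed to RocksDB, keyed by name
   * @param cacheBuckets buckets still in the in-memory cache; a null value
   *                     marks a delete that has not been flushed yet
   * @return the merged, sorted view that listBuckets should return
   */
  public static SortedMap<String, String> list(
      Map<String, String> dbBuckets, Map<String, String> cacheBuckets) {
    SortedMap<String, String> merged = new TreeMap<>(dbBuckets);
    for (Map.Entry<String, String> e : cacheBuckets.entrySet()) {
      if (e.getValue() == null) {
        merged.remove(e.getKey());             // deleted in cache only
      } else {
        merged.put(e.getKey(), e.getValue());  // cache wins over the DB
      }
    }
    return merged;
  }
}
{code}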



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2194) Replication of Container fails with "Only closed containers could be exported"

2019-09-26 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-2194:
---

 Summary: Replication of Container fails with "Only closed 
containers could be exported"
 Key: HDDS-2194
 URL: https://issues.apache.org/jira/browse/HDDS-2194
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.5.0
Reporter: Mukul Kumar Singh


Replication of Container fails with "Only closed containers could be exported"

cc: [~nanda]

{code}
2019-09-26 15:00:17,640 [grpc-default-executor-13] INFO  
replication.GrpcReplicationService (GrpcReplicationService.java:download(57)) - 
Streaming container data (37) to other datanode
Sep 26, 2019 3:00:17 PM 
org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor run
SEVERE: Exception while executing runnable 
org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed@70e641f2
java.lang.IllegalStateException: Only closed containers could be exported: ContainerId=37
at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.exportContainerData(KeyValueContainer.java:527)
at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.exportContainer(KeyValueHandler.java:875)
at org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.exportContainer(ContainerController.java:134)
at org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource.copyData(OnDemandContainerReplicationSource.java:64)
at org.apache.hadoop.ozone.container.replication.GrpcReplicationService.download(GrpcReplicationService.java:63)
at org.apache.hadoop.hdds.protocol.datanode.proto.IntraDatanodeProtocolServiceGrpc$MethodHandlers.invoke(IntraDatanodeProtocolServiceGrpc.java:217)
at org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
at org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
at org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:710)
at org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

2019-09-26 15:00:17,644 [grpc-default-executor-17] ERROR 
replication.GrpcReplicationClient (GrpcReplicationClient.java:onError(142)) - 
Container download was unsuccessfull
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNKNOWN
at org.apache.ratis.thirdparty.io.grpc.Status.asRuntimeException(Status.java:526)
at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434)
at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at org.apache.ratis.thirdparty.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at org.apache.ratis.thirdparty.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at org.apache.ratis.thirdparty.io.grpc.i
{code}
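For reference, the failure comes from a state guard along these lines (a 
paraphrase of the datanode export path, not the exact Ozone source):

{code}
// Replication may only export containers that are already CLOSED.
if (getContainerData().getState() != ContainerDataProto.State.CLOSED) {
  throw new IllegalStateException(
      "Only closed containers could be exported: ContainerId="
          + getContainerData().getContainerID());
}
{code}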

[jira] [Assigned] (HDDS-2188) Implement LocatedFileStatus & getFileBlockLocations to provide node/localization information to Yarn/Mapreduce

2019-09-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-2188:
-

Assignee: Aravindan Vijayan  (was: Mukul Kumar Singh)

> Implement LocatedFileStatus & getFileBlockLocations to provide 
> node/localization information to Yarn/Mapreduce
> --
>
> Key: HDDS-2188
> URL: https://issues.apache.org/jira/browse/HDDS-2188
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>
> For applications like Hive/MapReduce to take advantage of the data locality 
> in Ozone, Ozone should return the locations of the Ozone blocks. This is 
> needed for better read performance in Hadoop applications.
> {code}
> if (file instanceof LocatedFileStatus) {
>   blkLocations = ((LocatedFileStatus) file).getBlockLocations();
> } else {
>   blkLocations = fs.getFileBlockLocations(file, 0, length);
> }
> {code}
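A minimal sketch of the producer side, i.e. how a FileSystem can surface 
block locations so that the split calculation above picks them up (the host 
values are placeholders; a real implementation must map Ozone block replicas 
to datanode hosts):

{code}
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.LocatedFileStatus;

public class LocalityExample {
  /** Wrap a plain FileStatus with block locations so MapReduce can
   *  schedule tasks close to the data. */
  static LocatedFileStatus withLocations(FileStatus status) {
    BlockLocation[] locations = new BlockLocation[] {
        new BlockLocation(
            new String[] {"datanode1:9866"},  // names (host:port)
            new String[] {"datanode1"},       // hosts
            0, status.getLen())               // offset, length
    };
    return new LocatedFileStatus(status, locations);
  }
}
{code}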



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-26 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939048#comment-16939048
 ] 

Siddharth Wagle commented on HDDS-1868:
---

Just realized while writing the UT that we create the pipeline in the OPEN 
state; fixing it.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch
>
>
> Ozone pipelines on restart start in the allocated state; they are moved into 
> the open state after all the datanodes in the pipeline have reported. 
> However, this can potentially lead to an issue where the pipeline is still 
> not ready to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1146) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939034#comment-16939034
 ] 

Bharat Viswanadham commented on HDDS-1146:
--

I will post a patch for the remaining metrics in this jira.

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Target Version/s: 0.5.0

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> To have a single configuration that can be used across the OM cluster, a few 
> of the configs, like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  
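A minimal sketch of the key resolution being described (illustrative helper, 
not the actual patch): the key suffixed with service id and node id wins, 
with the bare key as a fallback.

{code}
import org.apache.hadoop.conf.Configuration;

public final class HaConfKeys {
  /**
   * Resolves e.g. "ozone.om.kerberos.keytab.file.service1.omNode1",
   * falling back to the un-suffixed key if no HA-style value is set.
   */
  static String get(Configuration conf, String key,
      String serviceId, String nodeId) {
    String suffixed = key + "." + serviceId + "." + nodeId;
    String value = conf.get(suffixed);
    return value != null ? value : conf.get(key);
  }
}
{code}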



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Status: Patch Available  (was: Open)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across OM cluster, few of the configs 
> like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319276&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319276
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:38
Start Date: 26/Sep/19 23:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1534: HDDS-2193. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535724358
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1800 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 958 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4201 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b3121e605a17 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2adcc3c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-mvninstal

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319273&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319273
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:28
Start Date: 26/Sep/19 23:28
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1528: 
HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328864850
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -177,7 +178,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   What I mean here is that for Key/File/Directory create there will be no key 
entry yet, so should we perform checkBucketAcls as before?
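   In code, the fallback being discussed would look roughly like this (a 
sketch with assumed helper names, not the actual patch):

{code}
// When the key does not exist yet (the create path), fall back to the
// bucket's ACLs instead of checking ACLs on a missing key entry.
if (omMetadataManager.getKeyTable().get(objectKey) != null) {
  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName, aclType);
} else {
  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
}
{code}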
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319273)
Time Spent: 3h 50m  (was: 3h 40m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319272&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319272
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:16
Start Date: 26/Sep/19 23:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1534: HDDS-2193. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535720142
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 164 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 94 | Maven dependency ordering for branch |
   | -1 | mvninstall | 59 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1168 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 23 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1282 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 44 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | -1 | mvninstall | 40 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 899 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3141 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fc007d0fa81f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-mvninstal

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319270&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319270
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:13
Start Date: 26/Sep/19 23:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328861588
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > I have an uber comment on this JIRA. Under Ozone, what we really need is 3 
+ 3 six kerberos Identites.
   > 
   > Why don't we just follow the standard Kerberos SPN names? Simple take one 
config key from the user, either the SPN or file name path the service kerberos 
identity.
   > 
   > Once you have this, we don't have to do any second guessing or munching of 
names with any other strings -- after all it is just a service on a host. Code 
is simpler, and the best part, it is simple enough for any one to understand. 
In other words, what evil are we trying to prevent here with all the service 
name munching ?
   
   I did not get the last part about what is being proposed.
   
   This is done for the Kerberos settings and also for other configs like the 
OM DB dirs and HTTP/HTTPS addresses. It will also help when the user wants to 
use a different keytab file location/principal name.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319270)
Time Spent: 4.5h  (was: 4h 20m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> To have a single configuration that can be used across the OM cluster, a few 
> of the configs, like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319269
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:12
Start Date: 26/Sep/19 23:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1534: HDDS-2193. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535719371
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1034 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1132 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 904 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 34 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2824 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c6a0cff0280b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-mvninstall

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319268&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319268
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:12
Start Date: 26/Sep/19 23:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328861588
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > I have an uber comment on this JIRA. Under Ozone, what we really need is 3 
+ 3 six kerberos Identites.
   > 
   > Why don't we just follow the standard Kerberos SPN names? Simple take one 
config key from the user, either the SPN or file name path the service kerberos 
identity.
   > 
   > Once you have this, we don't have to do any second guessing or munching of 
names with any other strings -- after all it is just a service on a host. Code 
is simpler, and the best part, it is simple enough for any one to understand. 
In other words, what evil are we trying to prevent here with all the service 
name munching ?
   
   This is done for the Kerberos settings and also for other configs like the 
OM DB dirs and HTTP/HTTPS addresses. It will also help when the user wants to 
use a different keytab file location/principal name.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319268)
Time Spent: 4h 20m  (was: 4h 10m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> To have a single configuration that can be used across the OM cluster, a few 
> of the configs, like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319267
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:11
Start Date: 26/Sep/19 23:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328861588
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > I have an uber comment on this JIRA. Under Ozone, what we really need is 3 
+ 3 six kerberos Identites.
   > 
   > Why don't we just follow the standard Kerberos SPN names? Simple take one 
config key from the user, either the SPN or file name path the service kerberos 
identity.
   > 
   > Once you have this, we don't have to do any second guessing or munching of 
names with any other strings -- after all it is just a service on a host. Code 
is simpler, and the best part, it is simple enough for any one to understand. 
In other words, what evil are we trying to prevent here with all the service 
name munching ?
   
   I have an uber comment on this JIRA. Under Ozone, what we really need is 3 + 
3 six kerberos Identites.
   
   Why don't we just follow the standard Kerberos SPN names? Simple take one 
config key from the user, either the SPN or file name path the service kerberos 
identity.
   
   Once you have this, we don't have to do any second guessing or munching of 
names with any other strings -- after all it is just a service on a host. Code 
is simpler, and the best part, it is simple enough for any one to understand. 
In other words, what evil are we trying to prevent here with all the service 
name munching ?
   
   This is done for the Kerberos settings and also for other configs like the 
OM DB dirs and HTTP/HTTPS addresses. It will also help when the user wants to 
use a different keytab file location/principal name.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319267)
Time Spent: 4h 10m  (was: 4h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> To have a single configuration that can be used across the OM cluster, a few 
> of the configs, like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-26 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14785:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable, but {{ObserverReadProxyProvider}} still logs an 
> overwhelmingly large number of messages. One example: if some NN runs an 
> older version, the lack of the {{getHAServiceState}} method in the older 
> version NN leads to an exception being printed on every single call.
> We can change these to debug logs. This should be minimum risk, because this 
> is client side only; we can always re-enable the logging by changing to the 
> DEBUG log level on the client side.
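A minimal sketch of the kind of change being described (illustrative; the 
actual patch touches {{ObserverReadProxyProvider}}):

{code}
// Before: every probe failure against an old NN is logged at INFO/WARN.
// After: keep it at DEBUG so default client logs stay quiet.
if (LOG.isDebugEnabled()) {
  LOG.debug("Failed to get HA service state from {}", proxyInfo, e);
}
{code}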



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-26 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939023#comment-16939023
 ] 

Chen Liang commented on HDFS-14785:
---

Thanks for the review [~shv]! I've committed to trunk, branch-3.2, branch-3.1 
and branch-2.

> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable, but {{ObserverReadProxyProvider}} still logs an 
> overwhelmingly large number of messages. One example: if some NN runs an 
> older version, the lack of the {{getHAServiceState}} method in the older 
> version NN leads to an exception being printed on every single call.
> We can change these to debug logs. This should be minimum risk, because this 
> is client side only; we can always re-enable the logging by changing to the 
> DEBUG log level on the client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319265&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319265
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:08
Start Date: 26/Sep/19 23:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328860862
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
This is being done so as not to change the original values of the 
configuration object that was passed in.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319265)
Time Spent: 4h  (was: 3h 50m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across OM cluster, few of the configs 
> like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?focusedWorklogId=319264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319264
 ]

ASF GitHub Bot logged work on HDDS-2193:


Author: ASF GitHub Bot
Created on: 26/Sep/19 23:07
Start Date: 26/Sep/19 23:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1534: HDDS-2193. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535718246
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 951 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2443 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 86ecbcbe0081 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-mvninstall-

[jira] [Updated] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2193:
-
Labels: pull-request-available  (was: )

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.
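A minimal sketch of how such counters are typically declared with Hadoop's 
metrics2 library (illustrative class and metric names, not the actual patch):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM container metrics", context = "dfs")
public class SCMContainerMetricsExample {
  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;
  @Metric private MutableCounterLong numSuccessfulDeleteContainers;
  @Metric private MutableCounterLong numFailedDeleteContainers;
  @Metric private MutableCounterLong numListContainerOps;

  public static SCMContainerMetricsExample create() {
    // Registering the source lets the metrics system inject the counters.
    return DefaultMetricsSystem.instance().register(
        "SCMContainerMetricsExample", "SCM container metrics",
        new SCMContainerMetricsExample());
  }

  public void incNumSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }
}
{code}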



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-26 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939022#comment-16939022
 ] 

Bharat Viswanadham commented on HDDS-2019:
--

This is because SaslRpcClient, when selecting tokens, passes the OM server 
address. So, for OM HA, we need to set the token service with the addresses 
of all OMs.

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RPC address string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OM address; for the HA case, this should be set with all OM RPC 
> addresses.
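A minimal sketch of setting a token's service to cover multiple OMs (the 
comma-delimited convention is an assumption; it must match what the client's 
token selector expects):

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public final class OmHaTokenUtil {
  /** Joins all OM RPC addresses into the token's service field so the
   *  token matches whichever OM the client ends up talking to. */
  public static void setOmHaService(Token<?> token, String... omAddresses) {
    token.setService(new Text(String.join(",", omAddresses)));
  }
}
{code}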



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939019#comment-16939019
 ] 

Hudson commented on HDFS-14785:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17398 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17398/])
HDFS-14785. [SBN read] Change client logging to be less aggressive. (cliang: 
rev 2adcc3c932fd4f39a42724390ba81b2d431d7782)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java


> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable, but {{ObserverReadProxyProvider}} still logs an 
> overwhelmingly large number of messages. One example: if some NN runs 
> an older version, the lack of the {{getHAServiceState}} method in the 
> older-version NN will lead to an exception being printed on every single call.
> We can change these to debug logs. This should be minimal risk, because it is 
> only on the client side; we can always re-enable the logging by switching the 
> client to DEBUG log level.
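
As a minimal illustration of the kind of change (the class below is made up; 
the real edit is in ObserverReadProxyProvider), demoting a per-call warning to 
a debug-level log looks like:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HaStateProbeSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(HaStateProbeSketch.class);

  void onGetHAServiceStateFailure(String proxyInfo, Exception e) {
    // Before: an INFO/WARN line on every call flooded client logs when an
    // older NN lacked getHAServiceState. After: DEBUG only, so the message
    // can still be recovered by raising the client log level.
    LOG.debug("Failed to get HA state from {}", proxyInfo, e);
  }
}
{code}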



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939016#comment-16939016
 ] 

Jonathan Hung commented on HDFS-14509:
--

Marking as 2.10.0 blocker as per discussion with [~shv]

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we 
> need to upgrade the NN first, so there will be an intermediate state where 
> the NN is 3.x and the DNs are 2.x. At that moment, if a client reads (or 
> writes) a block, it will get a block token from the NN and then deliver the 
> token to a DN, which verifies the token. But the verification in the code 
> now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields 
> and compute the wrong password.
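
A simplified sketch of the mismatch (HmacSHA1 matches what Hadoop's 
SecretManager uses for passwords; everything else here is reduced to the 
essentials and is not the real BlockTokenSecretManager):

{code:java}
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class PasswordMismatchSketch {
  static byte[] createPassword(byte[] identifierBytes, byte[] key)
      throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(identifierBytes);
  }

  static boolean verify(byte[] nnIdentifierBytes,   // 3.x NN: old + new fields
                        byte[] dnParsedBytes,       // 2.x DN: old fields only
                        byte[] key, byte[] tokenPassword) throws Exception {
    // The NN signed nnIdentifierBytes, but the DN recomputes the password
    // over the bytes it was able to parse; dropped fields change the HMAC.
    return Arrays.equals(createPassword(dnParsedBytes, key), tokenPassword);
  }
}
{code}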



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HDFS-14509:
-
Labels: release-blocker  (was: )

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we 
> need to upgrade the NN first, so there will be an intermediate state where 
> the NN is 3.x and the DNs are 2.x. At that moment, if a client reads (or 
> writes) a block, it will get a block token from the NN and then deliver the 
> token to a DN, which verifies the token. But the verification in the code 
> now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields 
> and compute the wrong password.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-09-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939015#comment-16939015
 ] 

Hudson commented on HDFS-14461:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17397 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17397/])
HDFS-14461. RBF: Fix intermittently failing kerberos related unit test. 
(inigoiri: rev b1e55cfb557056306db92b4a74f7b0288fd193ee)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRootDirectorySecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractOpenSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractAppendSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractGetFileStatusSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterHttpDelegationToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRenameSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSetTimesSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractMkdirSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractCreateSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDeleteSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDelegationToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSeekSecure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractConcatSecure.java


> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch, 
> HDFS-14461.003.patch, HDFS-14461.004.patch, HDFS-14461.005.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMe

[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-26 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939011#comment-16939011
 ] 

Konstantin Shvachko commented on HDFS-14305:


Glad we agree. Yes, I regret I bumped into this issue too late.

Another problem is that this change does not prevent collisions during 
regular restarts (after upgrading). If you add a new NameNode at the beginning 
of the list in the config, it will change {{nnIndex}} and therefore the 
respective node ranges.
I guess my point is that there is no "safe" way here; that is, I don't know 
which way is less "risky", as you put it. One way or another you need to know 
the ranges and follow a certain order of restarting NNs that avoids 
collisions. And none of this is documented or mentioned in the release notes.

So my proposal is to revert this change and fix the arithmetic bug in the 
previous implementation. We can then think of a more robust solution that 
avoids generating ranges based on NameNode ordering.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> ranges for the serial number. For simplicity, let's assume 
> {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.
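
The toy numbers above can be reproduced with a few lines (illustrative only):

{code:java}
public class SerialRangeSketch {
  public static void main(String[] args) {
    int maxValue = 100;                       // stand-in for Integer.MAX_VALUE
    int numNNs = 2;
    int intRange = maxValue / numNNs;         // 50
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int rangeStart = intRange * nnIndex;    // 0 for nn1, 50 for nn2
      // For a random (possibly negative) serialNo, serialNo % intRange lies
      // in (-intRange, intRange), so the reachable values are:
      System.out.printf("nn%d -> [%d, %d]%n", nnIndex + 1,
          rangeStart - intRange + 1, rangeStart + intRange - 1);
    }
    // Prints nn1 -> [-49, 49] and nn2 -> [1, 99]: the ranges overlap.
  }
}
{code}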



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939013#comment-16939013
 ] 

Anu Engineer commented on HDDS-2019:


bq. the service name should be set with address of all OM's.

 I am not sure I understand this assertion. Can you please help me understand 
why we need this?

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the token's serviceName is set with the 
> OM address; for the HA case, this should be set with all OM RPC addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319261&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319261
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 22:35
Start Date: 26/Sep/19 22:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328839876
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   Don't we need to verify that these strings are in the expected format here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319261)
Time Spent: 3h 50m  (was: 3h 40m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs, such as 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE, and
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
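
A sketch of the suffixing pattern (illustrative only; the exact Ozone utility 
and key layout may differ):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class HaConfSketch {
  private HaConfSketch() { }

  // Resolve a key such as "ozone.om.kerberos.principal.service1.omNode1",
  // falling back to the un-suffixed key when no HA-specific value is set.
  static String get(Configuration conf, String key,
      String serviceId, String nodeId) {
    String value = conf.get(key + "." + serviceId + "." + nodeId);
    return value != null ? value : conf.get(key);
  }
}
{code}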



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319260&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319260
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 22:35
Start Date: 26/Sep/19 22:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328851483
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   This function is passed a conf object of type OzoneConfiguration. Why are we 
reallocating a new object before assigning that configuration object to the 
member variable?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319260)
Time Spent: 3h 40m  (was: 3.5h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs, such as 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE, and
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-26 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939004#comment-16939004
 ] 

Siddharth Wagle edited comment on HDDS-1868 at 9/26/19 10:31 PM:
-

Hi [~ljain], can you take a look at patch 5? Addressed all other comments 
except the _notifyLeader_ impl; IMO we don't require that. Thanks.


was (Author: swagle):
HI [~ljain] cna you take a look at 5, addressed all other comments except the 
_notifyLeader_ impl, IMO we don't require that. Thanks.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into 
> the open state after all of the pipeline's datanodes have reported. However, 
> this can potentially lead to an issue where the pipeline is still not ready 
> to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.
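
A hypothetical sketch of the intended gating (not actual SCM code): the 
pipeline opens only once both conditions hold.

{code:java}
// Hypothetical sketch, not the actual SCM pipeline manager.
class PipelineReadinessSketch {
  enum State { ALLOCATED, OPEN }

  private State state = State.ALLOCATED;
  private final int expectedNodes;
  private int reportedNodes;
  private boolean leaderReady;

  PipelineReadinessSketch(int expectedNodes) {
    this.expectedNodes = expectedNodes;
  }

  void onDatanodeReport() {
    reportedNodes++;
    maybeOpen();
  }

  void onLeaderElected() {          // e.g. driven by a Ratis notification
    leaderReady = true;
    maybeOpen();
  }

  private void maybeOpen() {
    // Open only after all datanodes reported AND the leader can accept IO.
    if (reportedNodes >= expectedNodes && leaderReady) {
      state = State.OPEN;
    }
  }
}
{code}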



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-26 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939004#comment-16939004
 ] 

Siddharth Wagle commented on HDDS-1868:
---

Hi [~ljain], can you take a look at patch 5? Addressed all other comments 
except the _notifyLeader_ impl; IMO we don't require that. Thanks.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into 
> the open state after all of the pipeline's datanodes have reported. However, 
> this can potentially lead to an issue where the pipeline is still not ready 
> to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-26 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.05.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into 
> the open state after all of the pipeline's datanodes have reported. However, 
> this can potentially lead to an issue where the pipeline is still not ready 
> to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1146:
-
Description: 
This jira aims to add more container related metrics to SCM.
 Following metrics will be added as part of this jira:
 * Number of containers
 * Number of open containers
 * Number of closed containers
 * Number of quasi closed containers
 * Number of closing containers

Above are already handled in HDDS-918.
 * Number of successful create container calls
 * Number of failed create container calls
 * Number of successful delete container calls
 * Number of failed delete container calls

Handled in HDDS-2193.
 * Number of successful container report processing
 * Number of failed container report processing
 * Number of successful incremental container report processing
 * Number of failed incremental container report processing

These will be handled in this jira.

  was:
This jira aims to add more container related metrics to SCM.
Following metrics will be added as part of this jira:

* Number of containers
* Number of open containers
* Number of closed containers
* Number of quasi closed containers
* Number of closing containers
* Number of successful create container calls
* Number of failed create container calls
* Number of successful delete container calls
* Number of failed delete container calls
* Number of successful container report processing
* Number of failed container report processing
* Number of successful incremental container report processing
* Number of failed incremental container report processing



> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-09-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14461:
---
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~hexiaoqiao] for the fix and thanks [~crh], [~zhangchen], and [~eyang] 
for the reviews!
Committed to trunk.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch, 
> HDFS-14461.003.patch, HDFS-14461.004.patch, HDFS-14461.005.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 

[jira] [Assigned] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2193:


Assignee: Bharat Viswanadham  (was: Supratim Deka)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2193:


 Summary: Adding container related metrics in SCM
 Key: HDDS-2193
 URL: https://issues.apache.org/jira/browse/HDDS-2193
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Bharat Viswanadham
Assignee: Supratim Deka


This jira aims to add more container related metrics to SCM.
Following metrics will be added as part of this jira (see the sketch after the 
list):

* Number of containers
* Number of open containers
* Number of closed containers
* Number of quasi closed containers
* Number of closing containers
* Number of successful create container calls
* Number of failed create container calls
* Number of successful delete container calls
* Number of failed delete container calls
* Number of successful container report processing
* Number of failed container report processing
* Number of successful incremental container report processing
* Number of failed incremental container report processing
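
A sketch of what a few of these counters could look like with Hadoop metrics2 
(class and field names are illustrative, not the final SCM code):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM container metrics", context = "dfs")
public class SCMContainerMetricsSketch {
  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;
  @Metric private MutableCounterLong numSuccessfulDeleteContainers;
  @Metric private MutableCounterLong numFailedDeleteContainers;

  // Called from the create-container RPC path on success.
  void incSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }
}
{code}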




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2193:
-
Description: 
This jira aims to add more container related metrics to SCM.
 Following metrics will be added as part of this jira:
 * Number of successful create container calls
 * Number of failed create container calls
 * Number of successful delete container calls
 * Number of failed delete container calls
 * Number of list container ops.

  was:
This jira aims to add more container related metrics to SCM.
Following metrics will be added as part of this jira:

* Number of containers
* Number of open containers
* Number of closed containers
* Number of quasi closed containers
* Number of closing containers
* Number of successful create container calls
* Number of failed create container calls
* Number of successful delete container calls
* Number of failed delete container calls
* Number of successful container report processing
* Number of failed container report processing
* Number of successful incremental container report processing
* Number of failed incremental container report processing



> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319249
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 26/Sep/19 22:04
Start Date: 26/Sep/19 22:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-535703552
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 43 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 24 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 941 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1026 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 790 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 19 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2459 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fe6484579c06 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319245
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 26/Sep/19 21:58
Start Date: 26/Sep/19 21:58
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1528: HDDS-2181. 
Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328844216
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -118,7 +119,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.INVALID_KEY_NAME);
   }
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
+  IAccessAuthorizer.ACLType.CREATE);
 
 Review comment:
   Discussed offline: we should have a DELETE check for fromKeyName and a 
CREATE check for toKeyName.
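
A sketch of the agreed checks, reusing the checkKeyAcls helper visible in the 
diff (exact signature assumed from context):

{code:java}
// Sketch only: DELETE on the source key, CREATE on the destination key.
checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName,
    IAccessAuthorizer.ACLType.DELETE);
checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
    IAccessAuthorizer.ACLType.CREATE);
{code}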
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319245)
Time Spent: 3.5h  (was: 3h 20m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?focusedWorklogId=319241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319241
 ]

ASF GitHub Bot logged work on HDDS-2149:


Author: ASF GitHub Bot
Created on: 26/Sep/19 21:37
Start Date: 26/Sep/19 21:37
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1513: HDDS-2149. 
Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513#issuecomment-535695781
 
 
   This patch is not working for me on a Mac. If there are clear instructions 
to make it work, I can test it out and commit this. Otherwise, I am going to 
pass.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319241)
Time Spent: 3h 40m  (was: 3.5h)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-26 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938979#comment-16938979
 ] 

CR Hota commented on HDFS-14284:


[~hemanthboyina] [~inigoiri] [~ayushtkn] Thanks for the discussion so far. 
The overall approach looks fine.

Can we separate RIOEx out of hadoop-common, and have StandbyException not 
extend RIOEx? It's best not to change hadoop-common directly for this feature.

RIOEx can be added in the hdfs-rbf project, and StandbyException can be used 
directly by constructing the error msg containing the router id before 
creating the standby exception. The standby path already has client-side 
failover logic, and the logged standby exception will automatically include 
the router id recorded when the exception was created on the server.
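
A hypothetical sketch of the exception proposed above, kept in the hdfs-rbf 
module (name and shape are illustrative):

{code:java}
import java.io.IOException;

// Hypothetical: a router-scoped IOException whose message carries the
// Router identifier, so clients can tell which Router reported the error.
public class RouterIOException extends IOException {
  public RouterIOException(String routerId, String message) {
    super("Router " + routerId + ": " + message);
  }

  public RouterIOException(String routerId, String message, Throwable cause) {
    super("Router " + routerId + ": " + message, cause);
  }
}
{code}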

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch, HDFS-14284.002.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-09-26 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938970#comment-16938970
 ] 

CR Hota commented on HDFS-14461:


[~hexiaoqiao] 

This looks so much better. Thanks for getting this through the finish line. +1 
for v5.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch, 
> HDFS-14461.003.patch, HDFS-14461.004.patch, HDFS-14461.005.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/tar

[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=319218&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319218
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 26/Sep/19 21:01
Start Date: 26/Sep/19 21:01
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1526: HDDS-2180. Add 
Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535684699
 
 
   > None that I know of. However, we should not commit a patch without CI, on 
principle.
   
This has been broken for quite a while, so I rely on Jenkins/Yetus and 
hand-building before applying the commits.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319218)
Time Spent: 2h 10m  (was: 2h)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319217&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319217
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 26/Sep/19 20:58
Start Date: 26/Sep/19 20:58
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1528: HDDS-2181. 
Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-535683415
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319217)
Time Spent: 3h 20m  (was: 3h 10m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to 
> the authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=319209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319209
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 26/Sep/19 20:50
Start Date: 26/Sep/19 20:50
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1526: HDDS-2180. Add Object ID 
and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535680717
 
 
   None that I know of. However we should not commit a patch without CI on 
principle.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319209)
Time Spent: 2h  (was: 1h 50m)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-26 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938950#comment-16938950
 ] 

Arpit Agarwal edited comment on HDFS-14305 at 9/26/19 8:37 PM:
---

I agree and I had the same question [back in 
Feb|https://issues.apache.org/jira/browse/HDFS-14305?focusedCommentId=16780743&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16780743].
 I was convinced by [Erik's 
response|https://issues.apache.org/jira/browse/HDFS-14305?focusedCommentId=16780746&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16780746].


was (Author: arpitagarwal):
I agree and I had the same question back in Feb. I was convinced by Erik's 
response.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> serial number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
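> To see the overlap concretely, the following standalone sketch (illustrative 
> code, not from HDFS; it just replays the formula above with the scaled-down 
> constant from the example) prints the range each NameNode can produce:
> {code:java}
> // Replays the rotation formula with Integer.MAX_VALUE scaled down to 100.
> // Because serialNo % intRange can be negative when serialNo is negative,
> // each NameNode's range extends below its nnRangeStart and overlaps the
> // neighbouring NameNode's range.
> public class SerialRangeDemo {
>   public static void main(String[] args) {
>     int maxValue = 100;                        // stand-in for Integer.MAX_VALUE
>     int numNNs = 2;
>     int intRange = maxValue / numNNs;          // 50
>     for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
>       int nnRangeStart = intRange * nnIndex;   // 0 for nn1, 50 for nn2
>       int lo = nnRangeStart - (intRange - 1);  // lowest rotated serialNo
>       int hi = nnRangeStart + (intRange - 1);  // highest rotated serialNo
>       System.out.printf("nn%d -> [%d, %d]%n", nnIndex + 1, lo, hi);
>     }
>     // Prints: nn1 -> [-49, 49] and nn2 -> [1, 99], i.e. overlap on [1, 49].
>   }
> }
> {code}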



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=319205&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319205
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 26/Sep/19 20:36
Start Date: 26/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1526: HDDS-2180. Add 
Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535675877
 
 
   > Was this committed without a pre-commit run from Anzix?
   
   Looks like it was not run. I did look at the Yetus output, but it is too 
noisy to make any sense of.
   
   Is this commit causing any issues? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319205)
Time Spent: 1h 50m  (was: 1h 40m)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-26 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938950#comment-16938950
 ] 

Arpit Agarwal commented on HDFS-14305:
--

I agree and I had the same question back in Feb. I was convinced by Erik's 
response.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> serial number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=319204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319204
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 26/Sep/19 20:36
Start Date: 26/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1526: HDDS-2180. Add 
Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535675877
 
 
   > Was this committed without a pre-commit run from Anzix?
   
   Looks like it was not run. I did look at the Yetus output, but it is too 
noisy to make any sense of.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319204)
Time Spent: 1h 40m  (was: 1.5h)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=319199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319199
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 26/Sep/19 20:21
Start Date: 26/Sep/19 20:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535671164
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 27 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1067 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1165 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 895 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2764 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1519 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27d12699ba98 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
http
