[GitHub] [hadoop] hanishakoneru merged pull request #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hanishakoneru merged pull request #651: HDDS-1339. Implement ratis snapshots on 
OM
URL: https://github.com/apache/hadoop/pull/651
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hanishakoneru commented on issue #651: HDDS-1339. Implement ratis snapshots on 
OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-479759384
 
 
   Thank you @bharatviswa504 for the reviews.
   The CI unit and acceptance test failures are not related to this PR. I will 
merge the PR to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2019-04-03 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809513#comment-16809513
 ] 

Sahil Takiar commented on HADOOP-14747:
---

[~ste...@apache.org] opened a PR: [https://github.com/apache/hadoop/pull/690] - 
I ran all S3 tests against US East (N. Virginia) without issue; the PR 
description contains a full list of details.

> S3AInputStream to implement CanUnbuffer
> ---
>
> Key: HADOOP-14747
> URL: https://issues.apache.org/jira/browse/HADOOP-14747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the 
> input stream and relying on lazy seek to reopen it on demand.
> Needs
> * Contract specification of unbuffer. As in "who added a new feature to 
> filesystems but forgot to mention what it should do?"
> * Contract test for filesystems which declare their support. 
> * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
> * Test case
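
A minimal sketch of the close-and-lazily-reopen idea described above (illustrative only; it assumes Hadoop's {{CanUnbuffer}} interface, and the real {{S3AInputStream}} logic is more involved):

{code:java}
// Illustrative sketch, not the committed patch: unbuffer() frees the
// remote connection by closing the wrapped stream; the next read()
// lazily reopens it at the saved position.
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;

abstract class LazyReopenInputStream extends InputStream implements CanUnbuffer {

  private InputStream wrapped;  // null while unbuffered
  private long pos;             // offset used to reopen

  /** Open the underlying source at the given offset (e.g. an S3 GET). */
  protected abstract InputStream reopen(long offset) throws IOException;

  @Override
  public synchronized int read() throws IOException {
    if (wrapped == null) {
      wrapped = reopen(pos);    // lazy seek: reopen on demand
    }
    int b = wrapped.read();
    if (b >= 0) {
      pos++;
    }
    return b;
  }

  @Override
  public synchronized void unbuffer() {
    if (wrapped == null) {
      return;                   // already unbuffered; a no-op
    }
    try {
      wrapped.close();          // release the remote connection
    } catch (IOException ignored) {
      // best-effort close; position is kept for the next reopen
    }
    wrapped = null;
  }
}
{code}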



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2019-04-03 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HADOOP-14747:
--
Status: Patch Available  (was: Open)

> S3AInputStream to implement CanUnbuffer
> ---
>
> Key: HADOOP-14747
> URL: https://issues.apache.org/jira/browse/HADOOP-14747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the 
> input stream and relying on lazy seek to reopen it on demand.
> Needs
> * Contract specification of unbuffer. As in "who added a new feature to 
> filesystems but forgot to mention what it should do?"
> * Contract test for filesystems which declare their support. 
> * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
> * Test case



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sahilTakiar opened a new pull request #690: HADOOP-14747: S3AInputStream to implement CanUnbuffer

2019-04-03 Thread GitBox
sahilTakiar opened a new pull request #690: HADOOP-14747: S3AInputStream to 
implement CanUnbuffer
URL: https://github.com/apache/hadoop/pull/690
 
 
   [HADOOP-14747](https://issues.apache.org/jira/browse/HADOOP-14747): 
S3AInputStream to implement CanUnbuffer
   
   Change Summary:
   * Added a contract specification for `CanUnbuffer` to `fsdatainputstream.md`
   * Unlike most other interfaces, the logic for `FSDataInputStream#unbuffer` is 
a bit different, since it delegates to `StreamCapabilitiesPolicy#unbuffer` (see 
the sketch after this list)
   * One odd thing I noticed while writing up the contract specification: the 
current implementation of `unbuffer` lets callers invoke `unbuffer` on a closed 
file without issue. I'm not sure whether that was by design, so for now the 
specification allows it and dictates that calling `unbuffer` after `close` is a 
no-op; let me know if we think the behavior should be changed
   * `AbstractContractUnbufferTest` contains the contract tests and I added 
implementations for HDFS and S3; I don't see contract tests for 
`CryptoInputStream` so I left it alone
   * I added S3 specific tests: both unit tests (using mocks) and itests
   * The actual implementation of `unbuffer` in `S3AInputStream` just calls 
`closeStream`
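   
   A simplified sketch of that delegation (not the verbatim Hadoop source; it assumes the `StreamCapabilities.UNBUFFER` capability constant):
   
```java
// Sketch: unbuffer() is only forwarded to streams that declare the
// capability; for everything else it is a silent no-op.
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;
import org.apache.hadoop.fs.StreamCapabilities;

final class UnbufferPolicySketch {
  static void unbuffer(InputStream in) {
    if (in instanceof CanUnbuffer
        && in instanceof StreamCapabilities
        && ((StreamCapabilities) in).hasCapability(StreamCapabilities.UNBUFFER)) {
      ((CanUnbuffer) in).unbuffer();
    }
    // Streams without the capability are left untouched.
  }
}
```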
   
   Testing:
   * Ran all unit tests and S3 itests against US East (N. Virginia) using `mvn 
verify`
   * All tests passed except `ITestS3AContractDistCp`, which failed once but 
passed when I retried it
   * I didn't run the scale tests or several of the other tests that require 
additional configuration; let me know if I should run them


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-04-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809502#comment-16809502
 ] 

Íñigo Goiri commented on HADOOP-16161:
--

The main thing with {{assertEquals()}} is that, when the arguments are swapped 
and you are debugging someone else's failed unit test, the assertion error 
reports the expected and actual values the wrong way around.
With the arguments in the correct order, the assertion errors are easier to read.
[^HADOOP-16161.009.patch] LGTM.
+1 pending Yetus.

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch, HADOOP-16161.004.patch, HADOOP-16161.005.patch, 
> HADOOP-16161.006.patch, HADOOP-16161.007.patch, HADOOP-16161.008.patch, 
> HADOOP-16161.009.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves, with a topology like:
> Rack: /IDC/RACK1
>   hostname1
>   hostname2
> Rack: /IDC/RACK2
>   hostname3
>   hostname4
> 2. A reader on hostname1 computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> no rack of the topology, computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is obviously not the expected value; the truth 
> is [4,4,4]. This issue may cause a reader not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add the constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.
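
To make the arithmetic concrete, here is a small sketch of the weight convention implied above (hypothetical helper, not the actual NetworkTopology API): two hops per topology level.

{code:java}
// Hypothetical illustration of the weights described above, not the
// real NetworkTopology API: 0 = local node, 2 = same rack, 4 = remote
// rack within the IDC.
static int weight(boolean sameHost, String readerRack, String targetRack) {
  if (sameHost) {
    return 0;                          // hostname1 -> hostname1 in case #2
  }
  if (readerRack.equals(targetRack)) {
    return 2;                          // same rack, different host
  }
  return 4;                            // hostname1 -> hostname3/4 in case #2
}
// #getWeightUsingNetworkLocation measures only the rack-to-rack
// distance (2) and misses the final host-level hop, so adding the
// constant 2 yields the expected [4,4,4] for case #3.
{code}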



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #653: HDDS-1333. 
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
classes
URL: https://github.com/apache/hadoop/pull/653#discussion_r272015093
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonefs/docker-compose.yaml
 ##
 @@ -49,21 +49,53 @@ services:
   environment:
  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
   command: ["/opt/hadoop/bin/ozone","scm"]
-   hadoop3:
+   hadoop32:
   image: flokkr/hadoop:3.1.0
 
 Review comment:
   Agree, let's fix that post 0.4.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16227) Upgrade checkstyle to 8.19

2019-04-03 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809492#comment-16809492
 ] 

Akira Ajisaka commented on HADOOP-16227:


The xml parsing error has been fixed by HADOOP-16232. Hi [~jojochuang], would 
you review this?

> Upgrade checkstyle to 8.19
> --
>
> Key: HADOOP-16227
> URL: https://issues.apache.org/jira/browse/HADOOP-16227
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16227.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on issue #661: HDDS-976: Parse network topology from yaml 
file
URL: https://github.com/apache/hadoop/pull/661#issuecomment-479743474
 
 
   Thanks @cjjnjust for working on this. The patch LGTM overall; just a few 
minor issues commented inline.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-479742555
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1006 | trunk passed |
   | +1 | compile | 964 | trunk passed |
   | +1 | checkstyle | 192 | trunk passed |
   | -1 | mvnsite | 37 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1092 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 29 | ozone-manager in trunk failed. |
   | +1 | javadoc | 122 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 129 | the patch passed |
   | +1 | compile | 938 | the patch passed |
   | +1 | javac | 938 | the patch passed |
   | +1 | checkstyle | 209 | the patch passed |
   | +1 | mvnsite | 148 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 614 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 198 | the patch passed |
   | +1 | javadoc | 118 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | common in the patch passed. |
   | +1 | unit | 39 | common in the patch passed. |
   | -1 | unit | 1545 | integration-test in the patch failed. |
   | +1 | unit | 50 | ozone-manager in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7827 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/651 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 78ec5a2dac0e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7b5b783 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/testReport/ |
   | Max. process+thread count | 3676 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-04-03 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809484#comment-16809484
 ] 

He Xiaoqiao commented on HADOOP-16161:
--

Thanks [~elgoiri] for correcting the {{assertEquals}} usage; 
[^HADOOP-16161.009.patch] updates that.
Maybe I need to highlight this rule. :)
{quote}assertEquals() should have the expected value as the first parameter and 
the actual value being checked as the second{quote}
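For example (the {{getWeightUsingNetworkLocation}} call is just a stand-in here):
{code:java}
// Expected value first, actual value second, so a failure reads
// "expected:<4> but was:<2>" rather than the reverse.
assertEquals(4, topology.getWeightUsingNetworkLocation(reader, node));
{code}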
Thanks again.

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch, HADOOP-16161.004.patch, HADOOP-16161.005.patch, 
> HADOOP-16161.006.patch, HADOOP-16161.007.patch, HADOOP-16161.008.patch, 
> HADOOP-16161.009.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves, with a topology like:
> Rack: /IDC/RACK1
>   hostname1
>   hostname2
> Rack: /IDC/RACK2
>   hostname3
>   hostname4
> 2. A reader on hostname1 computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> no rack of the topology, computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is obviously not the expected value; the truth 
> is [4,4,4]. This issue may cause a reader not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add the constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network 
topology from yaml file
URL: https://github.com/apache/hadoop/pull/661#discussion_r272010158
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaManager.java
 ##
 @@ -59,13 +59,20 @@ public void init(Configuration conf) {
 /**
  * Load schemas from network topology schema configuration file
  */
+String schemaFileType = conf.get(
+ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE_TYPE);
+
 String schemaFile = conf.get(
 ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE,
 ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE_DEFAULT);
 
 NodeSchemaLoadResult result;
 try {
-  result = NodeSchemaLoader.getInstance().loadSchemaFromFile(schemaFile);
+  if (schemaFileType.compareTo("yaml") == 0) {
+result = NodeSchemaLoader.getInstance().loadSchemaFromYaml(schemaFile);
+  } else {
+result = NodeSchemaLoader.getInstance().loadSchemaFromFile(schemaFile);
 
 Review comment:
   Maybe change this to loadSchemaFromXml?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network 
topology from yaml file
URL: https://github.com/apache/hadoop/pull/661#discussion_r272010041
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaManager.java
 ##
 @@ -59,13 +59,20 @@ public void init(Configuration conf) {
 /**
  * Load schemas from network topology schema configuration file
  */
+String schemaFileType = conf.get(
+ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE_TYPE);
+
 String schemaFile = conf.get(
 ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE,
 ScmConfigKeys.OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE_DEFAULT);
 
 NodeSchemaLoadResult result;
 try {
-  result = NodeSchemaLoader.getInstance().loadSchemaFromFile(schemaFile);
+  if (schemaFileType.compareTo("yaml") == 0) {
 
 Review comment:
   Can we make the schema type string case insensitive?
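   
   For example (a sketch of the suggestion, not part of the patch):
   
```java
// Case-insensitive match; equalsIgnoreCase returns false for a null
// argument, which also avoids the NPE that
// schemaFileType.compareTo("yaml") would throw if the config key is unset.
if ("yaml".equalsIgnoreCase(schemaFileType)) {
  result = NodeSchemaLoader.getInstance().loadSchemaFromYaml(schemaFile);
} else {
  result = NodeSchemaLoader.getInstance().loadSchemaFromFile(schemaFile);
}
```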


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-04-03 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-16161:
-
Attachment: HADOOP-16161.009.patch

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch, HADOOP-16161.004.patch, HADOOP-16161.005.patch, 
> HADOOP-16161.006.patch, HADOOP-16161.007.patch, HADOOP-16161.008.patch, 
> HADOOP-16161.009.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves, with a topology like:
> Rack: /IDC/RACK1
>   hostname1
>   hostname2
> Rack: /IDC/RACK2
>   hostname3
>   hostname4
> 2. A reader on hostname1 computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> no rack of the topology, computes the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is obviously not the expected value; the truth 
> is [4,4,4]. This issue may cause a reader not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add the constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network 
topology from yaml file
URL: https://github.com/apache/hadoop/pull/661#discussion_r272008319
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaLoader.java
 ##
 @@ -165,6 +169,81 @@ private NodeSchemaLoadResult loadSchema(File schemaFile) 
throws
 return schemaList;
   }
 
+  /**
+   * Load user defined network layer schemas from a YAML configuration file.
+   * @param schemaFilePath path of schema file
+   * @return all valid node schemas defined in schema file
+   */
+  public NodeSchemaLoadResult loadSchemaFromYaml(String schemaFilePath)
+  throws IllegalArgumentException {
+try {
+  File schemaFile = new File(schemaFilePath);
+  if (!schemaFile.exists()) {
+String msg = "Network topology layer schema file " + schemaFilePath +
+" is not found.";
+LOG.warn(msg);
+throw new IllegalArgumentException(msg);
+  }
+  return loadSchemaFromYaml(schemaFile);
+} catch (Exception e) {
+  throw new IllegalArgumentException("Fail to load network topology node"
+  + " schema file: " + schemaFilePath + " , error:" + 
e.getMessage());
+}
+  }
+
+  /**
+   * Load network topology layer schemas from a YAML configuration file.
+   * @param schemaFile schema file
+   * @return all valid node schemas defined in schema file
+   * @throws ParserConfigurationException ParserConfigurationException happen
+   * @throws IOException no such schema file
+   * @throws SAXException xml file has some invalid elements
+   * @throws IllegalArgumentException xml file content is logically invalid
+   */
+  private NodeSchemaLoadResult loadSchemaFromYaml(File schemaFile) {
+LOG.info("Loading network topology layer schema file " + schemaFile);
+NodeSchemaLoadResult finalSchema;
+
+try {
+  FileInputStream fileInputStream = new FileInputStream(schemaFile);
 
 Review comment:
   Can we use try-with-resources to ensure the FileInputStream is closed 
properly even if an Exception is thrown on line 210?
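   
   Something along these lines (a sketch of the suggestion; the parsing body is elided):
   
```java
// The stream is closed automatically, even if the parsing below throws.
try (FileInputStream fileInputStream = new FileInputStream(schemaFile)) {
  // ... parse the YAML schema from fileInputStream ...
}
```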


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network 
topology from yaml file
URL: https://github.com/apache/hadoop/pull/661#discussion_r272008319
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaLoader.java
 ##
 @@ -165,6 +169,81 @@ private NodeSchemaLoadResult loadSchema(File schemaFile) 
throws
 return schemaList;
   }
 
+  /**
+   * Load user defined network layer schemas from a YAML configuration file.
+   * @param schemaFilePath path of schema file
+   * @return all valid node schemas defined in schema file
+   */
+  public NodeSchemaLoadResult loadSchemaFromYaml(String schemaFilePath)
+  throws IllegalArgumentException {
+try {
+  File schemaFile = new File(schemaFilePath);
+  if (!schemaFile.exists()) {
+String msg = "Network topology layer schema file " + schemaFilePath +
+" is not found.";
+LOG.warn(msg);
+throw new IllegalArgumentException(msg);
+  }
+  return loadSchemaFromYaml(schemaFile);
+} catch (Exception e) {
+  throw new IllegalArgumentException("Fail to load network topology node"
+  + " schema file: " + schemaFilePath + " , error:" + 
e.getMessage());
+}
+  }
+
+  /**
+   * Load network topology layer schemas from a YAML configuration file.
+   * @param schemaFile schema file
+   * @return all valid node schemas defined in schema file
+   * @throws ParserConfigurationException ParserConfigurationException happen
+   * @throws IOException no such schema file
+   * @throws SAXException xml file has some invalid elements
+   * @throws IllegalArgumentException xml file content is logically invalid
+   */
+  private NodeSchemaLoadResult loadSchemaFromYaml(File schemaFile) {
+LOG.info("Loading network topology layer schema file " + schemaFile);
+NodeSchemaLoadResult finalSchema;
+
+try {
+  FileInputStream fileInputStream = new FileInputStream(schemaFile);
 
 Review comment:
   Can we use try-with-resources to ensure the FileInputStream is closed 
properly?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network topology from yaml file

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #661: HDDS-976: Parse network 
topology from yaml file
URL: https://github.com/apache/hadoop/pull/661#discussion_r272008223
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaLoader.java
 ##
 @@ -165,6 +169,81 @@ private NodeSchemaLoadResult loadSchema(File schemaFile) 
throws
 return schemaList;
   }
 
+  /**
+   * Load user defined network layer schemas from a YAML configuration file.
+   * @param schemaFilePath path of schema file
+   * @return all valid node schemas defined in schema file
+   */
+  public NodeSchemaLoadResult loadSchemaFromYaml(String schemaFilePath)
+  throws IllegalArgumentException {
+try {
+  File schemaFile = new File(schemaFilePath);
+  if (!schemaFile.exists()) {
+String msg = "Network topology layer schema file " + schemaFilePath +
+" is not found.";
+LOG.warn(msg);
+throw new IllegalArgumentException(msg);
+  }
+  return loadSchemaFromYaml(schemaFile);
+} catch (Exception e) {
+  throw new IllegalArgumentException("Fail to load network topology node"
+  + " schema file: " + schemaFilePath + " , error:" + 
e.getMessage());
+}
+  }
+
+  /**
+   * Load network topology layer schemas from a YAML configuration file.
+   * @param schemaFile schema file
+   * @return all valid node schemas defined in schema file
+   * @throws ParserConfigurationException ParserConfigurationException happen
+   * @throws IOException no such schema file
+   * @throws SAXException xml file has some invalid elements
+   * @throws IllegalArgumentException xml file content is logically invalid
+   */
+  private NodeSchemaLoadResult loadSchemaFromYaml(File schemaFile) {
+LOG.info("Loading network topology layer schema file " + schemaFile);
 
 Review comment:
   NIT: can we use parameterized logging like below
   LOG.info("Loading network topology layer schema file {}", schemaFile);


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16227) Upgrade checkstyle to 8.19

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809462#comment-16809462
 ] 

Hadoop QA commented on HADOOP-16227:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
58m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16227 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964516/HADOOP-16227.001.patch
 |
| Optional Tests |  dupname  asflicense  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  |
| uname | Linux 43ef97fec3fc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7b5b783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16117/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16117/testReport/ |
| Max. process+thread count | 1427 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16117/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade checkstyle to 8.19

[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809432#comment-16809432
 ] 

Hadoop QA commented on HADOOP-16208:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964792/HADOOP-16208.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06dccbb717b2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7b5b783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16116/testReport/ |
| Max. process+thread count | 1629 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16116/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Do Not Log InterruptedException in Client

[jira] [Reopened] (HADOOP-10848) Cleanup calling of sun.security.krb5.Config

2019-04-03 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-10848:


Now I'm trying to run HDFS on Java 11 and faced the following warnings:
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.apache.hadoop.security.authentication.util.KerberosUtil 
(file:/opt/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-3.3.0-SNAPSHOT.jar)
 to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of 
org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}
Reopening this because it's better to fix these warnings.

> Cleanup calling of sun.security.krb5.Config
> ---
>
> Key: HADOOP-10848
> URL: https://issues.apache.org/jira/browse/HADOOP-10848
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Priority: Minor
>
> As Max (Oracle) told us, JDK9 is likely to block all access to sun.* 
> classes.
> In 
> ./hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java,
> the getDefaultRealm() method of sun.security.krb5.Config is called to 
> get the default Kerberos realm. Oracle proposed removing the call in favor of:
> {code}
> new 
> javax.security.auth.kerberos.KerberosPrincipal("dummy").toString().split("@")[1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-04-03 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809420#comment-16809420
 ] 

lqjacklee commented on HADOOP-15961:


[~ste...@apache.org] I am working on this task, thanks. 

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough 
> data to the localfs that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.
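
A hedged sketch of that direction (hypothetical names, not the actual committer code):

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.util.Progressable;

// Hypothetical sketch, not the actual committer code: thread a
// Progressable through the upload so long multipart uploads keep
// reporting progress after every part.
abstract class UploadSketch {
  abstract List<File> splitIntoParts(File localFile);      // stand-in helper
  abstract void uploadPart(File part) throws IOException;  // stand-in helper

  void uploadFileToPendingCommit(File localFile, Progressable progress)
      throws IOException {
    for (File part : splitIntoParts(localFile)) {
      uploadPart(part);
      progress.progress();  // keep the task alive during the upload
    }
  }
}
{code}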



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-479713962
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1011 | trunk passed |
   | +1 | compile | 963 | trunk passed |
   | +1 | checkstyle | 191 | trunk passed |
   | +1 | mvnsite | 217 | trunk passed |
   | +1 | shadedclient | 1135 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 213 | trunk passed |
   | +1 | javadoc | 169 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for patch |
   | -1 | mvninstall | 25 | integration-test in the patch failed. |
   | +1 | compile | 923 | the patch passed |
   | +1 | javac | 923 | the patch passed |
   | +1 | checkstyle | 193 | the patch passed |
   | +1 | mvnsite | 171 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 656 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 214 | the patch passed |
   | +1 | javadoc | 139 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 73 | common in the patch passed. |
   | +1 | unit | 40 | common in the patch passed. |
   | -1 | unit | 1095 | integration-test in the patch failed. |
   | +1 | unit | 50 | ozone-manager in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7661 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/651 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux abb72d223c51 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7b5b783 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/6/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/6/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/6/testReport/ |
   | Max. process+thread count | 3897 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16232) Fix errors in the checkstyle configration xmls

2019-04-03 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16232:
---
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   3.0.4

> Fix errors in the checkstyle configration xmls
> --
>
> Key: HADOOP-16232
> URL: https://issues.apache.org/jira/browse/HADOOP-16232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: newbie
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16232.001.patch, HADOOP-16232.002.patch, 
> HADOOP-16232.003.patch
>
>
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd is not found and 
> https://checkstyle.org/dtds/configuration_1_2.dtd should be used instead.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1094/artifact/out/xml.txt
> {noformat}
> hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml:
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd
>   at 
> jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:397)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:449)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:406)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:402)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.eval(NashornScriptEngine.java:155)
>   at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
>   at com.sun.tools.script.shell.Main.evaluateString(Main.java:298)
>   at com.sun.tools.script.shell.Main.evaluateString(Main.java:319)
>   at com.sun.tools.script.shell.Main.access$300(Main.java:37)
>   at com.sun.tools.script.shell.Main$3.run(Main.java:217)
>   at com.sun.tools.script.shell.Main.main(Main.java:48)
> Caused by: java.io.FileNotFoundException: 
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1890)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:647)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEntityManager.java:1304)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(XMLEntityManager.java:1270)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(XMLDTDScannerImpl.java:264)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(XMLDocumentScannerImpl.java:1161)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(XMLDocumentScannerImpl.java:1045)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:959)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>   at 
> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
>   at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:205)
>   at 
> jdk.nashorn.internal.scripts.Script$Recompilation$2$19313A$\^system_init\_.XMLDocument(:747)
>   at jdk.nashorn.internal.scripts.Script$1$\^string\_.:program(:1)
>   at 
> jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:637)
>   at 
> jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:494)
>   at 
> jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-04-03 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Status: Patch Available  (was: Open)

Yup.  It's embarrassing that it didn't even compile.  Sorry about that.  I am 
now posting a new patch.

> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch, HADOOP-16208.2.patch, 
> HADOOP-16208.3.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly shut the threads down via {{Thread#interrupt()}}.  At that point, I 
> get a cascade of logging from the above code and there's no easy way to turn 
> it off.
> "Log and throw" is generally frowned upon; just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow
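
A sketch of the suggested direction (not necessarily the committed patch): restore the interrupt status and propagate without logging.

{code:java}
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();  // preserve the interrupt status
  // No LOG.warn here: the caller receives the exception and decides
  // whether and how to surface it.
  throw new IOException(e);
}
{code}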



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-04-03 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Status: Open  (was: Patch Available)

> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch, HADOOP-16208.2.patch, 
> HADOOP-16208.3.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
> cascade of logging from the above code and there's no easy way to turn it 
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-04-03 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Attachment: HADOOP-16208.3.patch

> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch, HADOOP-16208.2.patch, 
> HADOOP-16208.3.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
> cascade of logging from the above code and there's no easy way to turn it 
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271976741
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
 
 Review comment:
   Will update it


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271976703
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeOwnerChangeResponse.java
 ##
 @@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.VolumeList;
+
+/**
+ * OM response for owner change request for a ozone volume.
+ */
+public class OmVolumeOwnerChangeResponse {
+  private VolumeList originalOwnerVolumeList;
 
 Review comment:
   I think you have already figured this out, but adding my response here.
   These fields were added because we don't want to read the OM DB during 
applyTransaction; in applyTransaction the change is simply applied to the OM DB 
(like a put call or a commit-batch call). If we don't return these values, we 
would need to read the OM DB again in applyTransaction. 
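   A toy model of the start/apply split described above, illustrative only 
(hypothetical names, not the actual patch):
   
```java
import java.util.HashMap;
import java.util.Map;

// Start phase does all the reads; apply phase is a pure write that reuses
// the state returned by start, so no second read of the OM DB is needed.
public class TwoPhaseOwnerChange {
  private final Map<String, String> db = new HashMap<>(); // stand-in for OM DB

  // Start phase: the one and only DB read, plus validation.
  public String startSetOwner(String volume, String newOwner) {
    String oldOwner = db.get(volume);
    if (oldOwner == null) {
      throw new IllegalStateException("volume not found: " + volume);
    }
    // The real patch would also return the old/new owner volume lists here.
    return newOwner;
  }

  // Apply phase: a pure put using the precomputed state; no reads.
  public void applySetOwner(String volume, String precomputedOwner) {
    db.put(volume, precomputedOwner);
  }
}
```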


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271976232
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
 ##
 @@ -166,6 +170,10 @@ public boolean createOzoneVolumeIfNeeded(String userName)
   .setVolume(ozoneVolumeName)
   .setQuotaInBytes(OzoneConsts.MAX_QUOTA_IN_BYTES)
   .build();
+  if (isRatisEnabled) {
 
 Review comment:
   S3 bucket create internally calls createVolume. Since we have not yet 
separated S3 createBucket into two phases, we need to do this (createBucket is 
currently called in applyTransaction). When Ratis is enabled, createVolume does 
not apply the change to the OM DB, so we need to call apply here as well.
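   A rough sketch of the flow this describes, with hypothetical fields and 
method shapes (not the actual patch):
   
```java
// Start phase: validates and, when Ratis is off, also commits to the OM DB.
VolumeList volumeList = volumeManager.createVolume(omVolumeArgs);
if (isRatisEnabled) {
  // With Ratis on, createVolume skips the DB commit, so apply it explicitly;
  // otherwise the S3 bucket create (done in applyTransaction) would not find
  // the backing volume.
  volumeManager.applyCreateVolume(omVolumeArgs, volumeList);
}
```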


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271976281
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
+checkAcls(ResourceType.VOLUME, StoreType.OZONE,
+ACLType.CREATE, args.getVolume(), null, null);
+  }
+  VolumeList volumeList = volumeManager.createVolume(args);
+  return volumeList;
+} catch (Exception ex) {
+  throw ex;
 
 Review comment:
   Will update it


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271975878
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 ##
 @@ -363,6 +365,199 @@ public OMResponse handle(OMRequest request) {
 return responseBuilder.build();
   }
 
+  @Override
+  public OMRequest handleStartTransaction(OMRequest omRequest)
+  throws IOException {
+LOG.debug("Received OMRequest: {}, ", omRequest);
+Type cmdType = omRequest.getCmdType();
+OMRequest newOmRequest = null;
+try {
+  switch (cmdType) {
+  case CreateVolume:
+newOmRequest = handleCreateVolumeStart(omRequest);
+break;
+  case SetVolumeProperty:
+newOmRequest = handleSetVolumePropertyStart(omRequest);
+break;
+  case DeleteVolume:
+newOmRequest = handleDeleteVolumeStart(omRequest);
+break;
+  default:
+new OMException("Unrecognized Command Type:" + cmdType,
+OMException.ResultCodes.INVALID_REQUEST);
+  }
+} catch (IOException ex) {
+  throw ex;
+}
+return newOmRequest;
+  }
+
+
+  @Override
+  public OMResponse handleApplyTransaction(OMRequest omRequest) {
+LOG.debug("Received OMRequest: {}, ", omRequest);
+Type cmdType = omRequest.getCmdType();
+OMResponse.Builder responseBuilder = OMResponse.newBuilder()
+.setCmdType(cmdType)
+.setStatus(Status.OK);
+try {
+  switch (cmdType) {
+  case CreateVolume:
+responseBuilder.setCreateVolumeResponse(
+handleCreateVolumeApply(omRequest));
+break;
+  case SetVolumeProperty:
+responseBuilder.setSetVolumePropertyResponse(
+handleSetVolumePropertyApply(omRequest));
+break;
+  case DeleteVolume:
+responseBuilder.setDeleteVolumeResponse(
+handleDeleteVolumeApply(omRequest));
+break;
+  default:
+// As all request types are not changed so we need to call handle
+// here.
+return handle(omRequest);
+  }
+  responseBuilder.setSuccess(true);
+} catch (IOException ex) {
+  responseBuilder.setSuccess(false);
+  responseBuilder.setStatus(exceptionToResponseStatus(ex));
+  if (ex.getMessage() != null) {
+responseBuilder.setMessage(ex.getMessage());
+  }
+}
+return responseBuilder.build();
+  }
+
+
+  private OMRequest handleCreateVolumeStart(OMRequest omRequest)
+  throws IOException {
+try {
+  OzoneManagerProtocolProtos.VolumeInfo volumeInfo =
+  omRequest.getCreateVolumeRequest().getVolumeInfo();
+  OzoneManagerProtocolProtos.VolumeList volumeList =
+  impl.startCreateVolume(OmVolumeArgs.getFromProtobuf(volumeInfo));
+
+  CreateVolumeRequest createVolumeRequest =
+  CreateVolumeRequest.newBuilder().setVolumeInfo(volumeInfo)
+  .setVolumeList(volumeList).build();
+  return omRequest.toBuilder().setCreateVolumeRequest(createVolumeRequest)
+  .build();
+} catch (IOException ex) {
+  throw ex;
+}
+  }
+
+  private CreateVolumeResponse handleCreateVolumeApply(OMRequest omRequest)
+  throws IOException {
+try {
+  OzoneManagerProtocolProtos.VolumeInfo volumeInfo =
+  omRequest.getCreateVolumeRequest().getVolumeInfo();
+  OzoneManagerProtocolProtos.VolumeList volumeList =
+  omRequest.getCreateVolumeRequest().getVolumeList();
+  impl.applyCreateVolume(OmVolumeArgs.getFromProtobuf(volumeInfo),
+  volumeList);
+} catch (IOException ex) {
+  throw ex;
+}
+return CreateVolumeResponse.newBuilder().build();
+  }
+
+  private OMRequest handleSetVolumePropertyStart(OMRequest omRequest)
+  throws IOException {
+SetVolumePropertyRequest setVolumePropertyRequest =
+omRequest.getSetVolumePropertyRequest();
+String volume = setVolumePropertyRequest.getVolumeName();
+OMRequest newOmRequest = null;
+if (setVolumePropertyRequest.hasQuotaInBytes()) {
+  long quota = setVolumePropertyRequest.getQuotaInBytes();
+  OmVolumeArgs omVolumeArgs = impl.startSetQuota(volume, quota);
+  SetVolumePropertyRequest newSetVolumePropertyRequest =
+  SetVolumePropertyRequest.newBuilder().setVolumeName(volume)
+  .setVolumeInfo(omVolumeArgs.getProtobuf()).build();
+  newOmRequest =
+  omRequest.toBuilder().setSetVolumePropertyRequest(
+  newSetVolumePropertyRequest).build();
+} else {
+  String owner = setVolumePropertyRequest.getOwnerName();
+  OmVolumeOwnerChangeResponse omVolumeOwnerChangeResponse =
+  impl.startSetOwner(volume, owner);
+  // If volumeLists become large and when writing to disk we might take
+  // more space if the lists become very big in size. We might need to

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271975546
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -322,28 +409,56 @@ public void deleteVolume(String volume) throws 
IOException {
   Preconditions.checkState(volume.equals(volumeArgs.getVolume()));
   // delete the volume from the owner list
   // as well as delete the volume entry
-  try (BatchOperation batch = metadataManager.getStore()
-  .initBatchOperation()) {
-delVolumeFromOwnerList(volume, volumeArgs.getOwnerName(), batch);
-metadataManager.getVolumeTable().deleteWithBatch(batch, dbVolumeKey);
-metadataManager.getStore().commitBatchOperation(batch);
+  VolumeList newVolumeList = delVolumeFromOwnerList(volume,
+  volumeArgs.getOwnerName());
+
+  if (!isRatisEnabled) {
+deleteVolumeCommitToDB(newVolumeList,
+volume, owner);
   }
-} catch (RocksDBException| IOException ex) {
+  return new OmDeleteVolumeResponse(volume, owner, newVolumeList);
+} catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Delete volume failed for volume:{}", volume, ex);
   }
-  if(ex instanceof RocksDBException) {
-throw RocksDBStore.toIOException("Volume creation failed.",
-(RocksDBException) ex);
-  } else {
-throw (IOException) ex;
-  }
+  throw ex;
 } finally {
   metadataManager.getLock().releaseVolumeLock(volume);
   metadataManager.getLock().releaseUserLock(owner);
 }
   }
 
+  @Override
+  public void applyDeleteVolume(String volume, String owner,
+  VolumeList newVolumeList) throws IOException {
+try {
+  deleteVolumeCommitToDB(newVolumeList, volume, owner);
+} catch (IOException ex) {
+  LOG.error("Delete volume failed for volume:{}", volume,
+  ex);
+  throw ex;
+}
+  }
+
+  private void deleteVolumeCommitToDB(VolumeList newVolumeList,
+  String volume, String owner) throws IOException {
+try (BatchOperation batch = metadataManager.getStore()
+.initBatchOperation()) {
+  String dbUserKey = metadataManager.getUserKey(owner);
 
 Review comment:
   This is not a DB read; this method just applies "/" before the volume name.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271975546
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -322,28 +409,56 @@ public void deleteVolume(String volume) throws 
IOException {
   Preconditions.checkState(volume.equals(volumeArgs.getVolume()));
   // delete the volume from the owner list
   // as well as delete the volume entry
-  try (BatchOperation batch = metadataManager.getStore()
-  .initBatchOperation()) {
-delVolumeFromOwnerList(volume, volumeArgs.getOwnerName(), batch);
-metadataManager.getVolumeTable().deleteWithBatch(batch, dbVolumeKey);
-metadataManager.getStore().commitBatchOperation(batch);
+  VolumeList newVolumeList = delVolumeFromOwnerList(volume,
+  volumeArgs.getOwnerName());
+
+  if (!isRatisEnabled) {
+deleteVolumeCommitToDB(newVolumeList,
+volume, owner);
   }
-} catch (RocksDBException| IOException ex) {
+  return new OmDeleteVolumeResponse(volume, owner, newVolumeList);
+} catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Delete volume failed for volume:{}", volume, ex);
   }
-  if(ex instanceof RocksDBException) {
-throw RocksDBStore.toIOException("Volume creation failed.",
-(RocksDBException) ex);
-  } else {
-throw (IOException) ex;
-  }
+  throw ex;
 } finally {
   metadataManager.getLock().releaseVolumeLock(volume);
   metadataManager.getLock().releaseUserLock(owner);
 }
   }
 
+  @Override
+  public void applyDeleteVolume(String volume, String owner,
+  VolumeList newVolumeList) throws IOException {
+try {
+  deleteVolumeCommitToDB(newVolumeList, volume, owner);
+} catch (IOException ex) {
+  LOG.error("Delete volume failed for volume:{}", volume,
+  ex);
+  throw ex;
+}
+  }
+
+  private void deleteVolumeCommitToDB(VolumeList newVolumeList,
+  String volume, String owner) throws IOException {
+try (BatchOperation batch = metadataManager.getStore()
+.initBatchOperation()) {
+  String dbUserKey = metadataManager.getUserKey(owner);
 
 Review comment:
   This is not a DB read; this method returns the same user name that we 
passed in. I believe this was added for future use, in case we want to have a 
different key format for users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271975546
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -322,28 +409,56 @@ public void deleteVolume(String volume) throws 
IOException {
   Preconditions.checkState(volume.equals(volumeArgs.getVolume()));
   // delete the volume from the owner list
   // as well as delete the volume entry
-  try (BatchOperation batch = metadataManager.getStore()
-  .initBatchOperation()) {
-delVolumeFromOwnerList(volume, volumeArgs.getOwnerName(), batch);
-metadataManager.getVolumeTable().deleteWithBatch(batch, dbVolumeKey);
-metadataManager.getStore().commitBatchOperation(batch);
+  VolumeList newVolumeList = delVolumeFromOwnerList(volume,
+  volumeArgs.getOwnerName());
+
+  if (!isRatisEnabled) {
+deleteVolumeCommitToDB(newVolumeList,
+volume, owner);
   }
-} catch (RocksDBException| IOException ex) {
+  return new OmDeleteVolumeResponse(volume, owner, newVolumeList);
+} catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Delete volume failed for volume:{}", volume, ex);
   }
-  if(ex instanceof RocksDBException) {
-throw RocksDBStore.toIOException("Volume creation failed.",
-(RocksDBException) ex);
-  } else {
-throw (IOException) ex;
-  }
+  throw ex;
 } finally {
   metadataManager.getLock().releaseVolumeLock(volume);
   metadataManager.getLock().releaseUserLock(owner);
 }
   }
 
+  @Override
+  public void applyDeleteVolume(String volume, String owner,
+  VolumeList newVolumeList) throws IOException {
+try {
+  deleteVolumeCommitToDB(newVolumeList, volume, owner);
+} catch (IOException ex) {
+  LOG.error("Delete volume failed for volume:{}", volume,
+  ex);
+  throw ex;
+}
+  }
+
+  private void deleteVolumeCommitToDB(VolumeList newVolumeList,
+  String volume, String owner) throws IOException {
+try (BatchOperation batch = metadataManager.getStore()
+.initBatchOperation()) {
+  String dbUserKey = metadataManager.getUserKey(owner);
 
 Review comment:
   This is not a DB read; the method just returns the same user name that we 
passed in. I believe this was added for future use, in case we want to have a 
different key format for users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #689: HDDS-1379. Convert 
all OM Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271975519
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -255,6 +328,19 @@ public void setQuota(String volume, long quota) throws 
IOException {
 }
   }
 
+  @Override
+  public void applySetQuota(OmVolumeArgs omVolumeArgs) throws IOException {
+try {
+  String dbVolumeKey = metadataManager.getVolumeKey(
 
 Review comment:
   This is not a DB read; this method just applies "/" before the volume name.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271970646
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
 
 Review comment:
   Nitpick: a space is needed between `if` and `(`.
   
   This needs to be fixed in multiple places.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271971359
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 ##
 @@ -363,6 +365,199 @@ public OMResponse handle(OMRequest request) {
 return responseBuilder.build();
   }
 
+  @Override
+  public OMRequest handleStartTransaction(OMRequest omRequest)
+  throws IOException {
+LOG.debug("Received OMRequest: {}, ", omRequest);
+Type cmdType = omRequest.getCmdType();
+OMRequest newOmRequest = null;
+try {
+  switch (cmdType) {
+  case CreateVolume:
+newOmRequest = handleCreateVolumeStart(omRequest);
+break;
+  case SetVolumeProperty:
+newOmRequest = handleSetVolumePropertyStart(omRequest);
+break;
+  case DeleteVolume:
+newOmRequest = handleDeleteVolumeStart(omRequest);
+break;
+  default:
+new OMException("Unrecognized Command Type:" + cmdType,
+OMException.ResultCodes.INVALID_REQUEST);
+  }
+} catch (IOException ex) {
+  throw ex;
+}
+return newOmRequest;
+  }
+
+
+  @Override
+  public OMResponse handleApplyTransaction(OMRequest omRequest) {
+LOG.debug("Received OMRequest: {}, ", omRequest);
+Type cmdType = omRequest.getCmdType();
+OMResponse.Builder responseBuilder = OMResponse.newBuilder()
+.setCmdType(cmdType)
+.setStatus(Status.OK);
+try {
+  switch (cmdType) {
+  case CreateVolume:
+responseBuilder.setCreateVolumeResponse(
+handleCreateVolumeApply(omRequest));
+break;
+  case SetVolumeProperty:
+responseBuilder.setSetVolumePropertyResponse(
+handleSetVolumePropertyApply(omRequest));
+break;
+  case DeleteVolume:
+responseBuilder.setDeleteVolumeResponse(
+handleDeleteVolumeApply(omRequest));
+break;
+  default:
+// As all request types are not changed so we need to call handle
+// here.
+return handle(omRequest);
+  }
+  responseBuilder.setSuccess(true);
+} catch (IOException ex) {
+  responseBuilder.setSuccess(false);
+  responseBuilder.setStatus(exceptionToResponseStatus(ex));
+  if (ex.getMessage() != null) {
+responseBuilder.setMessage(ex.getMessage());
+  }
+}
+return responseBuilder.build();
+  }
+
+
+  private OMRequest handleCreateVolumeStart(OMRequest omRequest)
+  throws IOException {
+try {
+  OzoneManagerProtocolProtos.VolumeInfo volumeInfo =
+  omRequest.getCreateVolumeRequest().getVolumeInfo();
+  OzoneManagerProtocolProtos.VolumeList volumeList =
+  impl.startCreateVolume(OmVolumeArgs.getFromProtobuf(volumeInfo));
+
+  CreateVolumeRequest createVolumeRequest =
+  CreateVolumeRequest.newBuilder().setVolumeInfo(volumeInfo)
+  .setVolumeList(volumeList).build();
+  return omRequest.toBuilder().setCreateVolumeRequest(createVolumeRequest)
+  .build();
+} catch (IOException ex) {
+  throw ex;
+}
+  }
+
+  private CreateVolumeResponse handleCreateVolumeApply(OMRequest omRequest)
+  throws IOException {
+try {
+  OzoneManagerProtocolProtos.VolumeInfo volumeInfo =
+  omRequest.getCreateVolumeRequest().getVolumeInfo();
+  OzoneManagerProtocolProtos.VolumeList volumeList =
+  omRequest.getCreateVolumeRequest().getVolumeList();
+  impl.applyCreateVolume(OmVolumeArgs.getFromProtobuf(volumeInfo),
+  volumeList);
+} catch (IOException ex) {
+  throw ex;
+}
+return CreateVolumeResponse.newBuilder().build();
+  }
+
+  private OMRequest handleSetVolumePropertyStart(OMRequest omRequest)
+  throws IOException {
+SetVolumePropertyRequest setVolumePropertyRequest =
+omRequest.getSetVolumePropertyRequest();
+String volume = setVolumePropertyRequest.getVolumeName();
+OMRequest newOmRequest = null;
+if (setVolumePropertyRequest.hasQuotaInBytes()) {
+  long quota = setVolumePropertyRequest.getQuotaInBytes();
+  OmVolumeArgs omVolumeArgs = impl.startSetQuota(volume, quota);
+  SetVolumePropertyRequest newSetVolumePropertyRequest =
+  SetVolumePropertyRequest.newBuilder().setVolumeName(volume)
+  .setVolumeInfo(omVolumeArgs.getProtobuf()).build();
+  newOmRequest =
+  omRequest.toBuilder().setSetVolumePropertyRequest(
+  newSetVolumePropertyRequest).build();
+} else {
+  String owner = setVolumePropertyRequest.getOwnerName();
+  OmVolumeOwnerChangeResponse omVolumeOwnerChangeResponse =
+  impl.startSetOwner(volume, owner);
+  // If volumeLists become large and when writing to disk we might take
+  // more space if the lists become very big in size. We might need to
+  // 

[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271971191
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -255,6 +328,19 @@ public void setQuota(String volume, long quota) throws 
IOException {
 }
   }
 
+  @Override
+  public void applySetQuota(OmVolumeArgs omVolumeArgs) throws IOException {
+try {
+  String dbVolumeKey = metadataManager.getVolumeKey(
 
 Review comment:
   We should not need to do a read in applySetQuota. Can we pass the 
dbVolumeKey from start to apply?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271970568
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
 
 Review comment:
   Let's file a follow-up JIRA. The audit log especially is critical to fix.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271970849
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
 ##
 @@ -166,6 +170,10 @@ public boolean createOzoneVolumeIfNeeded(String userName)
   .setVolume(ozoneVolumeName)
   .setQuotaInBytes(OzoneConsts.MAX_QUOTA_IN_BYTES)
   .build();
+  if (isRatisEnabled) {
 
 Review comment:
   Why do we need to call applyCreateVolume separately? Is this being pushed 
through Ratis consensus correctly?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271970445
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeOwnerChangeResponse.java
 ##
 @@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.VolumeList;
+
+/**
+ * OM response for owner change request for a ozone volume.
+ */
+public class OmVolumeOwnerChangeResponse {
+  private VolumeList originalOwnerVolumeList;
 
 Review comment:
   Why does the response need to have all these fields? Still reviewing so it 
may be clearer later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271970765
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1594,6 +1598,107 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
+try {
+  // TODO: Need to add metrics and Audit log for HA requests
+  if(isAclEnabled) {
+checkAcls(ResourceType.VOLUME, StoreType.OZONE,
+ACLType.CREATE, args.getVolume(), null, null);
+  }
+  VolumeList volumeList = volumeManager.createVolume(args);
+  return volumeList;
+} catch (Exception ex) {
+  throw ex;
 
 Review comment:
   Don't catch and rethrow the same exception; this is an anti-pattern. It 
needs to be fixed in multiple places across different files.
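   For instance, the startCreateVolume hunk quoted above could collapse to 
something like this sketch (same logic, minus the catch-and-rethrow):
   
```java
@Override
public VolumeList startCreateVolume(OmVolumeArgs args) throws IOException {
  // TODO: Need to add metrics and Audit log for HA requests
  if (isAclEnabled) {
    checkAcls(ResourceType.VOLUME, StoreType.OZONE,
        ACLType.CREATE, args.getVolume(), null, null);
  }
  // Let exceptions propagate; there is nothing useful to do in a catch here.
  return volumeManager.createVolume(args);
}
```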


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
arp7 commented on a change in pull request #689: HDDS-1379. Convert all OM 
Volume related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#discussion_r271971268
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -322,28 +409,56 @@ public void deleteVolume(String volume) throws 
IOException {
   Preconditions.checkState(volume.equals(volumeArgs.getVolume()));
   // delete the volume from the owner list
   // as well as delete the volume entry
-  try (BatchOperation batch = metadataManager.getStore()
-  .initBatchOperation()) {
-delVolumeFromOwnerList(volume, volumeArgs.getOwnerName(), batch);
-metadataManager.getVolumeTable().deleteWithBatch(batch, dbVolumeKey);
-metadataManager.getStore().commitBatchOperation(batch);
+  VolumeList newVolumeList = delVolumeFromOwnerList(volume,
+  volumeArgs.getOwnerName());
+
+  if (!isRatisEnabled) {
+deleteVolumeCommitToDB(newVolumeList,
+volume, owner);
   }
-} catch (RocksDBException| IOException ex) {
+  return new OmDeleteVolumeResponse(volume, owner, newVolumeList);
+} catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Delete volume failed for volume:{}", volume, ex);
   }
-  if(ex instanceof RocksDBException) {
-throw RocksDBStore.toIOException("Volume creation failed.",
-(RocksDBException) ex);
-  } else {
-throw (IOException) ex;
-  }
+  throw ex;
 } finally {
   metadataManager.getLock().releaseVolumeLock(volume);
   metadataManager.getLock().releaseUserLock(owner);
 }
   }
 
+  @Override
+  public void applyDeleteVolume(String volume, String owner,
+  VolumeList newVolumeList) throws IOException {
+try {
+  deleteVolumeCommitToDB(newVolumeList, volume, owner);
+} catch (IOException ex) {
+  LOG.error("Delete volume failed for volume:{}", volume,
+  ex);
+  throw ex;
+}
+  }
+
+  private void deleteVolumeCommitToDB(VolumeList newVolumeList,
+  String volume, String owner) throws IOException {
+try (BatchOperation batch = metadataManager.getStore()
+.initBatchOperation()) {
+  String dbUserKey = metadataManager.getUserKey(owner);
 
 Review comment:
   Same. Can we pass the userKey from start to apply?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-04-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809347#comment-16809347
 ] 

Íñigo Goiri commented on HADOOP-16161:
--

{{assertEquals()}} should take the expected value as the first parameter and 
the actual value being checked as the second, so it should look like:
{code}
assertEquals(DatanodeInfo.AdminStates.DECOMMISSIONED,
sortedLocs[sortedLocs.length - 2].getAdminState());
assertEquals(totalDNs, sortedLocs2.length);
{code}

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch, HADOOP-16161.004.patch, HADOOP-16161.005.patch, 
> HADOOP-16161.006.patch, HADOOP-16161.007.patch, HADOOP-16161.008.patch
>
>
> Consider the following scenario:
> 1. there are 4 slaves and topology like:
> Rack: /IDC/RACK1
>hostname1
>hostname2
> Rack: /IDC/RACK2
>hostname3
>hostname4
> 2. A reader on hostname1 calculates the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> none of the topology's racks, calculates the weight between itself and 
> [hostname1, hostname3, hostname4] via #getWeightUsingNetworkLocation; the 
> corresponding values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is clearly not the expected value; the truth is 
> [4,4,4]. This issue may cause readers to not actually follow the ordering 
> local -> local rack -> remote rack. 
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add a constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.
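A rough illustration of the proposed correction (a sketch, not the actual 
NetworkTopology code; weight scale per the scenario above: 0 = local node, 
2 = same rack, 4 = remote rack):

{code:java}
static int getWeightUsingNetworkLocation(String readerRack, String nodeRack) {
  int rackDistance = readerRack.equals(nodeRack) ? 0 : 2;
  // Rack-to-rack distance misses the final hop down to the host, so add a
  // constant 2: same rack -> 2, remote rack -> 4, matching #getWeight.
  return rackDistance + 2;
}
{code}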



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #676: HDDS-1324. TestOzoneManagerHA tests are flaky

2019-04-03 Thread GitBox
bharatviswa504 merged pull request #676: HDDS-1324. TestOzoneManagerHA tests 
are flaky
URL: https://github.com/apache/hadoop/pull/676
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-479688829
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1014 | trunk passed |
   | +1 | compile | 951 | trunk passed |
   | +1 | checkstyle | 189 | trunk passed |
   | -1 | mvnsite | 28 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1015 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 37 | ozone-manager in trunk failed. |
   | +1 | javadoc | 166 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | +1 | compile | 904 | the patch passed |
   | +1 | javac | 904 | the patch passed |
   | +1 | checkstyle | 195 | the patch passed |
   | +1 | mvnsite | 190 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 224 | the patch passed |
   | +1 | javadoc | 162 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 91 | common in the patch passed. |
   | +1 | unit | 47 | common in the patch passed. |
   | -1 | unit | 1162 | integration-test in the patch failed. |
   | +1 | unit | 60 | ozone-manager in the patch passed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7584 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/651 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux b30eca944121 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 366186d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/testReport/ |
   | Max. process+thread count | 4333 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #676: HDDS-1324. TestOzoneManagerHA tests are flaky

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #676: HDDS-1324. 
TestOzoneManagerHA tests are flaky
URL: https://github.com/apache/hadoop/pull/676#discussion_r271960508
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 ##
 @@ -222,7 +221,7 @@ public void testMultipartUploadWithOneOmNodeDown() throws 
Exception {
 // Stop one of the ozone manager, to see when the OM leader changes
 // multipart upload is happening successfully or not.
 cluster.stopOzoneManager(leaderOMNodeId);
-
+Thread.sleep(NODE_FAILURE_TIMEOUT * 2);
 
 Review comment:
   Question: why is this sleep required?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809327#comment-16809327
 ] 

Steve Loughran commented on HADOOP-16210:
-

+1 for trunk-only for now, even if it complicates my IDE builds whenever I 
switch branches. Let's get it stable. I did mark the branch it's committed to 
in the fix version, though; as new ones go in, we can update it.

> Update guava to 27.0-jre in hadoop-project trunk
> 
>
> Key: HADOOP-16210
> URL: https://issues.apache.org/jira/browse/HADOOP-16210
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-16210.001.patch, 
> HADOOP-16210.002.findbugsfix.wip.patch, HADOOP-16210.002.patch, 
> HADOOP-16210.003.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a newly found 
> CVE, CVE-2018-10237.
> This is a sub-task for trunk from HADOOP-15960 to track issues with that 
> particular branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16210:

Fix Version/s: 3.3.0

> Update guava to 27.0-jre in hadoop-project trunk
> 
>
> Key: HADOOP-16210
> URL: https://issues.apache.org/jira/browse/HADOOP-16210
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-16210.001.patch, 
> HADOOP-16210.002.findbugsfix.wip.patch, HADOOP-16210.002.patch, 
> HADOOP-16210.003.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a newly found 
> CVE, CVE-2018-10237.
> This is a sub-task for trunk from HADOOP-15960 to track issues with that 
> particular branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #674: HADOOP-16210. Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread GitBox
steveloughran commented on issue #674: HADOOP-16210. Update guava to 27.0-jre 
in hadoop-project trunk
URL: https://github.com/apache/hadoop/pull/674#issuecomment-479681091
 
 
   Sean, there's a "close" button down below. If you don't see it, you may not 
be completely wired up, ASF-permissions-wise.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-04-03 Thread GitBox
xiaoyuyao commented on a change in pull request #653: HDDS-1333. 
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
classes
URL: https://github.com/apache/hadoop/pull/653#discussion_r271955896
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/env-compose.robot
 ##
 @@ -13,4 +13,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-org.apache.hadoop.fs.ozone.OzoneFileSystem
 
 Review comment:
   Can you elaborate on why this is removed/renamed? 
   
   This seems to break the MR use case, where the node manager running against 
o3fs gets a ClassNotFoundException on 
org.apache.hadoop.fs.ozone.OzoneFileSystem.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #621: HADOOP-16090 S3A Client to add explicit support for versioned stores.

2019-04-03 Thread GitBox
steveloughran commented on issue #621: HADOOP-16090 S3A Client to add explicit 
support for versioned stores.
URL: https://github.com/apache/hadoop/pull/621#issuecomment-479677374
 
 
   On the ASF JIRA, it's being reported that testing this for Flink 
checkpointing generates enough HEAD requests that writes are being throttled. 
DELETE calls aren't throttled, you see, whereas the poll-and-delete operation 
was doing one HEAD per entry.
   
   I wonder if we could be more minimal and only look for a fake directory 
marker in the parent, so only one HEAD per write. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run

2019-04-03 Thread GitBox
bharatviswa504 merged pull request #543: HDDS-1211. Test SCMChillMode failing 
randomly in Jenkins run
URL: https://github.com/apache/hadoop/pull/543
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 merged pull request #668: HDDS-1358 : Recon Server REST API not working as expected.

2019-04-03 Thread GitBox
arp7 merged pull request #668: HDDS-1358 : Recon Server REST API not working as 
expected.
URL: https://github.com/apache/hadoop/pull/668
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #668: HDDS-1358 : Recon Server REST API not working as expected.

2019-04-03 Thread GitBox
arp7 commented on issue #668: HDDS-1358 : Recon Server REST API not working as 
expected.
URL: https://github.com/apache/hadoop/pull/668#issuecomment-479672277
 
 
   +1 lgtm.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16011) OsSecureRandom very slow compared to other SecureRandom implementations

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809296#comment-16809296
 ] 

Hudson commented on HADOOP-16011:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16343 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16343/])
HADOOP-16011. OsSecureRandom very slow compared to other SecureRandom (weichiu: 
rev e62cbcbc83026a7af43eac6223fe53f9de963d91)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> OsSecureRandom very slow compared to other SecureRandom implementations
> ---
>
> Key: HADOOP-16011
> URL: https://issues.apache.org/jira/browse/HADOOP-16011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16011.001.patch, HADOOP-16011.002.patch, 
> MyBenchmark.java
>
>
> In looking at performance of a workload which creates a lot of short-lived 
> remote connections to a secured DN, [~philip] and I found very high system 
> CPU usage. We tracked it down to reads from /dev/random, which are incurred 
> by the DN using CryptoCodec.generateSecureRandom to generate a transient 
> session key and IV for AES encryption.
> In the case that the OpenSSL codec is not enabled, the above code falls 
> through to the JDK SecureRandom implementation, which performs reasonably. 
> However, OpenSSLCodec defaults to using OsSecureRandom, which reads all 
> random data from /dev/random rather than doing something more efficient like 
> initializing a CSPRNG from a small seed.
> I wrote a simple JMH benchmark to compare various approaches when running 
> with concurrency 10:
>  testHadoop - using CryptoCodec
>  testNewSecureRandom - using 'new SecureRandom()' each iteration
>  testSha1PrngNew - using the SHA1PRNG explicitly, new instance each iteration
>  testSha1PrngShared - using a single shared instance of SHA1PRNG
>  testSha1PrngThread - using a thread-specific instance of SHA1PRNG
> {code:java}
> Benchmark Mode  CntScore   Error  Units
> MyBenchmark.testHadoop   thrpt  1293.000  ops/s  
> [with libhadoop.so]
> MyBenchmark.testHadoop   thrpt461515.697  ops/s 
> [without libhadoop.so]
> MyBenchmark.testNewSecureRandom  thrpt 43413.640  ops/s
> MyBenchmark.testSha1PrngNew  thrpt395515.000  ops/s
> MyBenchmark.testSha1PrngShared   thrpt164488.713  ops/s
> MyBenchmark.testSha1PrngThread   thrpt   4295123.210  ops/s
> {code}
> In other words, the presence of the OpenSSL acceleration slows down this code 
> path by 356x. And, compared to the optimal (thread-local Sha1Prng) it's 3321x 
> slower.
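
For reference, the committed fix (per the changelist above) touches OpensslAesCtrCryptoCodec and core-default.xml. Below is a minimal sketch of the best-performing variant from the benchmark, a thread-local SHA1PRNG; it is illustrative only, not the patch itself:

{code:java}
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public final class ThreadLocalPrng {

  // One SHA1PRNG per thread: it is seeded once, never blocks on
  // /dev/random afterwards, and avoids lock contention on a shared
  // SecureRandom instance.
  private static final ThreadLocal<SecureRandom> PRNG =
      ThreadLocal.withInitial(() -> {
        try {
          return SecureRandom.getInstance("SHA1PRNG");
        } catch (NoSuchAlgorithmException e) {
          throw new IllegalStateException(e);
        }
      });

  private ThreadLocalPrng() {
  }

  public static void nextBytes(byte[] bytes) {
    PRNG.get().nextBytes(bytes);
  }
}
{code}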



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16233) S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16233.
-
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.0.4

> S3AFileStatus to declare that isEncrypted() is always true
> --
>
> Key: HADOOP-16233
> URL: https://issues.apache.org/jira/browse/HADOOP-16233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
>
> Some bits of yarn treat encrypted files differently than unencrypted ones, 
> and get confused by S3A (and other stores) where the permissions say "world 
> readable" but they aren't really. 
> Proposed: always declare that the file/dir is encrypted.
> Need to fix {{ITestS3AContractOpen}} to skip the encryption test, or just 
> push it down to HDFS/webhdfs, as the rule "empty files are not encrypted" 
> doesn't have to hold elsewhere



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16011) OsSecureRandom very slow compared to other SecureRandom implementations

2019-04-03 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16011:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. 
Thanks [~smeng] for the patch and [~tlipcon] for reporting the issue!

> OsSecureRandom very slow compared to other SecureRandom implementations
> ---
>
> Key: HADOOP-16011
> URL: https://issues.apache.org/jira/browse/HADOOP-16011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16011.001.patch, HADOOP-16011.002.patch, 
> MyBenchmark.java
>
>
> In looking at performance of a workload which creates a lot of short-lived 
> remote connections to a secured DN, [~philip] and I found very high system 
> CPU usage. We tracked it down to reads from /dev/random, which are incurred 
> by the DN using CryptoCodec.generateSecureRandom to generate a transient 
> session key and IV for AES encryption.
> In the case that the OpenSSL codec is not enabled, the above code falls 
> through to the JDK SecureRandom implementation, which performs reasonably. 
> However, OpenSSLCodec defaults to using OsSecureRandom, which reads all 
> random data from /dev/random rather than doing something more efficient like 
> initializing a CSPRNG from a small seed.
> I wrote a simple JMH benchmark to compare various approaches when running 
> with concurrency 10:
>  testHadoop - using CryptoCodec
>  testNewSecureRandom - using 'new SecureRandom()' each iteration
>  testSha1PrngNew - using the SHA1PRNG explicitly, new instance each iteration
>  testSha1PrngShared - using a single shared instance of SHA1PRNG
>  testSha1PrngThread - using a thread-specific instance of SHA1PRNG
> {code:java}
> Benchmark Mode  CntScore   Error  Units
> MyBenchmark.testHadoop   thrpt  1293.000  ops/s  
> [with libhadoop.so]
> MyBenchmark.testHadoop   thrpt461515.697  ops/s 
> [without libhadoop.so]
> MyBenchmark.testNewSecureRandom  thrpt 43413.640  ops/s
> MyBenchmark.testSha1PrngNew  thrpt395515.000  ops/s
> MyBenchmark.testSha1PrngShared   thrpt164488.713  ops/s
> MyBenchmark.testSha1PrngThread   thrpt   4295123.210  ops/s
> {code}
> In other words, the presence of the OpenSSL acceleration slows down this code 
> path by 356x. And, compared to the optimal (thread-local Sha1Prng) it's 3321x 
> slower.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #681: HDDS-1353 : Metrics scm_pipeline_metrics_num_pipeline_creation_failed keeps increasing because of BackgroundPipelineCreator.

2019-04-03 Thread GitBox
vivekratnavel commented on issue #681: HDDS-1353 : Metrics 
scm_pipeline_metrics_num_pipeline_creation_failed keeps increasing because of 
BackgroundPipelineCreator.
URL: https://github.com/apache/hadoop/pull/681#issuecomment-479649552
 
 
   +1 LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16233) S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809234#comment-16809234
 ] 

Steve Loughran commented on HADOOP-16233:
-

Given the +1 by Larry:

bq. Seems to me that this means of determining whether something is public or 
private is brittle since it is derived by assumptions about certain 
filesystems. I'd like to see a follow up JIRA for addressing this in a more 
explicit way to consider something public.

Yeah, maybe the PathCapabilities patch could declare whether a path supports POSIX permissions, which on, say, ABFS could change on a file-by-file basis, so you really would need to check the path, not just the FS.

Or you let apps hint which files can be considered for public sharing, with the expectation that they are world readable. Hey, maybe we could fall back: if the NM can't download a public file, it just says "let's try as the user" and converts it to private? That'd handle things like Ranger policies, object store permission policies, etc.
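
A rough sketch of that path-level probe, assuming the PathCapabilities work lands as a hasPathCapability(Path, String)-style method; the capability name below is made up:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

public final class PublicResourceCheck {

  // Hypothetical capability name, not an existing constant.
  private static final String POSIX_PERMISSIONS =
      "fs.capability.posix.permissions";

  private PublicResourceCheck() {
  }

  public static boolean treatAsPublic(FileSystem fs, Path path)
      throws IOException {
    if (!fs.hasPathCapability(path, POSIX_PERMISSIONS)) {
      // No real permissions model (e.g. S3A): don't trust "world readable".
      return false;
    }
    return fs.getFileStatus(path).getPermission()
        .getOtherAction().implies(FsAction.READ);
  }
}
{code}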

bq. Here is my +1 for the change to align S3A filesystem with those assumptions.

thanks

> S3AFileStatus to declare that isEncrypted() is always true
> --
>
> Key: HADOOP-16233
> URL: https://issues.apache.org/jira/browse/HADOOP-16233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Some bits of yarn treat encrypted files differently than unencrypted ones, 
> and get confused by S3A (and other stores) where the permissions say "world 
> readable" but they aren't really. 
> Proposed: always declare that the file/dir is encrypted.
> Need to fix {{ITestS3AContractOpen}} to skip the encryption test, or just 
> push it down to HDFS/webhdfs, as the rule "empty files are not encrypted" 
> doesn't have to hold elsewhere



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #689: HDDS-1379. Convert all OM Volume related 
operations to HA model.
URL: https://github.com/apache/hadoop/pull/689#issuecomment-479647097
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1034 | trunk passed |
   | +1 | compile | 98 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 97 | trunk passed |
   | +1 | shadedclient | 750 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 93 | trunk passed |
   | +1 | javadoc | 68 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 97 | the patch passed |
   | +1 | compile | 88 | the patch passed |
   | +1 | cc | 88 | the patch passed |
   | +1 | javac | 88 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 79 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 706 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 50 | hadoop-ozone/ozone-manager generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -1 | javadoc | 35 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 42 | ozone-manager in the patch passed. |
   | -1 | unit | 784 | integration-test in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4380 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone/ozone-manager |
   |  |  Dead store to volume in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleSetVolumePropertyApply(OzoneManagerProtocolProtos$OMRequest)
  At 
OzoneManagerRequestHandler.java:org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleSetVolumePropertyApply(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:[line 512] |
   |  |  new org.apache.hadoop.ozone.om.exceptions.OMException(String, 
OMException$ResultCodes) not thrown in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleStartTransaction(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleStartTransaction(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:[line 386] |
   | Failed junit tests | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/689 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 2fe446930f83 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d797907 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/artifact/out/new-findbugs-hadoop-ozone_ozone-manager.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-689/1/testReport/ |
   | Max. process+thread count | 3800 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | 

[jira] [Commented] (HADOOP-16233) S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809227#comment-16809227
 ] 

Hudson commented on HADOOP-16233:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16342/])
HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true 
(github: rev 366186d9990ef9059b6ac9a19ad24310d6f36d04)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestDelegatedMRJob.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/filecache/TestS3AResourceScope.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractOpen.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationIT.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


> S3AFileStatus to declare that isEncrypted() is always true
> --
>
> Key: HADOOP-16233
> URL: https://issues.apache.org/jira/browse/HADOOP-16233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Some bits of yarn treat encrypted files differently than unencrypted ones, 
> and get confused by S3A (and other stores) where the permissions say "world 
> readable" but they aren't really. 
> Proposed: always declare that the file/dir is encrypted.
> Need to fix {{ITestS3AContractOpen}} to skip the encryption test, or just 
> push it down to HDFS/webhdfs, as the rule "empty files are not encrypted" 
> doesn't have to hold elsewhere



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #685: HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread GitBox
steveloughran merged pull request #685: HADOOP-16233. S3AFileStatus to declare 
that isEncrypted() is always true
URL: https://github.com/apache/hadoop/pull/685
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 merged pull request #669: HDDS-1330 : Add a docker compose for Ozone deployment with Recon.

2019-04-03 Thread GitBox
arp7 merged pull request #669: HDDS-1330 : Add a docker compose for Ozone 
deployment with Recon.
URL: https://github.com/apache/hadoop/pull/669
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #669: HDDS-1330 : Add a docker compose for Ozone deployment with Recon.

2019-04-03 Thread GitBox
arp7 commented on issue #669: HDDS-1330 : Add a docker compose for Ozone 
deployment with Recon.
URL: https://github.com/apache/hadoop/pull/669#issuecomment-479643721
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
bharatviswa504 commented on issue #651: HDDS-1339. Implement ratis snapshots on 
OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-479643547
 
 
   +1 LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
bharatviswa504 commented on a change in pull request #651: HDDS-1339. Implement 
ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r271915226
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1617,7 +1617,7 @@
 
   
 ozone.om.ratis.snapshot.auto.trigger.threshold
-40L
+40
 
 Review comment:
   Thanks for the info. We can tweak this later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM

2019-04-03 Thread GitBox
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement 
ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r271914300
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1617,7 +1617,7 @@
 
   
 ozone.om.ratis.snapshot.auto.trigger.threshold
-40L
+40
 
 Review comment:
   I think 400k should not be too small a number. In HDFS, the default number of transactions after which a checkpoint is saved is 1M. Also, the Ratis log index is not the same as the actual transaction count; there are a lot of internal Ratis log entries as well.
   But we can re-tweak the default after some testing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #685: HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #685: HADOOP-16233. S3AFileStatus to declare 
that isEncrypted() is always true
URL: https://github.com/apache/hadoop/pull/685#issuecomment-479640680
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 19 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1159 | trunk passed |
   | +1 | compile | 1079 | trunk passed |
   | +1 | checkstyle | 215 | trunk passed |
   | +1 | mvnsite | 136 | trunk passed |
   | +1 | shadedclient | 1103 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 154 | trunk passed |
   | +1 | javadoc | 91 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 77 | the patch passed |
   | +1 | compile | 914 | the patch passed |
   | +1 | javac | 914 | the patch passed |
   | -0 | checkstyle | 208 | root: The patch generated 2 new + 6 unchanged - 1 
fixed = 8 total (was 7) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 712 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 183 | the patch passed |
   | +1 | javadoc | 90 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 537 | hadoop-common in the patch passed. |
   | +1 | unit | 283 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7112 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-685/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/685 |
   | Optional Tests |  dupname  asflicense  mvnsite  compile  javac  javadoc  
mvninstall  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux d9fa7a0f50bd 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / be488b6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-685/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-685/1/testReport/ |
   | Max. process+thread count | 1458 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-685/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #688: HDDS-1379. Convert all OM Volume related operations to HA model

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #688: HDDS-1379. Convert all OM Volume related 
operations to HA model
URL: https://github.com/apache/hadoop/pull/688#issuecomment-479631355
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1028 | trunk passed |
   | +1 | compile | 99 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 113 | trunk passed |
   | +1 | shadedclient | 813 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 102 | trunk passed |
   | +1 | javadoc | 84 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 104 | the patch passed |
   | +1 | compile | 92 | the patch passed |
   | +1 | cc | 92 | the patch passed |
   | +1 | javac | 92 | the patch passed |
   | -0 | checkstyle | 24 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 87 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 46 | hadoop-ozone/ozone-manager generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -1 | javadoc | 33 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | common in the patch passed. |
   | +1 | unit | 39 | ozone-manager in the patch passed. |
   | -1 | unit | 684 | integration-test in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4409 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone/ozone-manager |
   |  |  Dead store to volume in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleSetVolumePropertyApply(OzoneManagerProtocolProtos$OMRequest)
  At 
OzoneManagerRequestHandler.java:org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleSetVolumePropertyApply(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:[line 512] |
   |  |  new org.apache.hadoop.ozone.om.exceptions.OMException(String, 
OMException$ResultCodes) not thrown in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleStartTransaction(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:in 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleStartTransaction(OzoneManagerProtocolProtos$OMRequest)
  At OzoneManagerRequestHandler.java:[line 386] |
   | Failed junit tests | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/688 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 734c03b03799 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / be488b6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/artifact/out/new-findbugs-hadoop-ozone_ozone-manager.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-688/1/testReport/ |
   | Max. process+thread count | 5048 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900761
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal. 
+hdds.scm.http.kerberos.keytab   |The keytab file used by SCM http server to 
login as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal. 
+ozone.om.http.kerberos.keytab   |The keytab file used by OM http server to 
login as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access id and user secret from OzoneManager.
+```
+ozone s3 getsecret
+```
+* Setup secret in aws configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
+aws configure set region us-west-1  
+```
+
+## Certificates ##
+Certificates are used internally inside Ozone. They are enabled by default when 
security is enabled.
+
+## Authorization ##
+The default access authorizer for Ozone approves every request. It is not suitable 
for production environments. It is recommended that clients use the Ranger plugin 
for Ozone to manage authorizations.
+
+Property|Description
+|
+ozone.acl.enabled | true 
+ozone.acl.authorizer.class| 
org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer   
+
+## TDE ##
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900713
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal. 
+hdds.scm.http.kerberos.keytab   |The keytab file used by SCM http server to 
login as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal. 
+ozone.om.http.kerberos.keytab   |The keytab file used by OM http server to 
login as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access id and user secret from OzoneManager.
+```
+ozone s3 getsecret
+```
+* Setup secret in aws configs:
+```
+aws configure set default.s3.signature_version s3v4  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900682
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal. 
+hdds.scm.http.kerberos.keytab   |The keytab file used by SCM http server to 
login as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal. 
+ozone.om.http.kerberos.keytab   |The keytab file used by OM http server to 
login as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900670
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900656
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an ozone cluster can be 
secured against external threats. Specifically, it can be configured for the 
following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos keytabs 
and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication without 
compromising on security. The main motivation for using tokens inside Ozone is to 
prevent unauthorized access while keeping the protocol lightweight and 
without sharing secrets over the wire. Ozone utilizes three types of tokens:
+
+#### Delegation token ####
+Once clients establish their identity via Kerberos, they can request a 
delegation token from OzoneManager. This token can be used by a client to prove 
its identity until the token expires. Like Hadoop delegation tokens, an Ozone 
delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which token was issued.
+Max date: Time after which token can’t be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object store 
related operations, like listing the objects in a bucket or creating a volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are signed by 
OzoneManager. Block tokens are created by the OM (OzoneManager) when a client 
request involves interaction with DataNodes, such as reading or writing Ozone keys. 
Unlike delegation tokens, there is no client API to request block tokens. 
Instead, they are handed transparently to the client along with the key/block 
locations. Block tokens are validated by Datanodes when receiving read/write 
requests from clients. A block token can't be renewed explicitly by the client; 
a client with an expired block token will need to refetch the key/block locations 
to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. They are signed 
by an S3 secret created by the client. S3Gateway creates this token for every S3 
client request. To create an S3Token, the user must have an S3 secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based authentication 
for Ozone service components. To enable this, SCM (StorageContainerManager) 
bootstraps itself as a Certificate Authority when security is enabled. This 
allows all daemons inside Ozone to have an SCM-signed certificate. Below is a 
brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM. 
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and generates a 
certificate. 
+This certificate is used by OM and DN to prove their identities. 
 
 Review comment:
   whitespace:end of line
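   
   Aside from the whitespace nit: the delegation-token flow this hunk describes can be exercised from client code. A minimal sketch, assuming o3fs exposes tokens through the standard FileSystem API and using a made-up volume/bucket name:
   
   ```java
   import java.net.URI;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.security.token.Token;
   
   public class FetchDelegationToken {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
       // Per the doc above, this only works over a Kerberos-authenticated
       // connection; the renewer is the user allowed to renew the token.
       Token<?> token = fs.getDelegationToken("renewer-user");
       System.out.println("token kind: " + token.getKind());
     }
   }
   ```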
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900749
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal. 
+hdds.scm.http.kerberos.keytab   |The keytab file used by SCM http server to 
login as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal. 
+ozone.om.http.kerberos.keytab   |The keytab file used by OM http server to 
login as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access id and user secret from OzoneManager.
+```
+ozone s3 getsecret
+```
+* Setup secret in aws configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
+aws configure set region us-west-1  
+```
+
+## Certificates ##
+Certificates are used internally inside Ozone. They are enabled by default when 
security is enabled.
+
+## Authorization ##
+The default access authorizer for Ozone approves every request. It is not suitable 
for production environments. It is recommended that clients use the Ranger plugin 
for Ozone to manage authorizations.
+
+Property|Description
+|
+ozone.acl.enabled | true 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900637
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an ozone cluster can be 
secured against external threats. Specifically, it can be configured for the 
following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone allows Kerberos-based authentication. So one way to 
set up identities for all the daemons and clients is to create Kerberos keytabs 
and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication without 
compromising on security. The main motivation for using tokens inside Ozone is to 
prevent unauthorized access while keeping the protocol lightweight and 
without sharing secrets over the wire. Ozone utilizes three types of tokens:
+
+#### Delegation token ####
+Once clients establish their identity via Kerberos, they can request a 
delegation token from OzoneManager. This token can be used by a client to prove 
its identity until the token expires. Like Hadoop delegation tokens, an Ozone 
delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which token was issued.
+Max date: Time after which token can’t be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use a delegation token to 
establish a connection with OzoneManager and perform any file system/object store 
related operations, like listing the objects in a bucket or creating a volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are signed by 
OzoneManager. Block tokens are created by the OM (OzoneManager) when a client 
request involves interaction with DataNodes, such as reading or writing Ozone keys. 
Unlike delegation tokens, there is no client API to request block tokens. 
Instead, they are handed transparently to the client along with the key/block 
locations. Block tokens are validated by Datanodes when receiving read/write 
requests from clients. A block token can't be renewed explicitly by the client; 
a client with an expired block token will need to refetch the key/block locations 
to get new block tokens.
+#### S3Token ####
+Like block tokens, S3Tokens are handled transparently for clients. They are signed 
by an S3 secret created by the client. S3Gateway creates this token for every S3 
client request. To create an S3Token, the user must have an S3 secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based authentication 
for Ozone service components. To enable this, SCM (StorageContainerManager) 
bootstraps itself as a Certificate Authority when security is enabled. This 
allows all daemons inside Ozone to have an SCM-signed certificate. Below is a 
brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM. 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900729
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an ozone cluster, **ozone.security.enabled** should be set 
to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+|
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by SCM daemon to login 
as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal. 
+hdds.scm.http.kerberos.keytab   |The keytab file used by SCM http server to 
login as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal. 
+ozone.om.http.kerberos.keytab   |The keytab file used by OM http server to 
login as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access id and user secret from OzoneManager.
+```
+ozone s3 getsecret
+```
+* Setup secret in aws configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900738
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs (a verification sketch follows the list):
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
+aws configure set region us-west-1  
+```
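
As a quick sanity check, a bucket operation can then be run against the S3
gateway. This is an illustrative sketch only: the endpoint URL, port and
bucket name are assumptions, not values taken from this PR.
```
# Assumed gateway endpoint; substitute the real S3 Gateway host and port.
aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket test-bucket
aws s3api --endpoint-url http://localhost:9878 list-buckets
```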
+
+## Certificates ##
+Certificates are used internally inside Ozone. They are enabled by default 
when security is enabled.
+
+## Authorization ##
+The default access authorizer for Ozone approves every request. It is not 
suitable for production environments. It is recommended that clients use 
the Ranger plugin for Ozone to manage authorization.
+
+Property|Description
+---|---
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #687: HDDS-1329. Update documentation for 
Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#issuecomment-479630078
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1132 | trunk passed |
   | +1 | mvnsite | 20 | trunk passed |
   | +1 | shadedclient | 1839 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 19 | the patch passed |
   | +1 | mvnsite | 16 | the patch passed |
   | -1 | whitespace | 0 | The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 780 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2825 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/687 |
   | Optional Tests |  dupname  asflicense  mvnsite  |
   | uname | Linux ec7ff6fa3ff6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / be488b6 |
   | maven | version: Apache Maven 3.3.9 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/2/artifact/out/whitespace-eol.txt
 |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900693
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900722
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900646
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster 
can be secured against external threats. Specifically, it can be configured 
for the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone supports Kerberos-based authentication. One way 
to set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
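
For example, a daemon or client identity can be sanity-checked with a
standard kinit before anything else is debugged; the principal and keytab
path below are placeholders, not values from this PR.
```
# Placeholders: substitute the daemon's real principal and keytab.
kinit -kt /etc/security/keytabs/om.keytab om/om-host.example.com@EXAMPLE.COM
klist   # confirm a valid TGT was obtained
```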
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens 
inside Ozone is to prevent unauthorized access while keeping the protocol 
lightweight and without sharing secrets over the wire. Ozone utilizes three 
types of tokens:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, 
an Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use the delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store operations, such as listing the objects in a bucket or creating a 
volume.
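
A rough sketch of that lifecycle from Java follows. Treat it as a
hypothetical illustration: the ObjectStore token methods and the "yarn"
renewer are assumptions about the client API, not code taken from this PR.
```
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.ozone.client.ObjectStore;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
import org.apache.hadoop.security.token.Token;

public class DelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumes an existing Kerberos login (e.g. via kinit).
    try (OzoneClient client = OzoneClientFactory.getRpcClient(conf)) {
      ObjectStore store = client.getObjectStore();
      Token<OzoneTokenIdentifier> token =
          store.getDelegationToken(new Text("yarn")); // "yarn" = renewer
      store.renewDelegationToken(token);  // must happen before the max date
      store.cancelDelegationToken(token); // invalidate once finished
    }
  }
}
```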
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when 
a client request involves interaction with DataNodes, such as reading or 
writing Ozone keys. Unlike delegation tokens, there is no client API to 
request block tokens. Instead, they are handed transparently to the client 
along with key/block locations. Block tokens are validated by Datanodes 
when receiving read/write requests from clients. A block token can't be 
renewed explicitly by the client. A client with an expired block token will 
need to refetch the key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3 tokens are handled transparently for clients. Each 
token is signed by the S3 secret created by the client. The S3 Gateway 
creates this token for every S3 client request. To create an S3 token, a 
user must have an S3 secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM. 
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate. 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900673
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,88 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900667
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster 
can be secured against external threats. Specifically, it can be configured 
for the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone supports Kerberos-based authentication. One way 
to set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens 
inside Ozone is to prevent unauthorized access while keeping the protocol 
lightweight and without sharing secrets over the wire. Ozone utilizes three 
types of tokens:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, 
an Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use the delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store operations, such as listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager. Block tokens are created by OM (OzoneManager) when 
a client request involves interaction with DataNodes, such as reading or 
writing Ozone keys. Unlike delegation tokens, there is no client API to 
request block tokens. Instead, they are handed transparently to the client 
along with key/block locations. Block tokens are validated by Datanodes 
when receiving read/write requests from clients. A block token can't be 
renewed explicitly by the client. A client with an expired block token will 
need to refetch the key/block locations to get new block tokens.
+#### S3Token ####
+Like block tokens, S3 tokens are handled transparently for clients. Each 
token is signed by the S3 secret created by the client. The S3 Gateway 
creates this token for every S3 client request. To create an S3 token, a 
user must have an S3 secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM. 
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate. 
+This certificate is used by OM and DN to prove their identities. 
+Datanodes use the OzoneManager certificate to validate block tokens. This 
is possible because both OzoneManager and Datanodes trust SCM-signed 
certificates.
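
Once issued, a daemon's certificate can be inspected with standard OpenSSL
tooling to confirm the SCM-signed chain; the file path is a placeholder:
```
# Show issuer (the SCM CA), subject and validity of a daemon certificate.
openssl x509 -in /path/to/daemon-cert.pem -noout -issuer -subject -dates
```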
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all 
client-related operations. The default implementation allows every request; 
clearly, it is not meant for production environments. To configure a more 
fine-grained policy, one may configure the Ranger plugin for Ozone. Since 
it is a pluggable module, clients can also implement their own custom 
authorization policy and configure it using [ozone.acl.authorizer.class].
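
As a sketch of what such a plug-in might look like, assume a hypothetical
IAccessAuthorizer interface with a single checkAccess method; the real
interface name and signature should be checked against the Ozone source.
```
package org.example.ozone.auth;

// Hypothetical imports; package and type names are assumptions.
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
import org.apache.hadoop.ozone.security.acl.IOzoneObj;
import org.apache.hadoop.ozone.security.acl.RequestContext;

/** Deny-by-default policy, the opposite of the permissive default. */
public class DenyByDefaultAuthorizer implements IAccessAuthorizer {
  @Override
  public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context) {
    // Grant access only to an explicit allow-list (elided); deny the rest.
    return isAllowListed(ozoneObject, context);
  }

  private boolean isAllowListed(IOzoneObj obj, RequestContext ctx) {
    return false; // placeholder policy
  }
}
```
Such a class would then be wired in by setting ozone.acl.authorizer.class to
org.example.ozone.auth.DenyByDefaultAuthorizer.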
+
+## Audit ##
+Ozone provides the ability to audit all read & write operations to OM, 
SCM and Datanodes. Ozone audit leverages the Marker feature, which enables 
users to selectively audit only READ or WRITE operations by a simple config 
change without restarting the service(s).
+To enable or disable the audit of READ operations, set filter.read.onMatch 
to NEUTRAL or DENY respectively. Similarly, the audit of WRITE operations 
can be controlled using filter.write.onMatch.
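
For instance, dropping READ audit events while keeping WRITE events could
look like the following log4j2 properties fragment; the filter names mirror
the ones above, but the surrounding configuration file is assumed:
```
# Drop audit events carrying the READ marker; let WRITE events through.
filter.read.type = MarkerFilter
filter.read.marker = READ
filter.read.onMatch = DENY
filter.read.onMismatch = NEUTRAL

filter.write.type = MarkerFilter
filter.write.marker = WRITE
filter.write.onMatch = NEUTRAL
filter.write.onMismatch = NEUTRAL
```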
+
+Generating audit logs is only half the job, so Ozone also provides 
AuditParser - a SQLite-based command line utility to parse/query audit logs 
with predefined templates (e.g. top 5 commands) and options for custom 
queries. Once the log file has been loaded to AuditParser, one can simply 
run a

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271900626
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,82 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster 
can be secured against external threats. Specifically, it can be configured 
for the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone supports Kerberos-based authentication. One way 
to set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens 
inside Ozone is to prevent unauthorized access while keeping the protocol 
lightweight and without sharing secrets over the wire. Ozone utilizes three 
types of tokens:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, 
an Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809130#comment-16809130
 ] 

Sean Mackrory edited comment on HADOOP-16210 at 4/3/19 7:21 PM:


{quote}-1   javac   19m 7s  root generated 15 new + 1481 unchanged - 1 
fixed = 1496 total (was 1482) {quote}

I didn't notice this one before I pushed. We should eliminate those if we can 
while the issue is still fresh. Care to follow up on that, [~gabor.bota]? 
(edit: actually Gabor already filed HADOOP-16222 for that, and it makes the 
issue of Yetus timing out before running all the tests worse)

I should also note that there was a [DISCUSS] thread on the mailing list about 
this, but there's been no opposition, only support. I committed to trunk only - 
since you're still testing downstream, we should keep this open to backport to 
those branches as they seem to be confirmed ready.


was (Author: mackrorysd):
{quote}-1   javac   19m 7s  root generated 15 new + 1481 unchanged - 1 
fixed = 1496 total (was 1482) {quote}

I didn't notice this one before I pushed. We should eliminate those if we can 
while the issue is still fresh. Care to follow up on that, [~gabor.bota]?

I should also note that there was a [DISCUSS] thread on the mailing list about 
this, but there's been no opposition, only support. I committed to trunk only - 
since you're still testing downstream, we should keep this open to backport to 
those branches as they seem to be confirmed ready.

> Update guava to 27.0-jre in hadoop-project trunk
> 
>
> Key: HADOOP-16210
> URL: https://issues.apache.org/jira/browse/HADOOP-16210
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-16210.001.patch, 
> HADOOP-16210.002.findbugsfix.wip.patch, HADOOP-16210.002.patch, 
> HADOOP-16210.003.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.
> This is a sub-task for trunk from HADOOP-15960 to track issues with that 
> particular branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #689: HDDS-1379. Convert all OM Volume related operations to HA model.

2019-04-03 Thread GitBox
bharatviswa504 opened a new pull request #689: HDDS-1379. Convert all OM Volume 
related operations to HA model.
URL: https://github.com/apache/hadoop/pull/689
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809130#comment-16809130
 ] 

Sean Mackrory commented on HADOOP-16210:


{quote}-1   javac   19m 7s  root generated 15 new + 1481 unchanged - 1 
fixed = 1496 total (was 1482) {quote}

I didn't notice this one before I pushed. We should eliminate those if we can 
while the issue is still fresh. Care to follow up on that, [~gabor.bota]?

I should also note that there was a [DISCUSS] thread on the mailing list about 
this, but there's been no opposition, only support. I committed to trunk only - 
since you're still testing downstream, we should keep this open to backport to 
those branches as they seem to be confirmed ready.

> Update guava to 27.0-jre in hadoop-project trunk
> 
>
> Key: HADOOP-16210
> URL: https://issues.apache.org/jira/browse/HADOOP-16210
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-16210.001.patch, 
> HADOOP-16210.002.findbugsfix.wip.patch, HADOOP-16210.002.patch, 
> HADOOP-16210.003.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.
> This is a sub-task for trunk from HADOOP-15960 to track issues with that 
> particular branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 closed pull request #688: HDDS-1379. Convert all OM Volume related operations to HA model

2019-04-03 Thread GitBox
bharatviswa504 closed pull request #688: HDDS-1379. Convert all OM Volume 
related operations to HA model
URL: https://github.com/apache/hadoop/pull/688
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lmccay commented on issue #685: HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true

2019-04-03 Thread GitBox
lmccay commented on issue #685: HADOOP-16233. S3AFileStatus to declare that 
isEncrypted() is always true
URL: https://github.com/apache/hadoop/pull/685#issuecomment-479621083
 
 
   Seems to me that this means of determining whether something is public 
or private is brittle, since it is derived from assumptions about certain 
filesystems. I'd like to see a follow-up JIRA to address this with a more 
explicit way of marking something public.
   
   Here is my +1 for the change to align the S3A filesystem with those 
assumptions. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #687: HDDS-1329. Update documentation for 
Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#issuecomment-479620734
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 357 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1096 | trunk passed |
   | +1 | mvnsite | 21 | trunk passed |
   | +1 | shadedclient | 1778 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 21 | the patch passed |
   | +1 | mvnsite | 17 | the patch passed |
   | -1 | whitespace | 0 | The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3142 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/687 |
   | Optional Tests |  dupname  asflicense  mvnsite  |
   | uname | Linux 655b993edb99 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / be488b6 |
   | maven | version: Apache Maven 3.3.9 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/1/artifact/out/whitespace-eol.txt
 |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-687/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890444
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
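
The table above maps onto plain ozone-site.xml entries. A minimal sketch
follows; the realm, host and keytab paths are illustrative assumptions:
```
<property>
  <name>ozone.security.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hdds.scm.kerberos.principal</name>
  <value>scm/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hdds.scm.kerberos.keytab.file</name>
  <value>/etc/security/keytabs/scm.keytab</value>
</property>
```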
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
+aws configure set region us-west-1  
+```
+
+## Certificates ##
+Certificates are used internally inside Ozone. They are enabled by default 
when security is enabled.
+
+## Authorization ##
+The default access authorizer for Ozone approves every request. It is not 
suitable for production environments. It is recommended that clients use 
the Ranger plugin for Ozone to manage authorization.
+
+ozone.acl.authorizer.class | 
org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer
+---|---
+
+## TDE ##
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890400
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890421
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
+aws configure set default.s3.signature_version s3v4  
+aws configure set aws_access_key_id ${accessId}  
+aws configure set aws_secret_access_key ${secret}  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890390
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
+ozone.om.http.kerberos.principal|OzoneManager http server principal.
+ozone.om.http.kerberos.keytab   |The keytab file used by the OM http 
server to log in as its service principal.
+ozone.s3g.keytab.file   |The keytab file used by the S3 gateway. Ex 
/etc/security/keytabs/HTTP.keytab
+ozone.s3g.authentication.kerberos.principal|S3 Gateway principal. Ex 
HTTP/_h...@example.com
+
+## Tokens ##
+
+## Delegation token ##
+Delegation tokens are enabled by default when security is enabled.
+
+## Block Tokens ##
+hdds.block.token.enabled | true
+-|--
+
+## S3Token ##
+S3 tokens are enabled by default when security is enabled. 
+To use S3 tokens, users need to perform the following steps:
+* S3 clients should get the secret access ID and user secret from 
OzoneManager.
+```
+ozone s3 getsecret
+```
+* Set up the secret in AWS configs:
+```
+aws configure set default.s3.signature_version s3v4  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809125#comment-16809125
 ] 

Sean Mackrory commented on HADOOP-16210:


The failure is the libprotoc version mismatch I occasionally see pop up, and 
that's always flaky when I try to look at it...

+1 and pushed, as commented on the PR. I squashed the 2 commits to push, but I 
don't immediately see how you're supposed to close a PR that was merged 
out-of-band. Maybe it's the submitter that does that?

> Update guava to 27.0-jre in hadoop-project trunk
> 
>
> Key: HADOOP-16210
> URL: https://issues.apache.org/jira/browse/HADOOP-16210
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-16210.001.patch, 
> HADOOP-16210.002.findbugsfix.wip.patch, HADOOP-16210.002.patch, 
> HADOOP-16210.003.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.
> This is a sub-task for trunk from HADOOP-15960 to track issues with that 
> particular branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890352
 
 

 ##
 File path: hadoop-hdds/docs/content/OzoneSecurityArchitecture.md
 ##
 @@ -0,0 +1,83 @@
+---
+title: "Ozone Security Overview"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Security in Ozone #
+
+Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster 
can be secured against external threats. Specifically, it can be configured 
for the following security features:
+
+1. Authentication
+2. Authorization
+3. Audit
+4. Transparent Data Encryption (TDE)
+
+## Authentication ##
+
+### Kerberos ###
+Similar to Hadoop, Ozone supports Kerberos-based authentication. One way 
to set up identities for all the daemons and clients is to create Kerberos 
keytabs and configure them like any other service in Hadoop.
+
+### Tokens ###
+Tokens are widely used in Hadoop to achieve lightweight authentication 
without compromising on security. The main motivation for using tokens 
inside Ozone is to prevent unauthorized access while keeping the protocol 
lightweight and without sharing secrets over the wire. Ozone utilizes three 
types of tokens:
+
+#### Delegation token ####
+Once a client establishes its identity via Kerberos, it can request a 
delegation token from OzoneManager. This token can be used by the client to 
prove its identity until the token expires. Like Hadoop delegation tokens, 
an Ozone delegation token has 3 important fields:
+
+Renewer: User responsible for renewing the token.
+Issue date: Time at which the token was issued.
+Max date: Time after which the token can't be renewed.
+
+Token operations like get, renew and cancel can only be performed over a 
Kerberos-authenticated connection. Clients can use the delegation token to 
establish a connection with OzoneManager and perform any file system/object 
store operations, such as listing the objects in a bucket or creating a 
volume.
+
+#### Block Tokens ####
+Block tokens are similar to delegation tokens in the sense that they are 
signed by OzoneManager, but this is where the similarity between the two 
stops. Block tokens are created by OM (OzoneManager) when a client request 
involves interaction with DataNodes. Unlike delegation tokens, there is no 
client API to request block tokens. Instead, they are handled transparently 
for the client: block tokens are embedded directly into client request 
responses, so clients don't need to fetch them explicitly from OzoneManager. 
This is handled implicitly inside the Ozone client. Datanodes validate 
block tokens from clients for every client connection.
+
+#### S3Token ####
+Like block tokens, S3 tokens are handled transparently for clients. Each 
token is signed by the S3 secret created by the client. The S3 Gateway 
creates this token for every S3 client request. To create an S3 token, a 
user must have an S3 secret.
+
+### Certificates ###
+Apart from Kerberos and tokens, Ozone utilizes certificate-based 
authentication for Ozone service components. To enable this, SCM 
(StorageContainerManager) bootstraps itself as a Certificate Authority when 
security is enabled. This allows all daemons inside Ozone to have an 
SCM-signed certificate. Below is a brief description of the steps involved:
+Datanodes and OzoneManagers submit a CSR (certificate signing request) to 
SCM. 
+SCM verifies the identity of the DN (Datanode) or OM via Kerberos and 
generates a certificate. 
+This certificate is used by OM and DN to prove their identities. 
+Datanodes use the OzoneManager certificate to validate block tokens. This 
is possible because both OzoneManager and Datanodes trust SCM-signed 
certificates.
+
+## Authorization ##
+Ozone provides a pluggable API to control authorization of all 
client-related operations. The default implementation allows every request; 
clearly, it is not meant for production environments. To configure a more 
fine-grained policy, one may configure the Ranger plugin for Ozone. Since 
it is a pluggable module, clients can also implement their own custom 
authorization policy and configure it using [ozone.acl.authorizer.class].
+
+## Audit ##
+Ozone provides the ability to audit all read & write operations to OM, 
SCM and Datanodes. Ozone audit leverages the Marker feature, which enables 
users to selectively audit only READ or WRITE operations by a simple config 
change without restarting the service(s).
+To enable or disable the audit of READ operations, set filter.read.onMatch 
to NEUTRAL or DENY respectively. Similarly, the audit of WRITE operations 
can be controlled using filter.write.onMatch.
+
+Generating audit logs is only half the job, so Ozone also provides 
AuditParser - a SQLite-based command line utility to parse/query audit logs 
with predefined templates (e.g. top 5 commands) and options for custom 
queries. Once the log file has been loaded to AuditParser, one can 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890361
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.

2019-04-03 Thread GitBox
hadoop-yetus commented on a change in pull request #687: HDDS-1329. Update 
documentation for Ozone-0.4.0 release. Contributed By Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/687#discussion_r271890368
 
 

 ##
 File path: hadoop-hdds/docs/content/SetupSecureOzone.md
 ##
 @@ -0,0 +1,86 @@
+---
+title: "Setup secure ozone cluster"
+date: "2019-April-03"
+menu:
+   main:
+   parent: Architecture
+weight: 11
+---
+
+
+# Setup secure ozone cluster #
+
+To enable security in an Ozone cluster, **ozone.security.enabled** should 
be set to true.
+
+ozone.security.enabled| true
+--|--
+
+## Kerberos ##
+Configuration for service daemons:
+
+Property|Description
+---|---
+hdds.scm.kerberos.principal | The SCM service principal. Ex 
scm/_HOST@REALM.COM_
+hdds.scm.kerberos.keytab.file   |The keytab file used by the SCM daemon to 
log in as its service principal.
+ozone.om.kerberos.principal |The OzoneManager service principal. Ex 
om/_h...@realm.com
+ozone.om.kerberos.keytab.file   |The keytab file used by the OM daemon to 
log in as its service principal.
+hdds.scm.http.kerberos.principal|SCM http server service principal.
+hdds.scm.http.kerberos.keytab   |The keytab file used by the SCM http 
server to log in as its service principal.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


