[GitHub] [hadoop] mukul1987 commented on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-19 Thread GitBox
mukul1987 commented on issue #1226: HDDS-1610. applyTransaction failure should 
not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-522860509
 
 
   Thanks for working on this @bshashikant. +1 the patch looks good to me. The 
test failures do not look related.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16391) Duplicate values in rpcDetailedMetrics

2019-08-19 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910998#comment-16910998
 ] 

Bilwa S T commented on HADOOP-16391:


Thank you [~xkrogen] for the review and commit.

> Duplicate values in rpcDetailedMetrics
> --
>
> Key: HADOOP-16391
> URL: https://issues.apache.org/jira/browse/HADOOP-16391
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16391-001.patch, HADOOP-16391-002.patch, 
> HADOOP-16391-003.patch, image-2019-06-25-20-30-15-395.png, screenshot-1.png, 
> screenshot-2.png
>
>
> In RpcDetailedMetrics, init is called twice: once for the deferredRpcRates 
> metrics and once for the rates metrics, which causes duplicate values in the 
> RM and NM metrics.
>  !image-2019-06-25-20-30-15-395.png! 
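A minimal model of the double-init pattern described above (plain Java stand-ins, not the actual Hadoop metrics classes):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a metrics registry: init() registers one rate metric per
// method name with no duplicate check, so calling it twice against the same
// registry duplicates every entry.
class RatesModel {
    final List<String> metrics = new ArrayList<>();

    void init(String[] methods) {
        for (String m : methods) {
            metrics.add(m + "NumOps"); // registered blindly, no contains() guard
        }
    }
}

public class DupInitDemo {
    public static void main(String[] args) {
        RatesModel registry = new RatesModel();
        String[] methods = {"allocate", "nodeHeartbeat"};
        registry.init(methods); // rates
        registry.init(methods); // deferred rates, same names, same registry
        System.out.println(registry.metrics);
        // [allocateNumOps, nodeHeartbeatNumOps, allocateNumOps, nodeHeartbeatNumOps]
    }
}
```

The fix direction is the obvious one: register each metric name once, or keep the two init paths in separate registries.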



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-522843858
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 543 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | +1 | mvninstall | 603 | trunk passed |
   | +1 | compile | 356 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 804 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 604 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 546 | the patch passed |
   | +1 | compile | 355 | the patch passed |
   | +1 | javac | 355 | the patch passed |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 692 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 303 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3363 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 9491 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1315 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ed82037be8b1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f925af |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/4/testReport/ |
   | Max. process+thread count | 3855 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate 
add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315495833
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   bq. Raw protobuf class can't be extended easily for acl operations. We will 
have to wrap the acl operations anyway.
   
   This is based on the [protobuf 
doc](https://developers.google.com/protocol-buffers/docs/javatutorial) quoted 
below. My understanding is that protobuf is only good for 
serialization/transport. Our acl-operation-specific logic should be wrapped 
without a dependency on it so that it can be maintained easily over the long 
term.

   "Protocol buffer classes are basically dumb data holders (like structs in 
C); they don't make good first class citizens in an object model. If you want 
to add richer behavior to a generated class, the best way to do this is to wrap 
the generated protocol buffer class in an application-specific class."
   
   bq. And also this will help not only acls but bucket/key creation too, as 
Bucket/KeyInfo currently goes through a protobuf -> internal Ozone object 
conversion. This can also be avoided. (So on each key creation we don't need 
to convert the acls set during key creation from proto to OzoneAcl objects.)
   
   This requires a broader change to the interface itself, which is beyond the 
scope of this JIRA, namely consolidating the core acl ops.
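The wrapping pattern that the quoted protobuf guidance recommends can be sketched as follows; both class names are illustrative stand-ins, not the actual generated Ozone types:

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for a generated protobuf message: a dumb, immutable data holder.
final class AclProtoStandIn {
    final String principal;
    final Set<String> rights;
    AclProtoStandIn(String principal, Set<String> rights) {
        this.principal = principal;
        this.rights = Set.copyOf(rights);
    }
}

// Application-specific wrapper that owns the richer acl behavior and only
// touches the proto type at the serialization boundary.
final class AclWrapper {
    private final String principal;
    private final Set<String> rights = new HashSet<>();

    AclWrapper(AclProtoStandIn proto) {           // "fromProtobuf" direction
        this.principal = proto.principal;
        this.rights.addAll(proto.rights);
    }

    boolean addRight(String right) {              // acl op lives on the wrapper
        return rights.add(right);
    }

    boolean removeRight(String right) {
        return rights.remove(right);
    }

    AclProtoStandIn toProto() {                   // "getProtobuf" direction
        return new AclProtoStandIn(principal, rights);
    }
}
```

This keeps the proto class a pure transport type while all mutation and validation logic sits in one application-owned class.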





[jira] [Created] (HADOOP-16521) Subject has a contradiction between proxy user and real user

2019-08-19 Thread Yicong Cai (Jira)
Yicong Cai created HADOOP-16521:
---

 Summary: Subject has a contradiction between proxy user and real 
user
 Key: HADOOP-16521
 URL: https://issues.apache.org/jira/browse/HADOOP-16521
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yicong Cai


In the method UserGroupInformation#loginUserFromSubject, if you specify the 
proxy user with HADOOP_PROXY_USER and create a proxy UGI instance, the valid 
Credentials are included in the User's PrivateCredentials. The UGI structure 
is as follows:

 
{code:java}
 proxyUGI
 |
 |--subject 1
 | |
 | |--principals
 | | |
 | | |--user
 | | |
 | |  --real user
 | |
 |  --privCredentials(all cred)
 |
  --proxy user
{code}
 

If you instead first log in the real user and then use 
UserGroupInformation#createProxyUser to create a proxy UGI, the valid 
Credentials are included in the RealUser's subject PrivateCredentials. The UGI 
structure is as follows:

 
{code:java}
proxyUGI
 |
 |--subject 1
 | |
 | |--principals
 | | |
 | | |--user
 | | |
 | |  --real user
 | ||
 | | --subject 2
 | |   |
 | |--privCredentials(all cred)
 | |
 |  --privCredentials(empty)
 |
  --proxy user{code}
 

The HDFS FileSystem uses the proxy user to perform token-related operations. 
However, the RPC Client Connection uses the token from the RealUser for 
SaslRpcClient#saslConnect.

So the main contradiction is: should the ProxyUser's real Credentials be placed 
in the ProxyUGI's subject, or in the RealUser's subject?
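The two layouts can be modeled with plain JDK Subjects; this is an illustration of the structures above, not Hadoop code, and the credential strings are placeholders:

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Models, with plain JDK Subjects rather than Hadoop's UGI, the two
// credential placements described above.
public class UgiLayoutDemo {
    static final class Named implements Principal {
        private final String name;
        Named(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        // Path 1: loginUserFromSubject with HADOOP_PROXY_USER set.
        // One subject carries both principals AND all private credentials.
        Subject viaLogin = new Subject();
        viaLogin.getPrincipals().add(new Named("proxyUser"));
        viaLogin.getPrincipals().add(new Named("realUser"));
        viaLogin.getPrivateCredentials().add("all-cred");

        // Path 2: log in the real user first, then createProxyUser.
        // The credentials stay on the real user's own subject; the proxy
        // UGI's subject has an empty privateCredentials set.
        Subject realSubject = new Subject();
        realSubject.getPrivateCredentials().add("all-cred");
        Subject viaCreateProxy = new Subject();
        viaCreateProxy.getPrincipals().add(new Named("proxyUser"));

        System.out.println(viaLogin.getPrivateCredentials().size());       // 1
        System.out.println(viaCreateProxy.getPrivateCredentials().size()); // 0
    }
}
```

Any code that looks up tokens only on the proxy UGI's subject will see them in the first layout but not in the second, which is exactly the inconsistency the report describes.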






[GitHub] [hadoop] hadoop-yetus commented on issue #1316: HDDS-1973. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1316: HDDS-1973. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#issuecomment-522835104
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for branch |
   | +1 | mvninstall | 752 | trunk passed |
   | +1 | compile | 396 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 880 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 619 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 541 | the patch passed |
   | +1 | compile | 360 | the patch passed |
   | +1 | cc | 360 | the patch passed |
   | +1 | javac | 360 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 634 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 313 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1836 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7760 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1316/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1316 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux ae6cbda13a4f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f925af |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1316/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1316/1/testReport/ |
   | Max. process+thread count | 4820 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1316/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-19 Thread GitBox
aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting 
LICENSE and NOTICE files
URL: https://github.com/apache/hadoop/pull/1307#discussion_r315488263
 
 

 ##
 File path: LICENSE-binary
 ##
 @@ -0,0 +1,531 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is provided in the Appendix below).
+
+  "Derivative Works" shall mean any work, whether in Source or Object
+  form, that is based on (or derived from) the Work and for which the
+  editorial revisions, annotations, elaborations, or other modifications
+  represent, as a whole, an original work of authorship. For the purposes
+  of this License, Derivative Works shall not include works that remain
+  separable from, or merely link (or bind by name) to the interfaces of,
+  the Work and Derivative Works thereof.
+
+  "Contribution" shall mean any work of authorship, including
+  the original version of the Work and any modifications or additions
+  to that Work or Derivative Works thereof, that is intentionally
+  submitted to Licensor for inclusion in the Work by the copyright owner
+  or by an individual or Legal Entity authorized to submit on behalf of
+  the copyright owner. For the purposes of this definition, "submitted"
+  means any form of electronic, verbal, or written communication sent
+  to the Licensor or its representatives, including but not limited to
+  communication on electronic mailing lists, source code control systems,
+  and issue tracking systems that are managed by, or on behalf of, the
+  Licensor for the purpose of discussing and improving the Work, but
+  excluding communication that is conspicuously marked or otherwise
+  designated in writing by the copyright owner as "Not a Contribution."
+
+  "Contributor" shall mean Licensor and any individual or Legal Entity
+  on behalf of whom a Contribution has been received by Licensor and
+  subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  copyright license to reproduce, prepare Derivative Works of,
+  publicly display, publicly perform, sublicense, and distribute the
+  Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  (except as stated in this section) patent license to make, have made,
+  use, offer to sell, sell, import, and otherwise transfer the Work,
+  where such license applies only to those patent claims licensable
+  by such Contributor that are necessarily infringed by their
+  Contribution(s) alone or by combination of their Contribution(s)
+  with the Work to which such Contribution(s) was submitted. If You
+  institute patent litigation against any entity (including a
+  cross-claim or counterclaim in a 

[GitHub] [hadoop] aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-19 Thread GitBox
aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting 
LICENSE and NOTICE files
URL: https://github.com/apache/hadoop/pull/1307#discussion_r315486113
 
 

 ##
 File path: LICENSE-binary
 ##
 @@ -0,0 +1,531 @@

[GitHub] [hadoop] hadoop-yetus commented on issue #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-522816675
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 646 | trunk passed |
   | +1 | compile | 384 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 461 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 676 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 541 | the patch passed |
   | +1 | compile | 357 | the patch passed |
   | +1 | javac | 357 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 708 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | +1 | findbugs | 630 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1902 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 7722 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ac55c9f1a8c5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f925af |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/11/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/11/testReport/ |
   | Max. process+thread count | 5414 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-08-19 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910895#comment-16910895
 ] 

Wei-Chiu Chuang commented on HADOOP-16268:
--

TBH I am not familiar with either FairCallQueue or RBF. It sounds like this 
should be the preferred configuration when you have RBF. Could you update 
hdfs-default.xml to add a description of this new config and the expected 
behavior after it is switched on?

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of CallQueueManager, 
> "CallQueueOverflowException" exceptions always wrap "RetriableException". 
> Servers should be allowed, through configs, to throw custom exceptions based 
> on new use cases.
> In CallQueueManager.java, the following is done for backoff:
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients would 
> end up hitting the same server for retries. In use cases that the router 
> supports, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load better across a cluster of 
> routers. In the absence of any custom exception, the current behavior should 
> be preserved.
> In the CallQueueOverflowException class, a new StandbyException wrapper should 
> be created, something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  
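A config-driven version of the backoff choice proposed above might look like this sketch; the exception types here are stand-ins rather than the real Hadoop classes, and the config key is hypothetical:

```java
// Stand-ins for the real Hadoop exception types referenced in the proposal.
class RetriableException extends Exception {
    RetriableException(String msg) { super(msg); }
}
class StandbyException extends Exception {
    StandbyException(String msg) { super(msg); }
}

public class BackoffChoiceDemo {
    // The flag would be read from a new, hypothetically named config key
    // such as "...callqueue.overflow.failover.enable" instead of being
    // hard-coded as it is today.
    static Exception backoffException(boolean failoverOnOverflow) {
        return failoverOnOverflow
            // StandbyException: a federation client fails over to another
            // router that shares the same state.
            ? new StandbyException("Server too busy - disconnecting")
            // RetriableException: current behavior, the client retries the
            // same server.
            : new RetriableException("Server too busy - disconnecting");
    }

    public static void main(String[] args) {
        System.out.println(backoffException(true).getClass().getSimpleName());  // StandbyException
        System.out.println(backoffException(false).getClass().getSimpleName()); // RetriableException
    }
}
```

With the flag off, nothing changes for existing deployments, which matches the requirement that current behavior be preserved by default.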






[GitHub] [hadoop] bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-522814418
 
 
   /retest





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1316: HDDS-1975. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-19 Thread GitBox
bharatviswa504 opened a new pull request #1316: HDDS-1975. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315467845
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   > Raw protobuf class can't be extended easily for acl operations. We will 
have to wrap the acl operations anyway.
   
   I did not understand what you mean by wrapping the ACL operations.
   
   > Given the scattered logic of ACL ops for volume/bucket/key/prefix, I would 
prefer to have a unified code base that can be maintained/tested easily in the 
long run.
   
   I understand the point: instead of spreading the logic across multiple 
places, keep the ACL logic in a single place that can be used in all 
bucket/key/prefix ops. I agree with that. But I have not understood why the 
same cannot be done with protobuf ACL objects.
   
   3. Agreed that write ACL operations will be far fewer than ACL reads.
   
   But if you think this is the right way to go, I am fine with it.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315468225
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   This will help not only ACLs but also bucket/key creation: right now 
Bucket/KeyInfo does a protobuf -> internal Ozone object conversion, which can 
also be avoided. (So on each key creation we would not need to convert the 
ACLs, set during creation of the key, from proto to OzoneAcl objects.)
   
   For example, the following is called during bucket creation:
   ```
  public static OmBucketInfo getFromProtobuf(BucketInfo bucketInfo) {
    OmBucketInfo.Builder obib = OmBucketInfo.newBuilder()
        .setVolumeName(bucketInfo.getVolumeName())
        .setBucketName(bucketInfo.getBucketName())
        .setAcls(bucketInfo.getAclsList().stream().map(
            OzoneAcl::fromProtobuf).collect(Collectors.toList()))
   ```





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate 
add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315466190
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   The 1st conversion is for the RPC-based request.
   The 2nd conversion is for persistence to the DB.

   In between, we use OzoneAcl to manipulate the ACL. I understand we could 
save the conversion by using the protobuf class directly. Here are the reasons 
I chose to keep the Java class instead:
   
   1. The raw protobuf class can't be extended easily for ACL operations. We 
will have to wrap the ACL operations anyway.
   
   2. Given the scattered logic of ACL ops for volume/bucket/key/prefix, I 
would prefer to have a unified code base that can be maintained/tested easily 
in the long run.
   
   3. ACL write operations are far less frequent than ACL reads. I would not 
expect this to have a big impact on Ozone perf.





[GitHub] [hadoop] hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-522799727
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1143 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 630 | trunk passed |
   | +1 | compile | 388 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 968 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 458 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 678 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 585 | the patch passed |
   | +1 | compile | 399 | the patch passed |
   | +1 | javac | 399 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 708 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 379 | hadoop-hdds in the patch passed. |
   | -1 | unit | 333 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7750 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1315 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a9c39b494fac 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f925af |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/3/testReport/ |
   | Max. process+thread count | 1325 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315452344
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   > As a result, we will only do the protobuf conversion in those places.
   
   Not sure if I am missing something here. From the above example, we are 
doing the conversion in 2 places.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315452170
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   What I mean here is:
   Take an example OMBucketAddAclRequest
   ```
  public OMBucketAddAclRequest(OMRequest omRequest) {
    super(omRequest, bucketAddAclOp);
    OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
        getOmRequest().getAddAclRequest();
    path = addAclRequest.getObj().getPath();
    ozoneAcls = Lists.newArrayList(
        OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
   ```
   
   If OMBucketInfo holds the protobuf structures directly, we can avoid the 
conversion.
   
   Right now we do one conversion in the constructor, and another when writing 
to the DB (OzoneAcl -> proto -> byte array). With direct protobuf structures 
we can save these unnecessary conversions.
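The double conversion being discussed can be illustrated with toy stand-ins. AclProto and Acl below are hypothetical, simplified substitutes for OzoneAclInfo and OzoneAcl, not the real Ozone classes:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class AclProto {                               // stand-in for OzoneAclInfo
    final String spec;
    AclProto(String spec) { this.spec = spec; }
    byte[] toByteArray() { return spec.getBytes(StandardCharsets.UTF_8); }
}

class Acl {                                    // stand-in for OzoneAcl
    final String spec;
    Acl(String spec) { this.spec = spec; }
    static Acl fromProtobuf(AclProto p) { return new Acl(p.spec); }
    AclProto toProtobuf() { return new AclProto(spec); }
}

public class Main {
    public static void main(String[] args) {
        List<AclProto> requestAcls = List.of(new AclProto("user:hadoop:rw"));

        // Conversion 1: request constructor, proto -> Java object.
        List<Acl> acls = requestAcls.stream()
            .map(Acl::fromProtobuf)
            .collect(Collectors.toList());

        // Conversion 2: persisting, Java object -> proto -> byte[].
        byte[] stored = acls.get(0).toProtobuf().toByteArray();

        // Keeping the proto form end-to-end would skip both conversions
        // and produce the same stored bytes.
        byte[] direct = requestAcls.get(0).toByteArray();
        assert Arrays.equals(stored, direct);
    }
}
```

The sketch only shows the mechanics of the round trip; the trade-off weighed in this thread is that skipping it saves CPU on every request, at the cost of spreading protobuf types through the business logic.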





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate 
add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315450501
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   The OzoneAcl class was introduced to make ACL operations such as 
add/remove/set acl consistent. Unfortunately, some of the early ACL 
implementations use OzoneAclInfo (the protobuf class) directly, which this 
ticket attempts to unify.
   
   The protobuf class OzoneAclInfo is good for maintaining compatibility for 
RPC and DB persistence. As a result, we will only do the protobuf conversion 
in those places.





[GitHub] [hadoop] jojochuang commented on a change in pull request #467: [HDFS-14208]fix bug that there is some a large number missingblocks after failove to active

2019-08-19 Thread GitBox
jojochuang commented on a change in pull request #467: [HDFS-14208]fix bug that 
there is some a large number missingblocks after failove to active
URL: https://github.com/apache/hadoop/pull/467#discussion_r315449101
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 ##
 @@ -2532,8 +2532,9 @@ public boolean processReport(final DatanodeID nodeID,
 return !node.hasStaleStorages();
   }
   if (context != null) {
-if (!blockReportLeaseManager.checkLease(node, startTime,
-  context.getLeaseId())) {
+if (!namesystem.isInStartupSafeMode()
 
 Review comment:
   I think this is a dup of HDFS-12914. In HDFS-12914, this part of the code 
is moved to another place to fix the bug.
   
   This fix essentially says: if the NameNode is in safe mode, don't check for 
a lease.
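The intent of the guard can be sketched with a hypothetical simplified class (ToyBlockManager below is not the real BlockManager/FSNamesystem API): while the NameNode is in startup safe mode, a block report is processed even without a valid lease, so reports are not dropped after a failover.

```java
// Toy model of the conditional under review: the safe-mode check
// short-circuits the lease check.
class ToyBlockManager {
    final boolean inStartupSafeMode;

    ToyBlockManager(boolean inStartupSafeMode) {
        this.inStartupSafeMode = inStartupSafeMode;
    }

    boolean shouldProcessReport(boolean leaseValid) {
        // During startup safe mode, accept the report unconditionally;
        // otherwise require a valid block-report lease.
        if (inStartupSafeMode) {
            return true;
        }
        return leaseValid;
    }
}

public class Main {
    public static void main(String[] args) {
        // Safe mode: report accepted even with an invalid lease.
        assert new ToyBlockManager(true).shouldProcessReport(false);
        // Normal operation: lease still gates the report.
        assert !new ToyBlockManager(false).shouldProcessReport(false);
        assert new ToyBlockManager(false).shouldProcessReport(true);
    }
}
```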





[GitHub] [hadoop] jojochuang closed pull request #467: [HDFS-14208]fix bug that there is some a large number missingblocks after failove to active

2019-08-19 Thread GitBox
jojochuang closed pull request #467: [HDFS-14208]fix bug that there is some a 
large number missingblocks after failove to active
URL: https://github.com/apache/hadoop/pull/467
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-522782993
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 149 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for branch |
   | +1 | mvninstall | 773 | trunk passed |
   | +1 | compile | 473 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1052 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 486 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 707 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 138 | hadoop-ozone in the patch failed. |
   | -1 | compile | 55 | hadoop-ozone in the patch failed. |
   | -1 | javac | 55 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 25 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 53 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 105 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 358 | hadoop-hdds in the patch passed. |
   | -1 | unit | 107 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5816 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1315 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2a93514f426d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f925af |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1315/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/testReport/ |
   | Max. process+thread count | 390 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1315/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315440404
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   We can leave this as OzoneAclInfo, and change OmBucketInfo from OzoneAcl -> 
OzoneAclInfo.
   In this way, we can avoid converting the OzoneAcl list during OM operations 
(for add/remove/set ACL we can avoid the protobuf conversion).
   This will also be useful in the HA code path, as we operate directly on 
OMRequest objects. (I mean we get the data from the OMRequest, perform the 
operation, and store it in the cache.)





[GitHub] [hadoop] hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-08-19 Thread GitBox
hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for 
checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r315426753
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
 ##
 @@ -68,11 +68,16 @@
   public static final String HDDS_CONTAINERSCRUB_ENABLED =
   "hdds.containerscrub.enabled";
   public static final boolean HDDS_CONTAINERSCRUB_ENABLED_DEFAULT = false;
+
   public static final boolean HDDS_SCM_SAFEMODE_ENABLED_DEFAULT = true;
   public static final String HDDS_SCM_SAFEMODE_MIN_DATANODE =
   "hdds.scm.safemode.min.datanode";
   public static final int HDDS_SCM_SAFEMODE_MIN_DATANODE_DEFAULT = 1;
 
+  public static final String HDDS_CONTAINER_SCANNER_VOLUME_BYTES_PER_SECOND =
+  "hdds.container.scanner.volume.bytes.per.second";
 
 Review comment:
   Updated the patch to use configuration based APIs.





[GitHub] [hadoop] hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-08-19 Thread GitBox
hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for 
checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r315426622
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScanner.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.ozoneimpl;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.net.ntp.TimeStamp;
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ * This class is responsible to perform metadata verification of the
+ * containers.
+ */
+public class ContainerMetadataScanner extends Thread {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(ContainerMetadataScanner.class);
+
+  private final ContainerController controller;
+  /**
+   * True if the thread is stopping.
+   * Protected by this object's lock.
+   */
+  private boolean stopping = false;
+
+  public ContainerMetadataScanner(ContainerController controller) {
+this.controller = controller;
+setName("ContainerMetadataScanner");
+setDaemon(true);
+  }
+
+  @Override
+  public void run() {
+/**
+ * the outer daemon loop exits on down()
+ */
+LOG.info("Background ContainerMetadataScanner starting up");
+while (!stopping) {
+  scrub();
+  if (!stopping) {
+try {
+  Thread.sleep(300000); /* 5 min between scans */
 
 Review comment:
   Done.
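   For context, one pattern that avoids blocking shutdown for the whole interval is to wait on a monitor instead of calling Thread.sleep, so a stop request can wake the scanner immediately. A minimal sketch under assumed names (ScanPauseSketch is illustrative only, not the actual Ozone ContainerMetadataScanner):

```java
// Illustrative sketch -- not the actual Ozone ContainerMetadataScanner.
// Waiting on a monitor instead of Thread.sleep() lets shutdown() wake the
// scanner immediately rather than after up to five minutes.
public class ScanPauseSketch {
  /** 5 min between scans, matching the interval discussed above. */
  public static final long SCAN_INTERVAL_MILLIS = 5 * 60 * 1000;

  private final Object lock = new Object();
  private boolean stopping = false;

  /** Pauses up to SCAN_INTERVAL_MILLIS, returning early once shutdown() runs. */
  public void pauseBetweenScans() {
    synchronized (lock) {
      long deadline = System.currentTimeMillis() + SCAN_INTERVAL_MILLIS;
      while (!stopping) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          return; // full interval elapsed
        }
        try {
          lock.wait(remaining); // woken early by shutdown()
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
  }

  /** Requests the scanner to stop and wakes any in-progress pause. */
  public void shutdown() {
    synchronized (lock) {
      stopping = true;
      lock.notifyAll();
    }
  }
}
```

   In a real daemon thread this would be combined with the `stopping` check around each scrub pass, as in the run() loop quoted above.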





[GitHub] [hadoop] hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-08-19 Thread GitBox
hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for 
checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r315426293
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
 ##
 @@ -220,43 +229,66 @@ private void checkBlockDB() throws IOException {
   throw new IOException(dbFileErrorMsg);
 }
 
-
 onDiskContainerData.setDbFile(dbFile);
 try(ReferenceCountedDB db =
-BlockUtils.getDB(onDiskContainerData, checkConfig)) {
-  iterateBlockDB(db);
-}
-  }
+BlockUtils.getDB(onDiskContainerData, checkConfig);
+KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
+new File(onDiskContainerData.getContainerPath()))) {
 
-  private void iterateBlockDB(ReferenceCountedDB db)
-  throws IOException {
-Preconditions.checkState(db != null);
-
-// get "normal" keys from the Block DB
-try(KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
-new File(onDiskContainerData.getContainerPath()))) {
-
-  // ensure there is a chunk file for each key in the DB
-  while (kvIter.hasNext()) {
+  while(kvIter.hasNext()) {
 BlockData block = kvIter.nextBlock();
-
-List chunkInfoList = block.getChunks();
-for (ContainerProtos.ChunkInfo chunk : chunkInfoList) {
-  File chunkFile;
-  chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
+for(ContainerProtos.ChunkInfo chunk : block.getChunks()) {
+  File chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
   ChunkInfo.getFromProtoBuf(chunk));
-
   if (!chunkFile.exists()) {
 // concurrent mutation in Block DB? lookup the block again.
 byte[] bdata = db.getStore().get(
 Longs.toByteArray(block.getBlockID().getLocalID()));
-if (bdata == null) {
-  LOG.trace("concurrency with delete, ignoring deleted block");
-  break; // skip to next block from kvIter
-} else {
-  String errorStr = "Missing chunk file "
-  + chunkFile.getAbsolutePath();
-  throw new IOException(errorStr);
+if (bdata != null) {
+  throw new IOException("Missing chunk file "
+  + chunkFile.getAbsolutePath());
+}
+  } else if (chunk.getChecksumData().getType()
+  != ContainerProtos.ChecksumType.NONE){
+int length = chunk.getChecksumData().getChecksumsList().size();
+ChecksumData cData = new ChecksumData(
+chunk.getChecksumData().getType(),
+chunk.getChecksumData().getBytesPerChecksum(),
+chunk.getChecksumData().getChecksumsList());
+long bytesRead = 0;
+byte[] buffer = new byte[cData.getBytesPerChecksum()];
+try (InputStream fs = new FileInputStream(chunkFile)) {
+  int i = 0, v = 0;
+  for (; i < length; i++) {
+v = fs.read(buffer);
+if (v == -1) {
+  break;
+}
+bytesRead += v;
+throttler.throttle(v, canceler);
+Checksum cal = new Checksum(cData.getChecksumType(),
+cData.getBytesPerChecksum());
+ByteString expected = cData.getChecksums().get(i);
+ByteString actual = cal.computeChecksum(buffer)
+.getChecksums().get(0);
+if (!Arrays.equals(expected.toByteArray(),
+actual.toByteArray())) {
+  throw new OzoneChecksumException(String
+  .format("Inconsistent read for chunk=%s len=%d expected" 
+
+  " checksum %s actual checksum %s",
+  chunk.getChunkName(), chunk.getLen(),
+  Arrays.toString(expected.toByteArray()),
+  Arrays.toString(actual.toByteArray())));
+}
+
+  }
+  if (v == -1 && i < length) {
+throw new OzoneChecksumException(String
+.format("Inconsistent read for chunk=%s expected length=%d"
 
 Review comment:
   done.
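   The loop above recomputes each stored checksum over one bytesPerChecksum-sized window of the chunk file and compares it with the persisted value. A self-contained sketch of the same idea, with java.util.zip.CRC32 standing in for Ozone's Checksum class (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

// Illustrative sketch of per-window chunk checksum verification, mirroring
// the KeyValueContainerCheck loop above. CRC32 stands in for Ozone's
// Checksum class; class and method names are assumptions, not the real API.
public class ChunkChecksumSketch {

  /** Computes one CRC32 value per bytesPerChecksum-sized window of data. */
  public static List<Long> computeChecksums(byte[] data, int bytesPerChecksum) {
    List<Long> sums = new ArrayList<>();
    for (int off = 0; off < data.length; off += bytesPerChecksum) {
      int len = Math.min(bytesPerChecksum, data.length - off);
      CRC32 crc = new CRC32();
      crc.update(data, off, len);
      sums.add(crc.getValue());
    }
    return sums;
  }

  /**
   * Recomputes window checksums over the on-disk bytes and compares them to
   * the persisted list; returns false on any mismatch, including a chunk
   * file shorter than expected (e.g. after truncation).
   */
  public static boolean verify(byte[] onDisk, int bytesPerChecksum,
      List<Long> expected) {
    return computeChecksums(onDisk, bytesPerChecksum).equals(expected);
  }
}
```

   A truncated chunk file produces fewer (or different) window checksums, so verification fails, which is exactly what the corruption test in this PR induces by halving the file length.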





[GitHub] [hadoop] hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-08-19 Thread GitBox
hgadre commented on a change in pull request #1154: [HDDS-1200] Add support for 
checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r315425380
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
 ##
 @@ -120,10 +133,70 @@ public TestKeyValueContainerCheck(String metadataImpl) {
 container.close();
 
 // next run checks on a Closed Container
-valid = kvCheck.fullCheck();
+valid = kvCheck.fullCheck(new DataTransferThrottler(
+c.getBandwidthPerVolume()), null);
 assertTrue(valid);
   }
 
+  /**
+   * Sanity test, when there are corruptions induced.
+   * @throws Exception
+   */
+  @Test
+  public void testKeyValueContainerCheckCorruption() throws Exception {
+long containerID = 102;
+int deletedBlocks = 1;
+int normalBlocks = 3;
+int chunksPerBlock = 4;
+boolean valid = false;
+ContainerScrubberConfiguration sc = conf.getObject(
+ContainerScrubberConfiguration.class);
+
+// test Closed Container
+createContainerWithBlocks(containerID, normalBlocks, deletedBlocks, 65536,
+chunksPerBlock);
+File chunksPath = new File(containerData.getChunksPath());
+assertTrue(chunksPath.listFiles().length
+== (deletedBlocks + normalBlocks) * chunksPerBlock);
+
+container.close();
+
+KeyValueContainerCheck kvCheck =
+new KeyValueContainerCheck(containerData.getMetadataPath(), conf,
+containerID);
+
+File metaDir = new File(containerData.getMetadataPath());
+File dbFile = KeyValueContainerLocationUtil
+.getContainerDBFile(metaDir, containerID);
+containerData.setDbFile(dbFile);
+try(ReferenceCountedDB db =
+BlockUtils.getDB(containerData, conf);
+KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
+new File(containerData.getContainerPath()))) {
+  BlockData block = kvIter.nextBlock();
+  assertTrue(!block.getChunks().isEmpty());
+  ContainerProtos.ChunkInfo c = block.getChunks().get(0);
+  File chunkFile = ChunkUtils.getChunkFile(containerData,
+  ChunkInfo.getFromProtoBuf(c));
+  long length = chunkFile.length();
+  assertTrue(length > 0);
+  // forcefully truncate the file to induce failure.
+  try (RandomAccessFile file = new RandomAccessFile(chunkFile, "rws")) {
+file.setLength(length / 2);
+  }
+  assertEquals(length/2, chunkFile.length());
+}
+
+// metadata check should pass.
+valid = kvCheck.fastCheck();
+assertTrue(valid);
+
+// checksum validation should fail.
+valid = kvCheck.fullCheck(new DataTransferThrottler(
+sc.getBandwidthPerVolume()), null);
+assertFalse(valid);
+  }
+
   /**
* Creates a container with normal and deleted blocks.
* First it will insert normal blocks, and then it will insert
 
 Review comment:
   Not sure I am following you. Can you elaborate on which part you find 
misleading? This function was present before this patch ...
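   For context, fullCheck above is handed a DataTransferThrottler built from the configured bandwidth-per-volume, which caps how many bytes per second the scrubber reads from disk. A minimal token-bucket-style sketch of that idea (illustrative only, not Hadoop's DataTransferThrottler; the clock is injected so the pacing logic is testable without real sleeps):

```java
import java.util.function.LongSupplier;

// Illustrative sketch of the bandwidth cap a DataTransferThrottler enforces
// on scrubber reads; not Hadoop's implementation.
public class ThrottleSketch {
  private final long bytesPerSecond;
  private final LongSupplier clockMillis;
  private long windowStart;     // start of the current 1-second window (ms)
  private long bytesThisWindow; // bytes already accounted in this window

  public ThrottleSketch(long bytesPerSecond, LongSupplier clockMillis) {
    this.bytesPerSecond = bytesPerSecond;
    this.clockMillis = clockMillis;
    this.windowStart = clockMillis.getAsLong();
  }

  /**
   * Accounts numBytes against the current one-second window and returns how
   * many milliseconds the caller should pause to stay under the cap
   * (0 when still within budget).
   */
  public long throttle(long numBytes) {
    long now = clockMillis.getAsLong();
    if (now - windowStart >= 1000) { // new window: reset the byte budget
      windowStart = now;
      bytesThisWindow = 0;
    }
    bytesThisWindow += numBytes;
    if (bytesThisWindow <= bytesPerSecond) {
      return 0;                      // within budget, no pause needed
    }
    return 1000 - (now - windowStart); // wait out the rest of the window
  }
}
```

   The real throttler also accepts a Canceler so a long pause can be aborted when the scrubber shuts down, as the `throttler.throttle(v, canceler)` call in the diff shows.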





[GitHub] [hadoop] bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-522762718
 
 
   /retest





[GitHub] [hadoop] bharatviswa504 commented on issue #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
bharatviswa504 commented on issue #1304: HDDS-1972. Provide example ha proxy 
with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522752048
 
 
   > +1
   > 
   > @adoroszlai are you okay to commit this?
   
   Got a +1 from @adoroszlai already, so committed this change.





[GitHub] [hadoop] bharatviswa504 merged pull request #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
bharatviswa504 merged pull request #1304: HDDS-1972. Provide example ha proxy 
with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
bharatviswa504 commented on issue #1304: HDDS-1972. Provide example ha proxy 
with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522751627
 
 
   Thank You @arp7 and @adoroszlai for the review.
   I will commit this to the trunk.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
bharatviswa504 commented on a change in pull request #1304: HDDS-1972. Provide 
example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r315409086
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
 ##
 @@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   s3g:
+  image: haproxy:latest
+  volumes:
+ - ../..:/opt/hadoop
+ - ./haproxy-conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
+  ports:
+ - 9878:9878
+   datanode:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+- ../..:/opt/hadoop
+  ports:
+- 9864
+  command: ["ozone","datanode"]
+  env_file:
+- ./docker-config
+   om:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+ - ../..:/opt/hadoop
+  ports:
+ - 9874:9874
+  environment:
+ ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
+  env_file:
+  - ./docker-config
+  command: ["ozone","om"]
+   scm:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+ - ../..:/opt/hadoop
+  ports:
+ - 9876:9876
+  env_file:
+  - ./docker-config
+  environment:
+  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
+  command: ["ozone","scm"]
+   s3g1:
 
 Review comment:
   Yes, we can do that, but we would need an ha-proxy image that can take some 
configuration, so that we can dynamically configure the s3 proxies and start/stop them.





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-19 Thread GitBox
bharatviswa504 opened a new pull request #1315: HDDS-1975. Implement default 
acls for bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315
 
 
   





[GitHub] [hadoop] arp7 commented on a change in pull request #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
arp7 commented on a change in pull request #1304: HDDS-1972. Provide example ha 
proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r315393614
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
 ##
 @@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   s3g:
+  image: haproxy:latest
+  volumes:
+ - ../..:/opt/hadoop
+ - ./haproxy-conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
+  ports:
+ - 9878:9878
+   datanode:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+- ../..:/opt/hadoop
+  ports:
+- 9864
+  command: ["ozone","datanode"]
+  env_file:
+- ./docker-config
+   om:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+ - ../..:/opt/hadoop
+  ports:
+ - 9874:9874
+  environment:
+ ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
+  env_file:
+  - ./docker-config
+  command: ["ozone","om"]
+   scm:
+  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  volumes:
+ - ../..:/opt/hadoop
+  ports:
+ - 9876:9876
+  env_file:
+  - ./docker-config
+  environment:
+  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
+  command: ["ozone","scm"]
+   s3g1:
 
 Review comment:
   These can be collapsed into one entry, then we can use scale option. Not a 
blocker to commit.





[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2019-08-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910685#comment-16910685
 ] 

Hadoop QA commented on HADOOP-14784:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-14784 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882490/HADOOP-14784.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d6466df1c64d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d69a1a0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16494/testReport/ |
| Max. process+thread count | 418 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16494/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[GitHub] [hadoop] hadoop-yetus commented on issue #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
hadoop-yetus commented on issue #1304: HDDS-1972. Provide example ha proxy with 
multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522721525
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 601 | trunk passed |
   | +1 | compile | 375 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 723 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 566 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 652 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 316 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2696 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6969 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.dn.scrubber.TestDataScrubber |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1304 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux a05c19df5e06 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c765584 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/7/testReport/ |
   | Max. process+thread count | 3723 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/7/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] jojochuang closed pull request #1294: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-19 Thread GitBox
jojochuang closed pull request #1294: HDFS-14665. HttpFS: LISTSTATUS response 
is missing HDFS-specific fields
URL: https://github.com/apache/hadoop/pull/1294
 
 
   





[GitHub] [hadoop] jojochuang opened a new pull request #1294: HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-19 Thread GitBox
jojochuang opened a new pull request #1294: HDFS-14665. HttpFS: LISTSTATUS 
response is missing HDFS-specific fields
URL: https://github.com/apache/hadoop/pull/1294
 
 
   Forked from PR #1291





[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2019-08-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910638#comment-16910638
 ] 

Hudson commented on HADOOP-14784:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17148 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17148/])
HADOOP-14784. [KMS] Improve KeyAuthorizationKeyProvider#toString(). (weichiu: 
rev 51b65370b9457e2b9e65a630b7a15f30210a1dc4)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java


> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Yeliang Cang
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-14784.001.patch
>
>
> When KMS server starts, it loads KeyProviderCryptoExtension and prints the 
> following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing as KeyAuthorizationKeyProvider is loaded but not 
> shown in this message. KeyAuthorizationKeyProvider#toString should be 
> improved, so that in addition to its internal provider, also print its own 
> class name when loaded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2019-08-19 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14784:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~dineshchitlangia] and [~Cyl]
Pushed the patch 001 to trunk.







[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2019-08-19 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910625#comment-16910625
 ] 

Wei-Chiu Chuang commented on HADOOP-14784:
--

+1
Sorry, I missed this one.







[jira] [Assigned] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2019-08-19 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-14784:


Assignee: Yeliang Cang







[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-08-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910621#comment-16910621
 ] 

Steve Loughran commented on HADOOP-16478:
-

Plus: have the DDB metastore print this fact out in the error.

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics missing.
> Preferred: catch the exception, warn that the location is unknown, and only 
> log the stack trace at debug.
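The preferred handling can be sketched as follows; the LocationSource interface and method names are illustrative stand-ins, not the real S3AFileSystem/S3GuardTool API:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch: catch the permission failure, report the location as
// unknown, and keep the stack trace at debug (FINE) level only, so the rest
// of the bucket-info diagnostics still print.
public class BucketInfoSketch {
    private static final Logger LOG = Logger.getLogger("BucketInfoSketch");

    // Stand-in for whatever client resolves bucket regions.
    interface LocationSource {
        String getBucketLocation(String bucket);
    }

    static String describeLocation(LocationSource source, String bucket) {
        try {
            return source.getBucketLocation(bucket);
        } catch (RuntimeException e) {
            // Don't fail the whole command on an access-denied response;
            // warn that the location is unknown and log the trace at debug.
            LOG.log(Level.FINE, "Location of " + bucket + " is unknown", e);
            return "unknown (permission denied)";
        }
    }

    public static void main(String[] args) {
        LocationSource denied = bucket -> {
            throw new SecurityException("403: not authorized");
        };
        // Prints "unknown (permission denied)" instead of a stack trace.
        System.out.println(describeLocation(denied, "example-bucket"));
    }
}
```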






[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-08-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910620#comment-16910620
 ] 

Steve Loughran commented on HADOOP-16478:
-

Also: document for S3Guard that you need this permission for DDB if you don't 
set fs.s3a.s3guard.ddb.region.







[GitHub] [hadoop] hanishakoneru merged pull request #1259: HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager

2019-08-19 Thread GitBox
hanishakoneru merged pull request #1259: HDDS-1105 : Add mechanism in Recon to 
obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259
 
 
   





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread GitBox
bharatviswa504 edited a comment on issue #1304: HDDS-1972. Provide example ha 
proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522679586
 
 
   I see the tests passing locally on my laptop, and I was able to test 
ha-proxy with the AWS CLI. I have not been able to figure out why, on Jenkins, 
it cannot connect to http://s3g:9878.
   
   For now, I have disabled the test, but this still gives users an example of 
an ha-proxy setup with the S3 Gateway server. (I will open a new Jira 
(https://issues.apache.org/jira/browse/HDDS-1983) to enable the s3 test suite 
for the s3 proxy.)










[GitHub] [hadoop] hanishakoneru commented on issue #1259: HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager

2019-08-19 Thread GitBox
hanishakoneru commented on issue #1259: HDDS-1105 : Add mechanism in Recon to 
obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#issuecomment-522674063
 
 
   Thank you @avijayanhwx. 
   +1





[GitHub] [hadoop] sunilgovind commented on issue #1297: HDFS-14729. Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread GitBox
sunilgovind commented on issue #1297: HDFS-14729. Upgrade Bootstrap and jQuery 
versions used in HDFS UIs
URL: https://github.com/apache/hadoop/pull/1297#issuecomment-522656664
 
 
   +1





[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-19 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910512#comment-16910512
 ] 

Sean Busbey commented on HADOOP-15998:
--

The current QA result looks promising. I haven't reviewed since December 2018. 
Presuming v4 fixes the integration tests to actually fail on the issue that 
was failing before, it looks like just a few shellcheck warnings to clean up 
before this is good to go.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0
>
> Attachments: HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)
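A minimal sketch of the two portability points, expressed in Java for concreteness (the scripts affected by this issue are bash); the helper names are illustrative, not part of the Hadoop build scripts:

```java
import java.io.File;
import java.util.regex.Pattern;

// Sketch of the two assumptions the issue calls out:
//  1) split path lists on the platform separator (';' on Windows, ':' on
//     Unix) rather than a hard-coded colon, which collides with "C:\...";
//  2) strip a trailing '\r' so CRLF tool output compares equal to LF output.
public class PortabilitySketch {

    static String[] splitPathList(String paths) {
        // File.pathSeparator is ";" on Windows and ":" elsewhere; quote it
        // because String.split takes a regex.
        return paths.split(Pattern.quote(File.pathSeparator));
    }

    static String stripTrailingCr(String line) {
        return line.endsWith("\r")
            ? line.substring(0, line.length() - 1)
            : line;
    }

    public static void main(String[] args) {
        for (String p : splitPathList(String.join(File.pathSeparator,
                "lib/a.jar", "lib/b.jar"))) {
            System.out.println(p);
        }
        // Lines from CRLF output may carry a trailing '\r';
        // normalize before comparing.
        System.out.println(stripTrailingCr("ok\r").equals("ok"));
    }
}
```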






[GitHub] [hadoop] bshashikant commented on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.

2019-08-19 Thread GitBox
bshashikant commented on issue #1226: HDDS-1610. applyTransaction failure 
should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-522623256
 
 
   The test failures are not related to the patch. 





[GitHub] [hadoop] leosunli opened a new pull request #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable

2019-08-19 Thread GitBox
leosunli opened a new pull request #1314: HDFS-14748. Make 
DataNodePeerMetrics#minOutlierDetectionSamples configurable
URL: https://github.com/apache/hadoop/pull/1314
 
 
   Signed-off-by: sunlisheng 





[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1310: HDDS-1978. Create helper script to run blockade tests.

2019-08-19 Thread GitBox
nandakumar131 commented on a change in pull request #1310: HDDS-1978. Create 
helper script to run blockade tests.
URL: https://github.com/apache/hadoop/pull/1310#discussion_r315247298
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/blockade.sh
 ##
 @@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
+OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g' | sed 's/^[ \t]*//')
+cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
+
+source ../compose/ozoneblockade/.env
+export HADOOP_RUNNER_VERSION
 
 Review comment:
   The variables are set by `source ../compose/ozoneblockade/.env`.
   The `.env` file contains the proper values.





[GitHub] [hadoop] mukul1987 commented on a change in pull request #1310: HDDS-1978. Create helper script to run blockade tests.

2019-08-19 Thread GitBox
mukul1987 commented on a change in pull request #1310: HDDS-1978. Create helper 
script to run blockade tests.
URL: https://github.com/apache/hadoop/pull/1310#discussion_r315245144
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/blockade.sh
 ##
 @@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
+OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g' | sed 's/^[ \t]*//')
+cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
+
+source ../compose/ozoneblockade/.env
+export HADOOP_RUNNER_VERSION
 
 Review comment:
   How are these variables populated?





[GitHub] [hadoop] iwasakims commented on a change in pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-19 Thread GitBox
iwasakims commented on a change in pull request #1307: HADOOP-15958. Revisiting 
LICENSE and NOTICE files
URL: https://github.com/apache/hadoop/pull/1307#discussion_r315193836
 
 

 ##
 File path: LICENSE-binary
 ##
 @@ -0,0 +1,531 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is provided in the Appendix below).
+
+  "Derivative Works" shall mean any work, whether in Source or Object
+  form, that is based on (or derived from) the Work and for which the
+  editorial revisions, annotations, elaborations, or other modifications
+  represent, as a whole, an original work of authorship. For the purposes
+  of this License, Derivative Works shall not include works that remain
+  separable from, or merely link (or bind by name) to the interfaces of,
+  the Work and Derivative Works thereof.
+
+  "Contribution" shall mean any work of authorship, including
+  the original version of the Work and any modifications or additions
+  to that Work or Derivative Works thereof, that is intentionally
+  submitted to Licensor for inclusion in the Work by the copyright owner
+  or by an individual or Legal Entity authorized to submit on behalf of
+  the copyright owner. For the purposes of this definition, "submitted"
+  means any form of electronic, verbal, or written communication sent
+  to the Licensor or its representatives, including but not limited to
+  communication on electronic mailing lists, source code control systems,
+  and issue tracking systems that are managed by, or on behalf of, the
+  Licensor for the purpose of discussing and improving the Work, but
+  excluding communication that is conspicuously marked or otherwise
+  designated in writing by the copyright owner as "Not a Contribution."
+
+  "Contributor" shall mean Licensor and any individual or Legal Entity
+  on behalf of whom a Contribution has been received by Licensor and
+  subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  copyright license to reproduce, prepare Derivative Works of,
+  publicly display, publicly perform, sublicense, and distribute the
+  Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  (except as stated in this section) patent license to make, have made,
+  use, offer to sell, sell, import, and otherwise transfer the Work,
+  where such license applies only to those patent claims licensable
+  by such Contributor that are necessarily infringed by their
+  Contribution(s) alone or by combination of their Contribution(s)
+  with the Work to which such Contribution(s) was submitted. If You
+  institute patent litigation against any entity (including a
+  cross-claim or counterclaim in 

[GitHub] [hadoop] iwasakims commented on a change in pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-19 Thread GitBox
iwasakims commented on a change in pull request #1307: HADOOP-15958. Revisiting 
LICENSE and NOTICE files
URL: https://github.com/apache/hadoop/pull/1307#discussion_r315192775
 
 


[GitHub] [hadoop] ehiggs commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-08-19 Thread GitBox
ehiggs commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide 
the INode type.
URL: https://github.com/apache/hadoop/pull/1313#issuecomment-522478887
 
 
   Updated MR to fix findbugs and checkstyle issues.





[GitHub] [hadoop] asagjj edited a comment on issue #974: HDFS-9913. DistCp to add -useTrash to move deleted files to Trash

2019-08-19 Thread GitBox
asagjj edited a comment on issue #974: HDFS-9913. DistCp to add -useTrash to 
move deleted files to Trash
URL: https://github.com/apache/hadoop/pull/974#issuecomment-512247741
 
 
   @steveloughran Sorry about that, I could not reproduce this failure in my 
environment. And since this PR is closed, I opened a new one to update the 
commit: https://github.com/apache/hadoop/pull//





[GitHub] [hadoop] Cosss7 closed pull request #1129: HDFS-14509 DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-08-19 Thread GitBox
Cosss7 closed pull request #1129: HDFS-14509 DN throws InvalidToken due to 
inequality of password when upgrade NN 2.x to 3.x
URL: https://github.com/apache/hadoop/pull/1129
 
 
   

