[GitHub] [hadoop] DadanielZ commented on issue #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


DadanielZ commented on issue #1923:
URL: https://github.com/apache/hadoop/pull/1923#issuecomment-617545247


   Looks good to me, +1.
   Merged into trunk






[GitHub] [hadoop] DadanielZ commented on a change in pull request #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


DadanielZ commented on a change in pull request #1923:
URL: https://github.com/apache/hadoop/pull/1923#discussion_r412662675



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
##
@@ -46,16 +48,53 @@
*
* @param adaptee the custom token provider
*/
-  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee) {
+  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee, int customTokenFetchRetryCount) {
 Preconditions.checkNotNull(adaptee, "adaptee");
 this.adaptee = adaptee;
+fetchTokenRetryCount = customTokenFetchRetryCount;
   }
 
   protected AzureADToken refreshToken() throws IOException {
 LOG.debug("AADToken: refreshing custom based token");
 
 AzureADToken azureADToken = new AzureADToken();
-azureADToken.setAccessToken(adaptee.getAccessToken());
+
+String accessToken = null;
+
+Exception ex;
+boolean succeeded = false;
+// Custom token providers should have their own retry policies,
+// Providing a linear retry option for the retry count
+// mentioned in config "fs.azure.custom.token.fetch.retry.count"
+int retryCount = fetchTokenRetryCount;
+do {
+  ex = null;
+  try {
+accessToken = adaptee.getAccessToken();
+LOG.trace("CustomTokenProvider Access token fetch was successful with retry count {}", retryCount);

Review comment:
   I see, this is not string concatenation.
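
   The diff above is truncated mid-loop. For reference, a minimal sketch of 
such a linear retry around the adaptee follows; everything after the catch 
block (loop condition, failure handling) is an assumption for illustration, 
not the committed patch:

```java
protected AzureADToken refreshToken() throws IOException {
  AzureADToken azureADToken = new AzureADToken();
  String accessToken = null;
  Exception ex;
  boolean succeeded = false;
  // linear retry for the count configured in
  // "fs.azure.custom.token.fetch.retry.count"
  int retryCount = fetchTokenRetryCount;
  do {
    ex = null;
    try {
      accessToken = adaptee.getAccessToken();
      // parameterized SLF4J call: the argument is only formatted when trace
      // is enabled, so this is not string concatenation
      LOG.trace("CustomTokenProvider Access token fetch was successful with "
          + "retry count {}", retryCount);
      succeeded = true;
    } catch (Exception e) {
      ex = e;
      retryCount--;
    }
  } while (!succeeded && retryCount >= 0);
  if (!succeeded) {
    // assumed failure handling; the real patch may differ
    throw new IOException("Custom token fetch failed after retries", ex);
  }
  azureADToken.setAccessToken(accessToken);
  return azureADToken;
}
```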








[GitHub] [hadoop] DadanielZ commented on a change in pull request #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


DadanielZ commented on a change in pull request #1923:
URL: https://github.com/apache/hadoop/pull/1923#discussion_r412662007



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
##
@@ -46,16 +48,53 @@
*
* @param adaptee the custom token provider
*/
-  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee) {
+  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee, int customTokenFetchRetryCount) {
 Preconditions.checkNotNull(adaptee, "adaptee");
 this.adaptee = adaptee;
+fetchTokenRetryCount = customTokenFetchRetryCount;
   }
 
   protected AzureADToken refreshToken() throws IOException {
 LOG.debug("AADToken: refreshing custom based token");
 
 AzureADToken azureADToken = new AzureADToken();
-azureADToken.setAccessToken(adaptee.getAccessToken());
+
+String accessToken = null;
+
+Exception ex;
+boolean succeeded = false;
+// Custom token providers should have their own retry policies,
+// Providing a linear retry option for the retry count
+// mentioned in config "fs.azure.custom.token.fetch.retry.count"
+int retryCount = fetchTokenRetryCount;
+do {
+  ex = null;
+  try {
+accessToken = adaptee.getAccessToken();
+LOG.trace("CustomTokenProvider Access token fetch was successful with retry count {}", retryCount);
+  } catch (Exception e) {

Review comment:
   Yeah, catching Exception should be enough.








[GitHub] [hadoop] DadanielZ commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


DadanielZ commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412655425



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##
@@ -732,6 +740,11 @@ void setListMaxResults(int listMaxResults) {
 this.listMaxResults = listMaxResults;
   }
 
+  @VisibleForTesting
+  void setIsNamespaceEnabledAccount(String isNamespaceEnabledAccount) {
+this.isNamespaceEnabledAccount = isNamespaceEnabledAccount;

Review comment:
   The method signature should be simpler: it is better to declare the 
parameter as `boolean` and do the string conversion inside this method, or to 
declare `isNamespaceEnabledAccount` as an `Enum` to differentiate between 
`NOT_SET`, `TRUE` and `FALSE`.
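
   A minimal sketch of the `Enum` variant of this suggestion; the field and 
setter names are illustrative, not the actual AbfsConfiguration members:

```java
// illustrative sketch only (assumes Guava's @VisibleForTesting is imported)
enum NamespaceEnabled { NOT_SET, TRUE, FALSE }

private NamespaceEnabled isNamespaceEnabledAccount = NamespaceEnabled.NOT_SET;

@VisibleForTesting
void setIsNamespaceEnabledAccount(boolean isNamespaceEnabled) {
  // a boolean parameter keeps the signature simple; the tri-state enum still
  // distinguishes "never configured" from an explicit true/false
  this.isNamespaceEnabledAccount =
      isNamespaceEnabled ? NamespaceEnabled.TRUE : NamespaceEnabled.FALSE;
}
```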

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -255,6 +260,20 @@ public boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
 return isNamespaceEnabled;
   }
 
+  @VisibleForTesting
+  boolean isNameSpaceEnabledSetFromConfig() {
+final String hnsEnabledConfig = abfsConfiguration
+.getIsNamespaceEnabledAccount();

Review comment:
   Is the line break needed?








[GitHub] [hadoop] DadanielZ commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


DadanielZ commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412652364



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -234,6 +236,9 @@ String decodeAttribute(byte[] value) throws 
UnsupportedEncodingException {
 
   public boolean getIsNamespaceEnabled() throws AzureBlobFileSystemException {
 if (!isNamespaceEnabledSet) {
+  if (isNameSpaceEnabledSetFromConfig()) {

Review comment:
   These two `if` statements can be merged.
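
   A minimal sketch of the merge, assuming no other statements sit between the 
two conditions in the patch; the helper getIsNamespaceEnabledFromConfig() is 
hypothetical:

```java
if (!isNamespaceEnabledSet && isNameSpaceEnabledSetFromConfig()) {
  // hypothetical body: take the answer from config instead of probing the service
  isNamespaceEnabled = getIsNamespaceEnabledFromConfig();
  isNamespaceEnabledSet = true;
}
```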
   








[jira] [Updated] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2020-04-21 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16671:
-
Status: Patch Available  (was: Reopened)

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> The code of InnerNodeImpl#getLeaf() is shown above. I think it has two 
> problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, then 
> children.indexOf(excludedNode) must return > -1, so is the check 
> if (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will go out of range and null is returned, even though there are 
> nodes that could be returned.
> I think a check for excludedIndex == children.size() - 1 should be added.
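
For point 1, a minimal sketch of the simplification being described, assuming 
childrenMap and children are kept in sync as the report argues (illustrative 
only, not a committed patch):

{code:java}
// When childrenMap.containsKey(name) holds and the map mirrors the children
// list, indexOf() cannot return -1, so that check can be dropped.
if (isLeaf && excludedNode != null
    && childrenMap.containsKey(excludedNode.getName())) {
  int excludedIndex = children.indexOf(excludedNode);
  if (leafIndex >= excludedIndex) {
    // excluded node is one of the children, so adjust the leaf index
    leafIndex++;
  }
}
{code}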






[jira] [Created] (HADOOP-17005) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-21 Thread Maziar Mirzazad (Jira)
Maziar Mirzazad created HADOOP-17005:


 Summary: Add capability in hadoop-client to automatically login 
from a client/service keytab
 Key: HADOOP-17005
 URL: https://issues.apache.org/jira/browse/HADOOP-17005
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Maziar Mirzazad
 Fix For: 2.9.2


With the existing Hadoop client implementation, client applications for 
services that use kerberized clusters need to handle keytab-based login in 
their code before making HDFS or M/R API calls.

To avoid that, we propose adding keytab-based auto login to the Hadoop client 
library, with configurable and default paths for keytabs. 
This functionality helps new service owners as well as those transitioning from 
non-kerberized clusters to kerberized ones.

Auto login should avoid extra login attempts when a valid TGT is already 
available.
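
A minimal sketch of what such an auto login could look like on top of the 
existing UserGroupInformation API; the default keytab path, class and method 
names are hypothetical illustrations of the proposal, not existing Hadoop 
settings:

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class AutoKeytabLogin {
  // hypothetical default; the proposal is to make this configurable
  private static final String DEFAULT_KEYTAB =
      "/etc/security/keytabs/service.keytab";

  public static void loginIfNeeded(String principal) throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return; // non-kerberized cluster, nothing to do
    }
    UserGroupInformation current = UserGroupInformation.getCurrentUser();
    if (current.hasKerberosCredentials()) {
      return; // credentials already present, avoid an extra login attempt
    }
    File keytab = new File(DEFAULT_KEYTAB);
    if (keytab.isFile()) {
      UserGroupInformation.loginUserFromKeytab(principal,
          keytab.getAbsolutePath());
    }
  }
}
{code}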






[GitHub] [hadoop] hadoop-yetus commented on issue #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1923:
URL: https://github.com/apache/hadoop/pull/1923#issuecomment-617459239


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 11s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  56m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1923/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1923 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 2058f7100ba8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 264e49c |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1923/6/testReport/ |
   | Max. process+thread count | 400 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1923/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] snvijaya commented on a change in pull request #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


snvijaya commented on a change in pull request #1923:
URL: https://github.com/apache/hadoop/pull/1923#discussion_r412520627



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
##
@@ -46,16 +48,53 @@
*
* @param adaptee the custom token provider
*/
-  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee) {
+  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee, int customTokenFetchRetryCount) {
 Preconditions.checkNotNull(adaptee, "adaptee");
 this.adaptee = adaptee;
+fetchTokenRetryCount = customTokenFetchRetryCount;
   }
 
   protected AzureADToken refreshToken() throws IOException {
 LOG.debug("AADToken: refreshing custom based token");
 
 AzureADToken azureADToken = new AzureADToken();
-azureADToken.setAccessToken(adaptee.getAccessToken());
+
+String accessToken = null;
+
+Exception ex;
+boolean succeeded = false;
+// Custom token providers should have their own retry policies,
+// Providing a linear retry option for the retry count
+// mentioned in config "fs.azure.custom.token.fetch.retry.count"
+int retryCount = fetchTokenRetryCount;
+do {
+  ex = null;
+  try {
+accessToken = adaptee.getAccessToken();
+LOG.trace("CustomTokenProvider Access token fetch was successful with retry count {}", retryCount);
+  } catch (Exception e) {

Review comment:
   I checked a few articles and the recommendation seems to be not to 
catch Throwable. At this point we need to catch any exception coming from the 
CustomTokenProvider.
   
   Pasting a couple of links I referenced:
   - https://stackify.com/best-practices-exceptions-java/ => scroll to "6. 
Don’t Catch Throwable"
   - 
https://www.baeldung.com/java-catch-throwable-bad-practice#catching-throwable
   
   Please let me know if you still think this needs to be changed.








[GitHub] [hadoop] snvijaya commented on a change in pull request #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


snvijaya commented on a change in pull request #1923:
URL: https://github.com/apache/hadoop/pull/1923#discussion_r412518187



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
##
@@ -46,16 +48,53 @@
*
* @param adaptee the custom token provider
*/
-  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee) {
+  public CustomTokenProviderAdapter(CustomTokenProviderAdaptee adaptee, int customTokenFetchRetryCount) {
 Preconditions.checkNotNull(adaptee, "adaptee");
 this.adaptee = adaptee;
+fetchTokenRetryCount = customTokenFetchRetryCount;
   }
 
   protected AzureADToken refreshToken() throws IOException {
 LOG.debug("AADToken: refreshing custom based token");
 
 AzureADToken azureADToken = new AzureADToken();
-azureADToken.setAccessToken(adaptee.getAccessToken());
+
+String accessToken = null;
+
+Exception ex;
+boolean succeeded = false;
+// Custom token providers should have their own retry policies,
+// Providing a linear retry option for the retry count
+// mentioned in config "fs.azure.custom.token.fetch.retry.count"
+int retryCount = fetchTokenRetryCount;
+do {
+  ex = null;
+  try {
+accessToken = adaptee.getAccessToken();
+LOG.trace("CustomTokenProvider Access token fetch was successful with retry count {}", retryCount);

Review comment:
   Retaining the trace log, as we have had a couple of debugging scenarios 
where the customTokenProvider implementation hung on execution. A trace log 
right after adaptee.getAccessToken() will give a quick indication of the issue.
   From previous PRs, we have confirmation from Steve too that it's not costly. 
   https://github.com/apache/hadoop/pull/1842#discussion_r384066843
   
   Have fixed the wrong retry count that was being displayed in the log.
   








[GitHub] [hadoop] snvijaya commented on issue #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy

2020-04-21 Thread GitBox


snvijaya commented on issue #1923:
URL: https://github.com/apache/hadoop/pull/1923#issuecomment-617432357


   Latest test results:
   
   East US 2 Account - With HNS
   
   [INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 416, Failures: 0, Errors: 0, Skipped: 66
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 140
   
   East US 2 Account - Without HNS
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 140
   [INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 416, Failures: 0, Errors: 0, Skipped: 240






[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617421026


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux bcfed1f12a89 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 264e49c |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/8/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HADOOP-16922) ABFS: Change in User-Agent header

2020-04-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089015#comment-17089015
 ] 

Hudson commented on HADOOP-16922:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18171/])
HADOOP-16922. ABFS: Change User-Agent header (#1938) (github: rev 
264e49c8f2cfd15826655bbc1847f378f60ad8c7)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java


> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add more information to the User-Agent header, like cluster name, cluster 
> type, java vendor etc.
> * Add APN/1.0 at the beginning






[jira] [Commented] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2020-04-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089014#comment-17089014
 ] 

Hudson commented on HADOOP-16965:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18171/])
HADOOP-16965. Refactor abfs stream configuration. (#1956) (github: rev 
8031c66295b530dcaae9e00d4f656330bc3b3952)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsStreamContext.java


> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class like StreamContext and pass that around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 
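
A minimal sketch of the context-object idea described above; the class, field 
and method names are illustrative, not the committed API:

{code:java}
// Illustrative only: bundle per-stream settings into one object so stream
// constructors stop growing by one parameter per feature.
class AbfsInputStreamContext {
  private int readBufferSize;
  private int readAheadQueueDepth;

  AbfsInputStreamContext withReadBufferSize(int size) {
    this.readBufferSize = size;
    return this;
  }

  AbfsInputStreamContext withReadAheadQueueDepth(int depth) {
    this.readAheadQueueDepth = depth;
    return this;
  }

  int getReadBufferSize() {
    return readBufferSize;
  }

  int getReadAheadQueueDepth() {
    return readAheadQueueDepth;
  }
}

// Call sites then read fluently and stay stable as settings are added:
// new AbfsInputStreamContext().withReadBufferSize(4 * 1024 * 1024)
//     .withReadAheadQueueDepth(2);
{code}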






[jira] [Updated] (HADOOP-16989) Update JaegerTracing

2020-04-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16989:
-
Target Version/s: thirdparty-1.1.0

> Update JaegerTracing
> 
>
> Key: HADOOP-16989
> URL: https://issues.apache.org/jira/browse/HADOOP-16989
> Project: Hadoop Common
>  Issue Type: Task
>  Components: hadoop-thirdparty
>Affects Versions: thirdparty-1.0.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We currently use JaegerTracing 0.34.0. The latest is 1.2.0. We are several 
> versions behind and should update. Note this update requires the latest 
> version of OpenTracing and has several breaking changes.






[jira] [Commented] (HADOOP-16916) ABFS: Delegation SAS generator for integration with Ranger

2020-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088986#comment-17088986
 ] 

Hadoop QA commented on HADOOP-16916:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
2s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 9 unchanged - 0 fixed = 11 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1965 |
| JIRA Issue | HADOOP-16916 |
| Optional Tests | dupname asflicense xml compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 148baa8fa3e6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 264e49c |

[GitHub] [hadoop] hadoop-yetus commented on issue #1965: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1965:
URL: https://github.com/apache/hadoop/pull/1965#issuecomment-617357370


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 37s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 32s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  68m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1965 |
   | JIRA Issue | HADOOP-16916 |
   | Optional Tests | dupname asflicense xml compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 148baa8fa3e6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 264e49c |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/4/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617344317


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 14s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 49s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  8s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 39s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  57m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux f21cb121edf0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 264e49c |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/7/testReport/ |
   | Max. process+thread count | 403 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] ThomasMarquardt commented on issue #1965: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-04-21 Thread GitBox


ThomasMarquardt commented on issue #1965:
URL: https://github.com/apache/hadoop/pull/1965#issuecomment-617331272


   Thanks for the update; I merged and pushed it.
   
   All tests passing against my eastus2euap account:
   
   $ mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   [INFO] Tests run: 58, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 33
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24






[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088899#comment-17088899
 ] 

Hadoop QA commented on HADOOP-17001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 24m 
19s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 19m 
36s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 36s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16904/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000723/HADOOP-17001.005.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f2b45093af3b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 60fa153 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16904/artifact/out/branch-compile-root.txt
 |
| compile | 

[GitHub] [hadoop] bilaharith commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412349747



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -27,6 +27,14 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public final class ConfigurationKeys {
+
+  /**
+   * Config to specify if the configured account is HNS enabled or not.
+   * If this config is not set,

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -110,6 +110,7 @@
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_IS_HNS_ENABLED;

Review comment:
   Done








[GitHub] [hadoop] bilaharith removed a comment on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith removed a comment on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617260280


   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   **Account with HNS Support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 66
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   **Account without HNS support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 240
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   






[GitHub] [hadoop] snvijaya commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


snvijaya commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412343848



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -110,6 +110,7 @@
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_IS_HNS_ENABLED;

Review comment:
   Checkstyle bug:  Unused import - 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_IS_HNS_ENABLED.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -27,6 +27,14 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public final class ConfigurationKeys {
+
+  /**
+   * Config to specify if the configured account is HNS enabled or not.
+   * If this config is not set,

Review comment:
   Fix the newline in the middle of the sentence; there is more room before 
hitting the max line character count.








[jira] [Commented] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2020-04-21 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1705#comment-1705
 ] 

Steve Loughran commented on HADOOP-16965:
-

(got some git repo sync issues between amazon and github; can't cherry pick 
right now)

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class like StreamContext and pass that around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 






[jira] [Commented] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2020-04-21 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1704#comment-1704
 ] 

Steve Loughran commented on HADOOP-16965:
-

merged in to trunk; about to merge into branch-3.3 if I can


> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class like StreamContext and pass that around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 






[GitHub] [hadoop] steveloughran commented on a change in pull request #1916: MAPREDUCE-7267 Merge paths with multi threads during commit job in FileOutputCommitter

2020-04-21 Thread GitBox


steveloughran commented on a change in pull request #1916:
URL: https://github.com/apache/hadoop/pull/1916#discussion_r412321478



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
##
@@ -454,6 +471,27 @@ protected void commitJobInternal(JobContext context) 
throws IOException {
*/
   private void mergePaths(FileSystem fs, final FileStatus from,
   final Path to, JobContext context) throws IOException {
+final List<Future<Void>> futures = new LinkedList<>();
+final ExecutorService pool = mergeThreadNum > 1 ?
+  Executors.newFixedThreadPool(Math.min(mergeThreadNum, 128)) : null;
+
+try {
+  doMergePaths(fs, from, to, context, pool, futures);
+  if (null != pool) {
+for (Future<Void> future : futures) {
+  FutureIOSupport.awaitFuture(future);

Review comment:
   CompletableFuture.allOf() gives you an aggregate future you can block on, 
so there's no need to wait in a specific order. Not sure if that makes a 
difference performance-wise. We're still evolving our understanding of how to 
best use futures, so any suggestions are welcome.
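
   A minimal sketch of the allOf() pattern (standard java.util.concurrent 
API, not the PR's code; the pool size and task bodies are illustrative):

       import java.util.Arrays;
       import java.util.List;
       import java.util.concurrent.CompletableFuture;
       import java.util.concurrent.ExecutorService;
       import java.util.concurrent.Executors;

       ExecutorService pool = Executors.newFixedThreadPool(8);
       List<CompletableFuture<Void>> futures = Arrays.asList(
           CompletableFuture.runAsync(() -> { /* delete + rename one path */ }, pool),
           CompletableFuture.runAsync(() -> { /* delete + rename another */ }, pool));
       // One aggregate future: completes when all tasks complete, and join()
       // rethrows the first failure wrapped in a CompletionException.
       CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
       pool.shutdown();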

##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
##
@@ -466,31 +504,50 @@ private void mergePaths(FileSystem fs, final FileStatus 
from,
 }
 
 if (from.isFile()) {
-  if (toStat != null) {
-if (!fs.delete(to, true)) {
-  throw new IOException("Failed to delete " + to);
+  if (null != pool) {
+FileStatus finalToStat = toStat;
+futures.add(pool.submit(new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+if (finalToStat != null) {
+  if (!fs.delete(to, true)) {
+throw new IOException("Failed to delete " + to);
+  }
+}
+
+if (!fs.rename(from.getPath(), to)) {
+  throw new IOException("Failed to rename " + from + " to " + to);
+}
+return null;
+  }
+}));
+  } else {
+if (toStat != null) {
+  if (!fs.delete(to, true)) {
+throw new IOException("Failed to delete " + to);
+  }
 }
-  }
 
-  if (!fs.rename(from.getPath(), to)) {
-throw new IOException("Failed to rename " + from + " to " + to);
+if (!fs.rename(from.getPath(), to)) {
+  throw new IOException("Failed to rename " + from + " to " + to);

Review comment:
   Though rename/2 is broken that way.
   
   Now, if we moved to FileContext, you'd get a proper rename.
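
   For reference, a sketch of the FileContext alternative (standard Hadoop 
API; `conf` and the paths are assumed for illustration): rename with 
OVERWRITE throws a descriptive exception on failure instead of returning 
false.

       import org.apache.hadoop.fs.FileContext;
       import org.apache.hadoop.fs.Options;
       import org.apache.hadoop.fs.Path;

       FileContext fc = FileContext.getFileContext(conf);
       // Overwrites the destination where the FS supports it; throws on error.
       fc.rename(new Path("/out/_temporary/task/part-0000"),
           new Path("/out/part-0000"), Options.Rename.OVERWRITE);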

##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
##
@@ -466,31 +504,50 @@ private void mergePaths(FileSystem fs, final FileStatus 
from,
 }
 
 if (from.isFile()) {
-  if (toStat != null) {
-if (!fs.delete(to, true)) {
-  throw new IOException("Failed to delete " + to);
+  if (null != pool) {
+FileStatus finalToStat = toStat;
+futures.add(pool.submit(new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+if (finalToStat != null) {
+  if (!fs.delete(to, true)) {
+throw new IOException("Failed to delete " + to);
+  }
+}
+
+if (!fs.rename(from.getPath(), to)) {
+  throw new IOException("Failed to rename " + from + " to " + to);
+}
+return null;
+  }
+}));
+  } else {
+if (toStat != null) {
+  if (!fs.delete(to, true)) {
+throw new IOException("Failed to delete " + to);

Review comment:
   delete() only returns false if `to` isn't found, so you don't need this 
safety check.

##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
##
@@ -99,11 +106,18 @@
   public static final boolean
   FILEOUTPUTCOMMITTER_TASK_CLEANUP_ENABLED_DEFAULT = false;
 
+  // The number of threads used to merge paths during commitJob. If it is
+  // bigger than 1, a thread pool is created to merge paths, which gives
+  // better performance.
+  public static final String FILEOUTPUTCOMMITTER_MERGE_THREADS =

Review comment:
   this is going to need some docs in the markdown too, I'm afraid
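
   Until those docs exist, a hedged sketch of enabling the feature from a job 
(referencing the constant from the diff above rather than guessing at the 
property string, which is truncated here; 'job' is assumed to be an 
org.apache.hadoop.mapreduce.Job in scope):

       job.getConfiguration().setInt(
           FileOutputCommitter.FILEOUTPUTCOMMITTER_MERGE_THREADS, 16);
       // Values > 1 enable the thread pool during commitJob; 1 keeps the
       // current single-threaded merge.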

##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
##
@@ -454,6 +471,27 @@ protected void commitJobInternal(JobContext context) 
throws IOException {
*/
   private void 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-21 Thread GitBox


hadoop-yetus removed a comment on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-612602618


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m  4s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  4s |  trunk passed  |
   | -0 :warning: |  patch  |   2m 25s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 16s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 17s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 109m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1952 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux e2a7c82e97d7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8d49229 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/3/testReport/ |
   | Max. process+thread count | 2620 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] steveloughran commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-21 Thread GitBox


steveloughran commented on a change in pull request #1952:
URL: https://github.com/apache/hadoop/pull/1952#discussion_r412336936



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
##
@@ -110,7 +111,9 @@ public void initialize(URI uri, Configuration conf) throws 
IOException { // get
 
 // get port information from uri, (overrides info in conf)
 int port = uri.getPort();
-port = (port == -1) ? FTP.DEFAULT_PORT : port;
+if (port == -1) {
+  port = conf.getInt(FS_FTP_HOST_PORT, FTP.DEFAULT_PORT);

Review comment:
   that's got you out of trouble :)
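
   For readers following along: with the fix, a URI without an explicit port 
now honours fs.ftp.host.port (the FS_FTP_HOST_PORT key in the diff) before 
falling back to the FTP default of 21. A quick sketch; the hostname and port 
are illustrative:

       import java.net.URI;
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;

       Configuration conf = new Configuration();
       conf.setInt("fs.ftp.host.port", 2121);  // FS_FTP_HOST_PORT
       FileSystem ftp = FileSystem.get(URI.create("ftp://user:pass@ftphost/"), conf);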








[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-21 Thread GitBox


hadoop-yetus removed a comment on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-612242452










[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088867#comment-17088867
 ] 

Hadoop QA commented on HADOOP-17001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
23s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
30s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
29s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
43s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 43s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16903/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000721/HADOOP-17001.004.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 665932d45588 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 60fa153 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16903/artifact/out/branch-compile-root.txt
 |
| compile | 

[GitHub] [hadoop] steveloughran commented on issue #1965: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-04-21 Thread GitBox


steveloughran commented on issue #1965:
URL: https://github.com/apache/hadoop/pull/1965#issuecomment-617275425


   ok, conflicting patch is in






[GitHub] [hadoop] steveloughran commented on issue #1946: HADOOP-16961. ABFS: Adding metrics to AbfsInputStream

2020-04-21 Thread GitBox


steveloughran commented on issue #1946:
URL: https://github.com/apache/hadoop/pull/1946#issuecomment-617275059


   Gabor, see if I've broken this build; if so, rebase and resubmit. Thanks.






[GitHub] [hadoop] bilaharith commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617260280


   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   **Account with HNS Support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 66
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   **Account without HNS support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 240
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   






[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088828#comment-17088828
 ] 

Wei-Chiu Chuang commented on HADOOP-17001:
--

LGTM

GitHub PRs are now preferred, but attaching patch files works too.

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch, 
> HADOOP-17001.005.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.
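
A minimal sketch of the proposal (class and member names are hypothetical, 
not the committed patch): gather the codec suffixes in one constants holder 
so each codec references a shared definition instead of its own literal.

    // Illustrative only.
    public final class CodecConstants {
      public static final String PASSTHROUGH_EXTENSION = ".passthrough";
      public static final String GZIP_EXTENSION = ".gz";
      public static final String BZIP2_EXTENSION = ".bz2";

      private CodecConstants() {
        // constants holder; never instantiated
      }
    }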






[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088827#comment-17088827
 ] 

bianqi commented on HADOOP-17001:
-

Updated the patch; fixed whitespace and checkstyle issues.

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch, 
> HADOOP-17001.005.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[GitHub] [hadoop] bilaharith commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617258947


   Review comments addressed.
   
   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   **Account with HNS Support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 66
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   **Account without HNS support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 240
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   
   
   






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001.005.patch
Status: Patch Available  (was: Open)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch, 
> HADOOP-17001.005.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001-001.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Status: Open  (was: Patch Available)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001-002.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[GitHub] [hadoop] bilaharith commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412263543



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -253,6 +259,20 @@ public boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
 return isNamespaceEnabled;
   }
 
+  @VisibleForTesting
+  boolean isNameSpaceEnabledSetFromConfig() {

Review comment:
   Done








[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088761#comment-17088761
 ] 

Hadoop QA commented on HADOOP-17001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
19s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 19m 
58s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
55s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 
50s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 22m 50s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16901/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000701/HADOOP-17001.003.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux a0f164c1b86d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 60fa153 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile | 

[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001.004.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Status: Open  (was: Patch Available)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001.004.patch
Status: Patch Available  (was: Open)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001.004.patch

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001.003.patch, HADOOP-17001.004.patch
>
>
> The suffix name of the unified compression class: I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.






[GitHub] [hadoop] hadoop-yetus commented on issue #1956: HADOOP-16965 Refactor abfs stream configuration.

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1956:
URL: https://github.com/apache/hadoop/pull/1956#issuecomment-617227728


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 12s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1956/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 00a114c67f04 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1956/3/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1956/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088740#comment-17088740
 ] 

Hadoop QA commented on HADOOP-17001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
51s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
12s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16900/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000699/HADOOP-17001-003.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 9e0e80998546 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 60fa153 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile | 

[jira] [Commented] (HADOOP-16977) in javaApi, UGI params should be overidden through FileSystem conf

2020-04-21 Thread Hongbing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088738#comment-17088738
 ] 

Hongbing Wang commented on HADOOP-16977:


We have disabled Kerberos. The username can be passed to YARN by setting the 
env var HADOOP_USER_NAME in the submission service (Tomcat), and it takes 
effect. But if we use multiple HADOOP_USER_NAME values in the submission 
service to submit the corresponding distcp jobs, the username set with 
`System.setProperty(HADOOP_USER_NAME, value)` will override the previous 
settings. So this may be a problem.
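
A sketch of the collision being described (illustrative only; usernames are 
made up): two submissions in one JVM, each setting HADOOP_USER_NAME as a 
system property. UGI resolves and caches the login user once, so the second 
value may never take effect for work bound to the first user.

    import org.apache.hadoop.security.UserGroupInformation;

    // getCurrentUser() throws IOException; error handling elided.
    System.setProperty("HADOOP_USER_NAME", "alice");
    UserGroupInformation first = UserGroupInformation.getCurrentUser();

    System.setProperty("HADOOP_USER_NAME", "bob");  // JVM-wide override
    UserGroupInformation second = UserGroupInformation.getCurrentUser();
    // 'second' may still report "alice": the static login user was
    // initialized on the first call and is not re-read.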

> in javaApi, UGI params should be overidden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, like below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> So if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.






[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617208584


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 48s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  8s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-azure: The 
patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  68m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux e145ae462183 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/5/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat…

2020-04-21 Thread GitBox


steveloughran commented on a change in pull request #1892:
URL: https://github.com/apache/hadoop/pull/1892#discussion_r412220807



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
##
@@ -441,9 +443,16 @@ public Path getLocalPathForWrite(String pathStr, long size,
 int dirNum = ctx.getAndIncrDirNumLastAccessed(randomInc);
 while (numDirsSearched < numDirs) {
   long capacity = ctx.dirDF[dirNum].getAvailable();
+  if (capacity > maxCapacity) {
+maxCapacity = capacity;
+  }
   if (capacity > size) {
-returnPath =
-createPath(ctx.localDirs[dirNum], pathStr, checkWrite);
+try {
+  returnPath = createPath(ctx.localDirs[dirNum], pathStr,
+  checkWrite);
+} catch (Exception e) {
+  errorText = e.getMessage();

Review comment:
   1. Log the exception @ debug
   2. Store the caught exception in a variable alongside errorText (sketch below)
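   A sketch of the suggestion (`diskException` is a hypothetical local declared 
alongside `errorText`):
   ```java
   try {
     returnPath = createPath(ctx.localDirs[dirNum], pathStr, checkWrite);
   } catch (Exception e) {
     LOG.debug("Failed to create a path in {}", ctx.localDirs[dirNum], e);
     errorText = e.getMessage();
     diskException = e; // kept so the final DiskErrorException can carry the cause
   }
   ```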

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
##
@@ -459,8 +468,13 @@ public Path getLocalPathForWrite(String pathStr, long size,
   }
   
   //no path found
-  throw new DiskErrorException("Could not find any valid local " +
-  "directory for " + pathStr);
+  String newErrorText = "Could not find any valid local directory for " +
+  pathStr + " with requested size " + size +
+  " as the max capacity in any directory is " + maxCapacity;
+  if (errorText != null) {
+newErrorText = newErrorText + " due to " + errorText;
+  }
+  throw new DiskErrorException(newErrorText);

Review comment:
   If an exception was caught and stored at L484, pass it to the constructor or 
setCause() here (sketch below). Stack traces are too important to lose.
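   A sketch, continuing the hypothetical `diskException` variable from above:
   ```java
   // Keep the stack trace: wire the stored exception in as the cause.
   DiskErrorException diskError = new DiskErrorException(newErrorText);
   if (diskException != null) {
     diskError.initCause(diskException);
   }
   throw diskError;
   ```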

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java
##
@@ -532,4 +533,19 @@ public void testGetLocalPathForWriteForInvalidPaths() 
throws Exception {
 }
   }
 
+  /**
+   * Test to check LocalDirAllocator diagnostics when the directories have
+   * less space than requested (HADOOP-16769).
+   *
+   * @throws Exception
+   */
+  @Test(timeout = 30000)
+  public void testGetLocalPathForWriteForLessSpace() throws Exception {
+String dir0 = buildBufferDir(ROOT, 0);
+String dir1 = buildBufferDir(ROOT, 1);
+conf.set(CONTEXT, dir0 + "," + dir1);
+LambdaTestUtils.intercept(DiskErrorException.class, "as the max capacity" +

Review comment:
   minor nit: move the "as the max capacity" string onto a line of its own and 
merge it back into a single string, e.g.:
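   A sketch of that layout (the callable body here is hypothetical):
   ```java
   LambdaTestUtils.intercept(DiskErrorException.class,
       "as the max capacity in any directory is",
       () -> dirAllocator.getLocalPathForWrite("p1/x", Long.MAX_VALUE - 1, conf));
   ```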





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 39 unchanged - 0 fixed = 45 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Invocation of toString on stackTrace in 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String,
 long, Configuration, boolean)  At LocalDirAllocator.java:in 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String,
 long, Configuration, boolean)  At LocalDirAllocator.java:[line 475] |
| Failed junit tests | hadoop.fs.TestHarFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991486/HADOOP-16769.6.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ecaa403ca66 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d40d7cc |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 39 unchanged - 0 fixed = 41 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989212/HADOOP-16769.5.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb34dfb7c6af 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef59ffd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16711/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16711/testReport/ |
| Max. process+thread count | 1350 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16711/console |
| Powered by | Apache Yetus 

[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Invocation of toString on stackTrace in 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String,
 long, Configuration, boolean)  At LocalDirAllocator.java:in 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String,
 long, Configuration, boolean)  At LocalDirAllocator.java:[line 477] |
| Failed junit tests | hadoop.fs.TestLocalDirAllocator |
|   | hadoop.fs.TestHarFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991559/HADOOP-16769.7.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c9e908e26b01 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9520b2ad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 39 unchanged - 0 fixed = 41 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.TestLocalDirAllocator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989207/HADOOP-16769.4.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5a8835bd1f80 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 52d7b74 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16710/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16710/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16710/testReport/ |
| Max. 

[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HADOOP-16769 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989164/HADOOP-16769.3.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16707/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> LocalDirAllocator to provide diagnostics when file creation fails
> -
>
> Key: HADOOP-16769
> URL: https://issues.apache.org/jira/browse/HADOOP-16769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Ramesh Kumar Thangarajan
>Priority: Minor
> Attachments: HADOOP-16769.1.patch, HADOOP-16769.3.patch, 
> HADOOP-16769.4.patch, HADOOP-16769.5.patch, HADOOP-16769.6.patch, 
> HADOOP-16769.7.patch, HADOOP-16769.8.patch
>
>
> Log details of the requested size and available capacity when file creation is 
> not successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989036/HADOOP-16769.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 295232e3477a 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f47dcf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16704/testReport/ |
| Max. process+thread count | 1345 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16704/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> LocalDirAllocator to provide 

[jira] [Issue Comment Deleted] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16769:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-16769 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989203/HADOOP-16769.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16709/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> LocalDirAllocator to provide diagnostics when file creation fails
> -
>
> Key: HADOOP-16769
> URL: https://issues.apache.org/jira/browse/HADOOP-16769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Ramesh Kumar Thangarajan
>Priority: Minor
> Attachments: HADOOP-16769.1.patch, HADOOP-16769.3.patch, 
> HADOOP-16769.4.patch, HADOOP-16769.5.patch, HADOOP-16769.6.patch, 
> HADOOP-16769.7.patch, HADOOP-16769.8.patch
>
>
> Log details of the requested size and available capacity when file creation is 
> not successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-617190479


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 26s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 45s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-azure: The 
patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 23s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m  9s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  69m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestClientThrottlingAnalyzer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 33a33791814e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/4/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/4/testReport/ |
   | Max. process+thread count | 311 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17003) No Log compression and retention in Hadoop

2020-04-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17003:
---
Description: 
Hadoop logging lacks several important features, and the logs it generates end 
up eating disk space.
We need an implementation that satisfies the following three features: 1) 
time-based rolling, 2) retention, and 3) compression.

For example, KMS logs have no retention or compression:
{code:bash}
-rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
-rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
-rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
-rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
-rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
-rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
-rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
-rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
{code}


  was:

{code:bash}
-rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
-rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
-rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
-rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
-rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
-rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
-rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
-rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
{code}

KMS logs have no retention or compression.
They are eating up space and generating disk-space alerts.


> No Log compression and retention in Hadoop
> --
>
> Key: HADOOP-17003
> URL: https://issues.apache.org/jira/browse/HADOOP-17003
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Hadoop logging lacks several important features, and the logs it generates end 
> up eating disk space.
> We need an implementation that satisfies the following three features: 1) 
> time-based rolling, 2) retention, and 3) compression.
> For example, KMS logs have no retention or compression:
> {code:bash}
> -rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
> -rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
> -rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
> -rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
> -rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
> -rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
> -rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
> -rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
> {code}
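
For illustration, rolling plus compression can be had from log4j 1.2 (which the
KMS uses) with the apache-log4j-extras rolling appender, configured via XML; a
sketch, assuming the extras jar is on the classpath. Note it still does not
cover retention, which would need an external cleanup job or a move to
log4j2/logback:
{code:xml}
<!-- Daily rolling; the .gz suffix in the pattern gzips each rolled file. -->
<appender name="kms" class="org.apache.log4j.rolling.RollingFileAppender">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <param name="FileNamePattern" value="${kms.log.dir}/kms.log.%d{yyyy-MM-dd}.gz"/>
  </rollingPolicy>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ISO8601} %-5p %c{1} - %m%n"/>
  </layout>
</appender>
{code}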



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1956: HADOOP-16965 Refactor abfs stream configuration.

2020-04-21 Thread GitBox


mukund-thakur commented on a change in pull request #1956:
URL: https://github.com/apache/hadoop/pull/1956#discussion_r412185071



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
##
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * Class to hold extra output stream configs.
+ */
+public class AbfsOutputStreamContext extends AbfsStreamContext {
+
+  private int writeBufferSize;
+
+  private boolean enableFlush;
+
+  private boolean disableOutputStreamFlush;
+
+  public AbfsOutputStreamContext() {
+  }
+
+  public AbfsOutputStreamContext withWriteBufferSize(
+  final int writeBufferSize) {
+this.writeBufferSize = writeBufferSize;
+return this;
+  }
+
+  public AbfsOutputStreamContext enableFlush(final boolean enableFlush) {
+this.enableFlush = enableFlush;
+return this;
+  }
+
+  public AbfsOutputStreamContext disableOutputStreamFlush(
+  final boolean disableOutputStreamFlush) {
+this.disableOutputStreamFlush = disableOutputStreamFlush;
+return this;
+  }
+
+  public AbfsOutputStreamContext build() {
+// Validation of parameters to be done here.
+return this;

Review comment:
   We don't have any validation yet, but new checks can be added here in 
future if required.
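   A sketch of the kind of check that could live there (the parameter bound is 
hypothetical):
   ```java
   public AbfsOutputStreamContext build() {
     Preconditions.checkArgument(writeBufferSize > 0,
         "writeBufferSize must be positive");
     return this;
   }
   ```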





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17003) No Log compression and retention in Hadoop

2020-04-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17003:
---
Summary: No Log compression and retention in Hadoop  (was: No Log 
compression and retention at KMS)

> No Log compression and retention in Hadoop
> --
>
> Key: HADOOP-17003
> URL: https://issues.apache.org/jira/browse/HADOOP-17003
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> {code:bash}
> -rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
> -rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
> -rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
> -rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
> -rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
> -rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
> -rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
> -rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
> {code}
> KMS logs have no retention or compression.
> They are eating up space and generating disk-space alerts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-21 Thread ramkrishna.s.vasudevan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088658#comment-17088658
 ] 

ramkrishna.s.vasudevan commented on HADOOP-16998:
-

+1 on the patch's approach (non-binding). We need a test anyway. 

> WASB : NativeAzureFsOutputStream#close() throwing 
> java.lang.IllegalArgumentException instead of IOE which causes HBase RS to 
> get aborted
> 
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
> Attachments: HADOOP-16998.patch
>
>
> During HFile creation, when close() is called on the OutputStream at the end, 
> there is some pending data to be flushed. When this flush happens, an 
> exception is thrown back from Storage. The Azure-storage SDK layer will throw 
> back an IOE. (Even if a StorageException is thrown from Storage, the SDK 
> converts it to an IOE.) But at HBase, we end up getting IllegalArgumentException, 
> which causes the RS to get aborted. If we got back an IOE, the flush would be 
> retried instead of aborting the RS.
> The reason is this:
> NativeAzureFsOutputStream uses Azure-storage SDK's BlobOutputStreamInternal. 
> But the BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream 
> which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream 
> calls close on SyncableDataOutputStream and it uses below method from 
> FilterOutputStream
> {code}
> public void close() throws IOException {
>   try (OutputStream ostream = out) {
>   flush();
>   }
> }
> {code}
> Here the flush call caused an IOE to be thrown to here. The finally will 
> issue close call on ostream (Which is an instance of BlobOutputStreamInternal)
> When BlobOutputStreamInternal#close() is called, if any exception has 
> already occurred on that stream, it will throw back the same 
> exception:
> {code}
> public synchronized void close() throws IOException {
>   try {
>   // if the user has already closed the stream, this will throw a 
> STREAM_CLOSED exception
>   // if an exception was thrown by any thread in the 
> threadExecutor, realize it now
>   this.checkStreamState();
>   ...
> }
> private void checkStreamState() throws IOException {
>   if (this.lastError != null) {
>   throw this.lastError;
>   }
> }
> {code}
> So here both the try and the finally block throw exceptions, and Java uses 
> Throwable#addSuppressed().
> Within this method, if both exceptions are the same object, it throws back 
> IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>   if (exception == this)
>  throw new 
> IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>   
> }
> {code}
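
The self-suppression behaviour is easy to reproduce outside Hadoop; a minimal,
self-contained sketch (FailingStream is hypothetical, not Azure SDK code):
{code:java}
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
  // Remembers its first error and rethrows the *same* object from close(),
  // like BlobOutputStreamInternal#checkStreamState().
  static class FailingStream extends OutputStream {
    private final IOException lastError = new IOException("flush failed");
    @Override public void write(int b) { }
    @Override public void flush() throws IOException { throw lastError; }
    @Override public void close() throws IOException { throw lastError; }
  }

  public static void main(String[] args) throws IOException {
    // flush() throws, then close() throws the same object; the generated
    // addSuppressed(self) call raises IllegalArgumentException, not an IOE.
    try (OutputStream out = new FailingStream()) {
      out.flush();
    }
  }
}
{code}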



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001.003.patch
Status: Patch Available  (was: Open)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001.003.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names in 
> the compression classes should be extracted into a constants class, which would 
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.
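
A hypothetical sketch of the kind of constants class being proposed
(illustrative names only):
{code:java}
/** One place for every codec's default file extension. */
public final class CodecConstants {
  public static final String DEFAULT_PASSTHROUGH_EXTENSION = ".passthrough";
  public static final String GZIP_EXTENSION = ".gz";
  public static final String BZIP2_EXTENSION = ".bz2";

  private CodecConstants() {
  }
}
{code}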



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001.003.patch

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names in 
> the compression classes should be extracted into a constants class, which would 
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001.003.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names in 
> the compression classes should be extracted into a constants class, which would 
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Status: Open  (was: Patch Available)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names in 
> the compression classes should be extracted into a constants class, which would 
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001-003.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names in 
> the compression classes should be extracted into a constants class, which would 
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412131351



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -253,6 +259,20 @@ public boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
 return isNamespaceEnabled;
   }
 
+  @VisibleForTesting
+  boolean isNameSpaceEnabledSetFromConfig() {

Review comment:
   No.
   Though the possible values for the config are true/false, we cannot have a
default value for it. So we accept the config value as a String and, if it is
valid (true/false), we set it on AzureBlobFileSystemStore.isNamespaceEnabled.
   
   If the config is not present or the value specified is invalid, this method
returns false, indicating it could not set the
AzureBlobFileSystemStore.isNamespaceEnabled field.
   
   Based on the return value of this method, the caller either returns the
AzureBlobFileSystemStore.isNamespaceEnabled field or falls back to the default
behaviour (the getAcl probe). A minimal sketch of this tri-state handling follows.
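
   A sketch of that tri-state handling (illustrative only; the class, field and
config-key names below are assumptions, not the exact code from PR #1969):

    import org.apache.hadoop.conf.Configuration;

    // Illustrative tri-state handling of the HNS config; names (class, field,
    // config key) are assumptions, not the exact code from PR #1969.
    class HnsConfigProbe {
      private boolean isNamespaceEnabled;

      /** @return true iff the config was present and valid, and the field was set. */
      boolean trySetFromConfig(Configuration conf) {
        String raw = conf.get("fs.azure.account.hns.enabled"); // assumed key name
        if ("true".equalsIgnoreCase(raw) || "false".equalsIgnoreCase(raw)) {
          isNamespaceEnabled = Boolean.parseBoolean(raw);
          return true;
        }
        return false; // absent or invalid: caller falls back to the getAcl probe
      }
    }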





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001-003.patch
Status: Patch Available  (was: In Progress)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001-003.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix name in
> each compression class should be extracted into a constants class, which would
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001-003.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001-003.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix name in
> each compression class should be extracted into a constants class, which would
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17001 started by bianqi.
---
> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001-003.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix name in
> each compression class should be extracted into a constants class, which would
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-21 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001-003.patch

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch, 
> HADOOP-17001-003.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix name in
> each compression class should be extracted into a constants class, which would
> help developers understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   2.9.3
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Cherry-picked to branch-3.3, branch-3.2, branch-3.1, branch-2.10, and 
branch-2.9.
This change does not have any effect on the create-release script for 3.3.0, so 
it's safe.

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: (was: 3.4.0)

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


bilaharith commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412127358



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -27,6 +27,15 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public final class ConfigurationKeys {
+
+  /**
+   * Each time a FS instance is created a Getacl call is made. If the call
+   * fails with 400 Bad request, the account is determined to be a non-HNS
+   * account.
+   * If this config is present, use that to determine account HNS status. If
+   * config is not present, default behaviour will be calling getAcl.

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a change in pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-21 Thread GitBox


snvijaya commented on a change in pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969#discussion_r412108329



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -27,6 +27,15 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public final class ConfigurationKeys {
+
+  /**
+   * Each time a FS instance is created a Getacl call is made. If the call
+   * fails with 400 Bad request, the account is determined to be a non-HNS
+   * account.
+   * If this config is present, use that to determine account HNS status. If
+   * config is not present, default behaviour will be calling getAcl.

Review comment:
   Minor. Change the comment to:
   /**
    * Config to specify if the configured account is HNS enabled or not. If
    * this config is not set, a getacl call is made on the account filesystem
    * root path to determine HNS status.
    */

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -253,6 +259,20 @@ public boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
 return isNamespaceEnabled;
   }
 
+  @VisibleForTesting
+  boolean isNameSpaceEnabledSetFromConfig() {

Review comment:
   This method should be in AbfsConfiguration.java ?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1970: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1970:
URL: https://github.com/apache/hadoop/pull/1970#issuecomment-617126273


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 28s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1970/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1970 |
   | Optional Tests | dupname asflicense mvnsite markdownlint |
   | uname | Linux 09595dae32df 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1970/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a change in pull request #1970: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-04-21 Thread GitBox


snvijaya commented on a change in pull request #1970:
URL: https://github.com/apache/hadoop/pull/1970#discussion_r412092762



##
File path: hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
##
@@ -665,6 +665,79 @@ performance issues.
 Config `fs.azure.enable.check.access` needs to be set true to enable
  the AzureBlobFileSystem.access().
 
+###  Auth Options
+`fs.azure.account.key`: To set the account access key. Access keys can be used 
to authenticate the requests made from the ABFS driver to the Azure storage 
account.

Review comment:
   account key config is already explained with an example in the above 
shared key section. Please cross check with the details above and if needed 
more comments can be added in respective place.

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
##
@@ -665,6 +665,79 @@ performance issues.
 Config `fs.azure.enable.check.access` needs to be set true to enable
  the AzureBlobFileSystem.access().
 
+###  Auth Options
+`fs.azure.account.key`: To set the account access key. Access keys can be used 
to authenticate the requests made from the ABFS driver to the Azure storage 
account.

Review comment:
   Comment holds for other configs too, please check.

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
##
@@ -665,6 +665,79 @@ performance issues.
 Config `fs.azure.enable.check.access` needs to be set true to enable
  the AzureBlobFileSystem.access().
 
+###  Auth Options
+`fs.azure.account.key`: To set the account access key. Access keys can be used 
to authenticate the requests made from the ABFS driver to the Azure storage 
account.
+
+`fs.azure.account.keyprovider`: If a key provider class is provided the same 
will be used to get Storage Account key. Else the Simple key provider will be 
used which will use the given key from the config.
+
+`fs.azure.shellkeyprovider.script`: ShellDecryptionKeyProvider class invokes 
an external script that will perform the key decryption. The script path has to 
be set via this config.
+
+`fs.azure.enable.delegation.token`: To enable delegation token manager. 
Instantiates the class declared in fs.azure.delegation.token.provider.type and 
issues tokens from the same.
+
+`fs.azure.delegation.token.provider.type`: In case delegation token manager is 
enabled the AbfsDelegationTokenManager implementation specified in this config 
will be used as the AbfsDelegationTokenManager implementation.
+
+`fs.azure.sas.token.provider.type`: If the auth type is AuthType.SAS, 
instantiates the class declared in fs.azure.sas.token.provider.type and issues 
tokens from it.
+
+`fs.azure.account.auth.type`: To set the auth type to be used. Possible 
values:   SharedKey, OAuth, Custom, SAS.
+
+`fs.azure.account.oauth.provider.type`: To set the auth provider class to be 
used.
+
+`fs.azure.account.oauth2.client.id`: To set the OAuth AAD client id when 
ClientCredsTokenProvider is used.
+
+`fs.azure.account.oauth2.client.secret`: To set the OAuth AAD client secret 
when ClientCredsTokenProvider is used.
+
+`fs.azure.account.oauth2.client.endpoint`: To set the OAuth AAD client 
endpoint when ClientCredsTokenProvider is used.
+
+`fs.azure.account.oauth2.msi.tenant`: To set OAuth MSI tenant id when 
MSITokenProvider is used.
+
+`fs.azure.account.oauth2.msi.endpoint`: To set OAuth MSI endpoint when 
MSITokenProvider is used.
+
+`fs.azure.account.oauth2.msi.authority`: To set OAuth MSI authority when 
MSITokenProvider is used.
+
+`fs.azure.account.oauth2.user.name`: To set username when 
UserPasswordTokenProvider is used.
+
+`fs.azure.account.oauth2.user.password`: To set password when 
UserPasswordTokenProvider is used.
+
+`fs.azure.account.oauth2.refresh.token`: To set OAuth refreshtoken when 
RefreshTokenBasedTokenProvider is used.
+
+`fs.azure.account.oauth2.refresh.token.endpoint`: To set OAuth refresh token 
end point when RefreshTokenBasedTokenProvider is used.
+
+###  Feature Options

Review comment:
   Maybe provide more context to the feature and then list the relevant 
configs.
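
   As an example of how the auth options above fit together, OAuth with client
credentials could be selected programmatically like this (a sketch only; the
tenant id, client id and secret are placeholders):

    import org.apache.hadoop.conf.Configuration;

    public class AbfsOAuthConfigExample {
      public static void main(String[] args) {
        // Placeholders: replace <tenant-id>, <client-id>, <client-secret>.
        Configuration conf = new Configuration();
        conf.set("fs.azure.account.auth.type", "OAuth");
        conf.set("fs.azure.account.oauth.provider.type",
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider");
        conf.set("fs.azure.account.oauth2.client.endpoint",
            "https://login.microsoftonline.com/<tenant-id>/oauth2/token");
        conf.set("fs.azure.account.oauth2.client.id", "<client-id>");
        conf.set("fs.azure.account.oauth2.client.secret", "<client-secret>");
      }
    }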





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin edited a comment on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-21 Thread GitBox


mpryahin edited a comment on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616799192


   I've looked through the console output, which reports that both the trunk
and patch builds fail due to the following module compilation failure:
   
   `[INFO] Apache Hadoop Tencent COS Support .. FAILURE [  
0.096 s]`
   
   The failure seems to be caused by [this 
commit](https://github.com/apache/hadoop/commit/82ff7bc9abc8f3ad549db898953d98ef142ab02d)
 
   This PR introduces no changes in the failing module. I've just incorporated 
all the latest changes from trunk into my PR branch before committing PR code 
improvements. 
   
   @ChenSammi  could you please check if trunk compilation succeeds?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on issue #1938: HADOOP-16922. ABFS: Changing User-Agent header

2020-04-21 Thread GitBox


bilaharith commented on issue #1938:
URL: https://github.com/apache/hadoop/pull/1938#issuecomment-617102677


   At present the User-Agent header looks like:
   Azure Blob FS/3.4.0-SNAPSHOT (JavaJRE 1.8.0_241; Linux 5.3.0-46-generic; 
SunJSSE-1.8) MSFT 
   
   With this PR it would be:
   APN/1.0 Azure Blob FS/3.4.0-SNAPSHOT (OracleCorporation JavaJRE 1.8.0_241; 
Linux 5.3.0-46-generic/amd64; SunJSSE-1.8; UNKNOWN/UNKNOWN) MSFT 
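   
   A rough sketch of how such a User-Agent string could be assembled from
system properties (the SSL-provider and cluster tokens are shown as
placeholders; this is not the PR's exact code):

    // Sketch of assembling the expanded User-Agent string from system properties.
    // The SSL-provider token and cluster tokens are placeholders;
    // this is not the PR's exact code.
    class UserAgentSketch {
      static String buildUserAgent(String driverVersion) {
        return String.format(
            "APN/1.0 Azure Blob FS/%s (%s JavaJRE %s; %s %s/%s; %s; %s/%s) MSFT",
            driverVersion,
            System.getProperty("java.vendor").replaceAll(" ", ""),
            System.getProperty("java.version"),
            System.getProperty("os.name").replaceAll(" ", ""),
            System.getProperty("os.version"),
            System.getProperty("os.arch"),
            "SunJSSE-1.8",          // SSL provider (placeholder)
            "UNKNOWN", "UNKNOWN");  // cluster name/type (placeholders)
      }
    }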
   

   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16963) HADOOP-16582 changed mkdirs() behavior

2020-04-21 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088480#comment-17088480
 ] 

Steve Loughran commented on HADOOP-16963:
-

What HADOOP-16582 does is guarantee that filtering/viewing filesystems pass
mkdirs(path) all the way through; the standard FileSystem.mkdirs(path) does the
expansion.

How/why does Hive break here?

Is the problem that Hive has a FilterFS which overrides FilterFS.mkdirs(path,
perm) but not FilterFS.mkdirs(path)? If so, I think:
- Hive needs to add the mkdirs(path) override; this can be done and still
compile against older Hadoop versions (see the sketch below).
- We tag the previous patch as an incompatible change for filesystems extending
FilterFS.


> HADOOP-16582 changed mkdirs() behavior
> --
>
> Key: HADOOP-16963
> URL: https://issues.apache.org/jira/browse/HADOOP-16963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> HADOOP-16582 changed behavior of {{mkdirs()}}
> Some Hive tests depend on the old behavior and they fail miserably.
> {quote}
> Earlier:
> all plain mkdirs(somePath) calls were fast-tracked to FileSystem.mkdirs, which
> rerouted them to the mkdirs(somePath, somePerm) method with some (static)
> defaults.
> An implementation of FileSystem only needed to implement "mkdirs(somePath,
> somePerm)", because the other was not necessarily called if it was always
> wrapped in a FilterFileSystem or something like that.
> Now:
> notably, FilterFileSystem forwards the call of mkdirs(p) to the actual fs
> implementation... which may skip overridden mkdirs(somePath, somePerm) methods
> ... and could cause issues for existing FileSystem implementations.
> {quote}
> Filing this jira to address this problem.
> [~kgyrtkirk] [~ste...@apache.org] [~kihwal]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16794.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.3.0
>
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.
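>
> In essence, the fix is to propagate the custom CMK onto the copy request.
> With the AWS SDK v1 that Hadoop bundles, that looks roughly like the
> following sketch (not the committed S3A code; names are illustrative):
>
>     import com.amazonaws.services.s3.AmazonS3;
>     import com.amazonaws.services.s3.model.CopyObjectRequest;
>     import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
>
>     // Sketch: retain the custom CMK on copy instead of letting the object
>     // fall back to the bucket's default KMS key.
>     class KmsPreservingCopy {
>       static void copyRetainingCmk(AmazonS3 s3, String bucket, String srcKey,
>           String dstKey, String kmsKeyId) {
>         CopyObjectRequest copy =
>             new CopyObjectRequest(bucket, srcKey, bucket, dstKey)
>                 .withSSEAwsKeyManagementParams(
>                     new SSEAwsKeyManagementParams(kmsKeyId));
>         s3.copyObject(copy);
>       }
>     }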



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16794:

Parent Issue: HADOOP-15620  (was: HADOOP-16829)

> S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
> -
>
> Key: HADOOP-16794
> URL: https://issues.apache.org/jira/browse/HADOOP-16794
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all 
> files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the 
> wrong encryption key, always falling back to the region-specific AWS-managed 
> KMS key for S3, instead of retaining the custom CMK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)

2020-04-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13887:

Summary: Encrypt S3A data client-side with AWS SDK (S3-CSE)  (was: Encrypt 
S3A data client-side with AWS SDK)

> Encrypt S3A data client-side with AWS SDK (S3-CSE)
> --
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java
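>
> For reference, constructing such a client-side-encrypting S3 client with the
> AWS SDK v1 might look like this sketch (the KMS key id is a placeholder;
> this is not the S3A integration itself):
>
>     import com.amazonaws.services.s3.AmazonS3;
>     import com.amazonaws.services.s3.AmazonS3EncryptionClientBuilder;
>     import com.amazonaws.services.s3.model.KMSEncryptionMaterialsProvider;
>
>     // Sketch: an S3 client that encrypts data client-side with a
>     // KMS-managed key before upload.
>     class CseClientSketch {
>       static AmazonS3 buildCseClient(String kmsKeyId) {
>         return AmazonS3EncryptionClientBuilder.standard()
>             .withEncryptionMaterials(
>                 new KMSEncryptionMaterialsProvider(kmsKeyId))
>             .build();
>       }
>     }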



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11264) Common side changes for HDFS Erasure coding support

2020-04-21 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HADOOP-11264.
--
Hadoop Flags: Reviewed
  Resolution: Fixed

All sub-tasks are closed.

> Common side changes for HDFS Erasure coding support
> ---
>
> Key: HADOOP-11264
> URL: https://issues.apache.org/jira/browse/HADOOP-11264
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io
>Affects Versions: HDFS-7285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> This is umbrella JIRA for tracking the common side changes for HDFS Erasure 
> Coding support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1971: MAPREDUCE-7276 fix hadoop job fast fail not working issue

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1971:
URL: https://github.com/apache/hadoop/pull/1971#issuecomment-617023677


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m  5s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 55s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   8m 15s |  hadoop-mapreduce-client-app in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1971/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1971 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cdf0e59de363 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1971/1/testReport/ |
   | Max. process+thread count | 719 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1971/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-21 Thread GitBox


hadoop-yetus commented on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-617021845


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 35s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  92m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1899 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2743c444e985 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60fa153 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/12/testReport/ |
   | Max. process+thread count | 325 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/12/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tiana528 opened a new pull request #1971: MAPREDUCE-7276 fix hadoop job fast fail not working issue

2020-04-21 Thread GitBox


tiana528 opened a new pull request #1971:
URL: https://github.com/apache/hadoop/pull/1971


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org