[jira] [Work logged] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?focusedWorklogId=479469&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479469
 ]

ASF GitHub Bot logged work on HADOOP-17165:
---

Author: ASF GitHub Bot
Created on: 07/Sep/20 04:25
Start Date: 07/Sep/20 04:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#issuecomment-688024624


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  1s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 32s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  5s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 14s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 12s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 37s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 37s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 19s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 39s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 169m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2240/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux e3071abad3ef 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1841a5bb03f |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2240/2/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2240/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


   This message was automatically generated.

[jira] [Work logged] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?focusedWorklogId=479455&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479455
 ]

ASF GitHub Bot logged work on HADOOP-17165:
---

Author: ASF GitHub Bot
Created on: 07/Sep/20 01:42
Start Date: 07/Sep/20 01:42
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on a change in pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#discussion_r484145154



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
##
@@ -483,6 +501,12 @@ private void recomputeScheduleCache() {
 
 for (Map.Entry<Object, List<AtomicLong>> entry : callCosts.entrySet()) {
   Object id = entry.getKey();
+  // The priority for service users is always 0
+  if (isServiceUser((String)id)) {

Review comment:
   Thanks for your review, @sunchao! I agree. I've updated the PR to address it.
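
   As a hedged sketch of the semantics under review (only `isServiceUser` comes
   from the patch; the class name and everything else here are illustrative
   assumptions, not the committed Hadoop code), service users bypass the
   cost-based priority computation and are always pinned to priority 0:

   ```java
   import java.util.Arrays;
   import java.util.HashSet;
   import java.util.Set;

   /** Illustrative stand-in, not the real DecayRpcScheduler. */
   class ServiceUserPriority {
     private final Set<String> serviceUserNames;

     ServiceUserPriority(String... users) {
       this.serviceUserNames = new HashSet<>(Arrays.asList(users));
     }

     boolean isServiceUser(String userName) {
       return serviceUserNames.contains(userName);
     }

     /** Priority 0 is the highest-priority queue; everyone else keeps
      *  their decayed-cost-based level. */
     int priorityFor(String user, int costBasedLevel) {
       return isServiceUser(user) ? 0 : costBasedLevel;
     }
   }
   ```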





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479455)
Time Spent: 1h 10m  (was: 1h)

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who submit important requests. This jira 
> proposes to implement a service-user feature so that such a user is always 
> scheduled into the highest-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.
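
As a rough illustration of how such a service-user list might be wired up (the 
configuration key below is a hypothetical placeholder, not necessarily the key 
the HADOOP-17165 patch introduces):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;

// Sketch only: load a comma-separated service-user list from configuration.
// The key name is an assumption made for this example.
final class ServiceUsers {
  private ServiceUsers() {}

  static Set<String> load(Configuration conf, String namespace) {
    String key = namespace + ".decay-scheduler.service-users"; // assumed key
    return new HashSet<>(Arrays.asList(conf.getTrimmedStrings(key)));
  }
}
{code}

Requests from any user in this set would then skip the decayed-cost priority 
computation, as discussed in the review thread above.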



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on a change in pull request #2240: HADOOP-17165. Implement service-user feature in DecayRPCScheduler.

2020-09-06 Thread GitBox


tasanuma commented on a change in pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#discussion_r484145154



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
##
@@ -483,6 +501,12 @@ private void recomputeScheduleCache() {
 
 for (Map.Entry> entry : callCosts.entrySet()) {
   Object id = entry.getKey();
+  // The priority for service users is always 0
+  if (isServiceUser((String)id)) {

Review comment:
   Thanks for your review, @sunchao! I agreed. Updated the PR addressing it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17249?focusedWorklogId=479452&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479452
 ]

ASF GitHub Bot logged work on HADOOP-17249:
---

Author: ASF GitHub Bot
Created on: 07/Sep/20 01:11
Start Date: 07/Sep/20 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2279:
URL: https://github.com/apache/hadoop/pull/2279#issuecomment-687958351


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  16m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  15m  0s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  14m 12s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  13m  0s |  the patch passed  |
   | +1 :green_heart: |  javac  |  13m  0s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 23s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 32s |  
hadoop-yarn-server-applicationhistoryservice in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  75m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2279/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 3ee0e13517c4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / e5bd8d2 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2279/1/testReport/ |
   | Max. process+thread count | 126 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2279/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479452)
Time Spent: 20m  (was: 10m)

> Upgrade jackson-databind to 2.10 on branch-2.10
> ---
>
> Key: HADOOP-17249
> URL: https://issues.apache.org/jira/browse/HADOOP-17249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17249:
--
Status: Patch Available  (was: Open)

> Upgrade jackson-databind to 2.10 on branch-2.10
> ---
>
> Key: HADOOP-17249
> URL: https://issues.apache.org/jira/browse/HADOOP-17249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17249:

Labels: pull-request-available  (was: )

> Upgrade jackson-databind to 2.10 on branch-2.10
> ---
>
> Key: HADOOP-17249
> URL: https://issues.apache.org/jira/browse/HADOOP-17249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Work logged] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17249?focusedWorklogId=479447&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479447
 ]

ASF GitHub Bot logged work on HADOOP-17249:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 23:47
Start Date: 06/Sep/20 23:47
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request #2279:
URL: https://github.com/apache/hadoop/pull/2279


   This backports 
[HADOOP-16905](https://issues.apache.org/jira/browse/HADOOP-16905) to 
branch-2.10. I used the same versions of jackson-databind and maven-shade-plugin 
as the current trunk.
   
   maven-shade-plugin must be updated to 3.2.0 or above to fix an error when 
creating the jar of hadoop-yarn-server-applicationhistoryservice.
   
   ```
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-yarn-server-applicationhistoryservice: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1]
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479447)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade jackson-databind to 2.10 on branch-2.10
> ---
>
> Key: HADOOP-17249
> URL: https://issues.apache.org/jira/browse/HADOOP-17249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-06 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17249:
-

 Summary: Upgrade jackson-databind to 2.10 on branch-2.10
 Key: HADOOP-17249
 URL: https://issues.apache.org/jira/browse/HADOOP-17249
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.10.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15171) native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly handles some zlib errors

2020-09-06 Thread Michael South (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191374#comment-17191374
 ] 

Michael South commented on HADOOP-15171:


Apology: A raw "closed, unfounded" is too brusque. [~sershe] did a really 
excellent job analyzing the bug, creating a minimal testcase, and identifying 
where the issue is. I can't imagine how many hours he spent plowing through the 
Orc and Zlib codebases and rerunning tests.

> native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly 
> handles some zlib errors
> -
>
> Key: HADOOP-15171
> URL: https://issues.apache.org/jira/browse/HADOOP-15171
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sergey Shelukhin
>Assignee: Lokesh Jain
>Priority: Blocker
>
> While reading some ORC file via direct buffers, Hive gets a 0-sized buffer 
> for a particular compressed segment of the file. We narrowed it down to 
> Hadoop native ZLIB codec; when the data is copied to heap-based buffer and 
> the JDK Inflater is used, it produces correct output. Input is only 127 bytes 
> so I can paste it here.
> All the other (many) blocks of the file are decompressed without problems by 
> the same code.
> {noformat}
> 2018-01-13T02:47:40,815 TRACE [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Decompressing 
> 127 bytes to dest buffer pos 524288, limit 786432
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: The codec has 
> produced 0 bytes for 127 bytes at pos 0, data hash 1719565039: [e3 92 e1 62 
> 66 60 60 10 12 e5 98 e0 27 c4 c7 f1 e8 12 8f 40 c3 7b 5e 89 09 7f 6e 74 73 04 
> 30 70 c9 72 b1 30 14 4d 60 82 49 37 bd e7 15 58 d0 cd 2f 31 a1 a1 e3 35 4c fa 
> 15 a3 02 4c 7a 51 37 bf c0 81 e5 02 12 13 5a b6 9f e2 04 ea 96 e3 62 65 b8 c3 
> b4 01 ae fd d0 72 01 81 07 87 05 25 26 74 3c 5b c9 05 35 fd 0a b3 03 50 7b 83 
> 11 c8 f2 c3 82 02 0f 96 0b 49 34 7c fa ff 9f 2d 80 01 00
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Fell back to 
> JDK decompressor with memcopy; got 155 bytes
> {noformat}
> Hadoop version is based on 3.1 snapshot.
> The size of libhadoop.so is 824403 bytes, and libgplcompression is 78273 
> FWIW. Not sure how to extract versions from those. 
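
To make the fallback in the WARN log concrete, here is a hedged sketch of the 
"copy to heap and use the JDK Inflater" path (illustrative names only; this is 
not the EncodedReaderImpl code):

{code:java}
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

final class HeapInflateFallback {
  private HeapInflateFallback() {}

  // Drain the direct source buffer into a heap array (the "memcopy"), then
  // decompress with java.util.zip.Inflater into the destination buffer.
  static int inflateViaHeap(ByteBuffer src, ByteBuffer dst)
      throws DataFormatException {
    byte[] in = new byte[src.remaining()];
    src.duplicate().get(in);
    byte[] out = new byte[dst.remaining()];
    Inflater inflater = new Inflater();
    try {
      inflater.setInput(in);
      int total = 0;
      while (!inflater.finished() && total < out.length) {
        int n = inflater.inflate(out, total, out.length - total);
        if (n == 0) {
          break; // no progress: needs more input, a dictionary, or more space
        }
        total += n;
      }
      dst.put(out, 0, total);
      return total;
    } finally {
      inflater.end(); // release the native zlib state
    }
  }
}
{code}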



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish consolidated results

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=479421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479421
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 18:25
Start Date: 06/Sep/20 18:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-687862577


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  the patch passed  |
   | -1 :x: |  shellcheck  |   0m  0s |  The patch generated 23 new + 0 
unchanged - 0 fixed = 23 total (was 0)  |
   | +1 :green_heart: |  shelldocs  |   0m 15s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 36s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  80m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint |
   | uname | Linux c962aaf34582 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1841a5bb03f |
   | shellcheck | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/2/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/2/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/2/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479421)
Time Spent: 40m  (was: 0.5h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared 


[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=479417&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479417
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 16:58
Start Date: 06/Sep/20 16:58
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r484092172



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestRegexMountPointResolvedDstPathReplaceInterceptor.java
##
@@ -0,0 +1,104 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.Path;
+import org.junit.Assert;
+import org.junit.Test;
+
+import static 
org.apache.hadoop.fs.viewfs.RegexMountPointInterceptorType.REPLACE_RESOLVED_DST_PATH;
+
+/**
+ * Test RegexMountPointResolvedDstPathReplaceInterceptor.
+ */
+public class TestRegexMountPointResolvedDstPathReplaceInterceptor {
+
+  public String createSerializedString(String regex, String replaceString) {
+return REPLACE_RESOLVED_DST_PATH.getConfigName()
++ RegexMountPoint.INTERCEPTOR_INTERNAL_SEP + regex
++ RegexMountPoint.INTERCEPTOR_INTERNAL_SEP + replaceString;
+  }
+
+  @Test
+  public void testDeserializeFromStringNormalCase() throws IOException {
+String srcRegex = "-";
+String replaceString = "_";
+String serializedString = createSerializedString(srcRegex, replaceString);
+RegexMountPointResolvedDstPathReplaceInterceptor interceptor =
+RegexMountPointResolvedDstPathReplaceInterceptor
+.deserializeFromString(serializedString);
+Assert.assertTrue(interceptor.getSrcRegexString().equals(srcRegex));
+Assert.assertTrue(interceptor.getReplaceString().equals(replaceString));
+Assert.assertTrue(interceptor.getSrcRegexPattern() == null);
+interceptor.initialize();
+Assert.assertTrue(
+interceptor.getSrcRegexPattern().toString().equals(srcRegex));
+  }
+
+  @Test
+  public void testDeserializeFromStringBadCase() throws IOException {
+String srcRegex = "-";
+String replaceString = "_";
+String serializedString = createSerializedString(srcRegex, replaceString);
+serializedString = serializedString + ":ddd";
+RegexMountPointResolvedDstPathReplaceInterceptor interceptor =
+RegexMountPointResolvedDstPathReplaceInterceptor
+.deserializeFromString(serializedString);
+Assert.assertEquals(interceptor, null);
+  }
+
+  @Test
+  public void testSerialization() {
+String srcRegex = "word1";
+String replaceString = "word2";
+String serializedString = createSerializedString(srcRegex, replaceString);
+RegexMountPointResolvedDstPathReplaceInterceptor interceptor =
+new RegexMountPointResolvedDstPathReplaceInterceptor(srcRegex,
+replaceString);
+Assert.assertEquals(interceptor.serializeToString(), serializedString);
+  }
+
+  @Test

Review comment:
   No worries. Thanks for addressing. :-)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479417)
Time Spent: 4h 20m  (was: 4h 10m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> 


[jira] [Comment Edited] (HADOOP-15171) native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly handles some zlib errors

2020-09-06 Thread Michael South (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191286#comment-17191286
 ] 

Michael South edited comment on HADOOP-15171 at 9/6/20, 3:18 PM:
-

Issue should be closed, unfounded. The Hive Orc driver creates a decompression 
object and repeatedly calls it to decompress Orc blocks. It treats each block 
as an entirely separate chunk (stream), completely decompressing each with one 
call to ...{{_inflateBytesDirect()}}. However, it wasn't calling 
{{inflateReset()}} or {{inflateEnd()}} / {{inflateInit()}} between the streams, 
which naturally left things in a confused state. It appears to be fixed in 
trunk Hive.

Also, returning 0 for {{Z_BUF_ERROR}} or {{Z_NEED_DICT}} is correct, and should 
not throw an error. The Java decompression object is agnostic as to whether the 
application is working in stream or all-at-once mode. The only determination of 
which mode is active is whether the application (Hive Orc driver in this case) 
is passing the entire input in one chunk and is allocating sufficient space for 
all of the output. Therefore, the application must check for a zero return. If 
no-progress (zero return) is an impossible situation then it can throw an 
exception; otherwise it needs to look at one or more of ...{{_finished()}}, 
...{{_getRemaining()}}, and/or ...{{_needDict()}} to figure out what's needed 
to make further progress. (It would be nice if JNI exposed the {{avail_out}} 
field, but if it's not an input or dictionary issue it must be a full output 
buffer.)

There *is* a very minor bug in ...{{inflateBytesDirect()}}. It's calling 
{{inflate()}} with {{Z_PARTIAL_FLUSH}}, which only applies to {{deflate()}}. It 
should be {{Z_NO_FLUSH}}. However, in the current zlib code (1.2.11) the 
{{flush}} parameter only affects the return code, and it only checks whether or 
not it is {{Z_FINISH}}.

 

Edit: The Zlib docs (overall, very excellent) kind of assume you realize that 
the internals, allocated in ...{{init()}}, are only valid for one stream run. 
The docs *do* say that ...{{end()}} deallocates these internal buffers; 
therefore to reuse the base compressor / decompressor you need to call 
...{{init()}} again to re-allocate them (otherwise NPE). The docs also state 
that ...{{reset()}} is equivalent to calling ...{{end()}} followed by 
...{{init()}}, except that it resets the internals rather than deallocating and 
reallocating them.
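
A compact illustration of the reset and zero-return points above, using the 
JDK's own zlib binding rather than the Hadoop native codec (a sketch under 
that assumption, not the Hive or Hadoop code):

{code:java}
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

final class BlockInflater {
  private BlockInflater() {}

  // Decompress one self-contained zlib stream with a reused Inflater.
  // The reset() between blocks is exactly what the Orc driver was missing.
  static byte[] inflateBlock(Inflater inf, byte[] block, int maxOut)
      throws DataFormatException {
    inf.reset();          // fresh internal state for this stream
    inf.setInput(block);
    byte[] out = new byte[maxOut];
    int total = 0;
    while (!inf.finished() && total < out.length) {
      int n = inf.inflate(out, total, out.length - total);
      if (n == 0) {
        // A zero return is not an error by itself; the caller must consult
        // finished()/needsDictionary()/getRemaining() to see what's missing.
        break;
      }
      total += n;
    }
    return Arrays.copyOf(out, total);
  }
}
{code}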



> native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly 
> handles some zlib errors
> -
>
> Key: HADOOP-15171
> URL: https://issues.apache.org/jira/browse/HADOOP-15171
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sergey Shelukhin
>Assignee: Lokesh Jain
>Priority: Blocker
>
> While reading some ORC file via direct buffers, Hive gets a 0-sized buffer 
> for a particular compressed segment of the file. We narrowed it down to 
> Hadoop native ZLIB codec; when the data is copied to 

[jira] [Work logged] (HADOOP-17246) Fix build the hadoop-build Docker image failed

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17246?focusedWorklogId=479412&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479412
 ]

ASF GitHub Bot logged work on HADOOP-17246:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 15:17
Start Date: 06/Sep/20 15:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-687817052


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 36s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  hadolint  |   0m  4s |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 16s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 39s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  39m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2277/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2277 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
   | uname | Linux afc087efce6a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1841a5bb03f |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2277/2/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479412)
Time Spent: 50m  (was: 40m)

> Fix build the hadoop-build Docker image failed
> --
>
> Key: HADOOP-17246
> URL: https://issues.apache.org/jira/browse/HADOOP-17246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: dockerfile, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When I build the hadoop-build Docker image under macOS, it fails with:
> {code:java}
> 
> Command "/usr/bin/python -u -c "import setuptools, 
> tokenize;__file__='/tmp/pip-build-vKHcWu/isort/setup.py';exec(compile(getattr(tokenize,
>  'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
> install --record /tmp/pip-odL0bY-record/install-record.txt 
> --single-version-externally-managed --compile" failed with error code 1 in 
> /tmp/pip-build-vKHcWu/isort/
> You are using pip version 8.1.1, however version 20.2.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The command '/bin/bash -o pipefail -c pip2 install configparser==4.0.2
>  pylint==1.9.2' returned a non-zero code: 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15171) native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly handles some zlib errors

2020-09-06 Thread Michael South (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191286#comment-17191286
 ] 

Michael South commented on HADOOP-15171:


This issue should be closed as unfounded. The Hive ORC driver creates a 
decompression object and repeatedly calls it to decompress ORC blocks. It 
treats each block as an entirely separate chunk (stream), completely 
decompressing each with one call to ...{{_inflateBytesDirect()}}. However, it 
wasn't calling {{inflateReset()}} or {{inflateEnd()}} / {{inflateInit()}} 
between the streams, which naturally left things in a confused state. This 
appears to be fixed in trunk Hive.

Also, returning 0 for {{Z_BUF_ERROR}} or {{Z_NEED_DICT}} is correct, and should 
not throw an error. The Java decompression object is agnostic as to whether the 
application is working in stream or all-at-once mode. The only thing that 
determines which mode is in use is whether the application (the Hive ORC driver 
in this case) passes the entire input in one chunk and allocates sufficient 
space for all of the output. Therefore, the application must check for a zero 
return. If no progress (a zero return) is an impossible situation, it can throw 
an exception; otherwise it needs to look at one or more of ...{{_finished()}}, 
...{{_getRemaining()}}, and/or ...{{_needDict()}} to figure out what's needed 
to make further progress. (It would be nice if JNI exposed the {{avail_out}} 
field, but if it's not an input or dictionary issue, it must be a full output 
buffer.)

There *is* a very minor bug in ...{{inflateBytesDirect()}}. It's calling 
{{inflate()}} with {{Z_PARTIAL_FLUSH}}, which only applies to {{deflate()}}. It 
should be {{Z_NO_FLUSH}}. However, in the current zlib code (1.2.11) the 
{{flush}} parameter only affects the return code, and it only checks whether or 
not it is {{Z_FINISH}}.
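
To make the pattern concrete, here is a minimal sketch of the reset-per-stream 
and zero-return handling described above. It uses the plain JDK 
{{java.util.zip.Inflater}} rather than the Hadoop native codec, and the class 
and method names are illustrative only:

{code:java}
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

public class PerBlockInflate {

  // Decompress one self-contained deflate stream (e.g. an ORC compression
  // block) with a reused Inflater, resetting it between streams.
  static byte[] inflateBlock(Inflater inf, byte[] block, int maxOut)
      throws DataFormatException {
    inf.reset();             // analogous to inflateReset() between streams
    inf.setInput(block);     // all-at-once mode: the whole block in one chunk
    byte[] out = new byte[maxOut];
    int total = 0;
    while (total < maxOut && !inf.finished()) {
      int n = inf.inflate(out, total, maxOut - total);
      if (n == 0) {
        // A zero return is not an error by itself: inspect the state to
        // decide whether progress is still possible.
        if (inf.needsDictionary()) {
          throw new DataFormatException("preset dictionary required");
        }
        if (inf.needsInput()) {
          throw new DataFormatException("truncated compressed block");
        }
        break; // no input or dictionary issue: the output side is done
      }
      total += n;
    }
    return Arrays.copyOf(out, total);
  }
}
{code}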

> native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly 
> handles some zlib errors
> -
>
> Key: HADOOP-15171
> URL: https://issues.apache.org/jira/browse/HADOOP-15171
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sergey Shelukhin
>Assignee: Lokesh Jain
>Priority: Blocker
>
> While reading some ORC file via direct buffers, Hive gets a 0-sized buffer 
> for a particular compressed segment of the file. We narrowed it down to 
> Hadoop native ZLIB codec; when the data is copied to heap-based buffer and 
> the JDK Inflater is used, it produces correct output. Input is only 127 bytes 
> so I can paste it here.
> All the other (many) blocks of the file are decompressed without problems by 
> the same code.
> {noformat}
> 2018-01-13T02:47:40,815 TRACE [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Decompressing 
> 127 bytes to dest buffer pos 524288, limit 786432
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: The codec has 
> produced 0 bytes for 127 bytes at pos 0, data hash 1719565039: [e3 92 e1 62 
> 66 60 60 10 12 e5 98 e0 27 c4 c7 f1 e8 12 8f 40 c3 7b 5e 89 09 7f 6e 74 73 04 
> 30 70 c9 72 b1 30 14 4d 60 82 49 37 bd e7 15 58 d0 cd 2f 31 a1 a1 e3 35 4c fa 
> 15 a3 02 4c 7a 51 37 bf c0 81 e5 02 12 13 5a b6 9f e2 04 ea 96 e3 62 65 b8 c3 
> b4 01 ae fd d0 72 01 81 07 87 05 25 26 74 3c 5b c9 05 35 fd 0a b3 03 50 7b 83 
> 11 c8 f2 c3 82 02 0f 96 0b 49 34 7c fa ff 9f 2d 80 01 00
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Fell back to 
> JDK decompressor with memcopy; got 155 bytes
> {noformat}
> Hadoop version is based on 3.1 snapshot.
> The size of libhadoop.so is 824403 bytes, and libgplcompression.so is 78273 
> bytes, FWIW. Not sure how to extract versions from those. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17246) Fix failure to build the hadoop-build Docker image

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17246?focusedWorklogId=479408=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479408
 ]

ASF GitHub Bot logged work on HADOOP-17246:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 14:39
Start Date: 06/Sep/20 14:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-687807203


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2277/2/console in 
case of problems.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479408)
Time Spent: 40m  (was: 0.5h)

> Fix failure to build the hadoop-build Docker image
> --
>
> Key: HADOOP-17246
> URL: https://issues.apache.org/jira/browse/HADOOP-17246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: dockerfile, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When I build the hadoop-build Docker image under macOS, the build fails with:
> {code:java}
> 
> Command "/usr/bin/python -u -c "import setuptools, 
> tokenize;__file__='/tmp/pip-build-vKHcWu/isort/setup.py';exec(compile(getattr(tokenize,
>  'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
> install --record /tmp/pip-odL0bY-record/install-record.txt 
> --single-version-externally-managed --compile" failed with error code 1 in 
> /tmp/pip-build-vKHcWu/isort/
> You are using pip version 8.1.1, however version 20.2.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The command '/bin/bash -o pipefail -c pip2 install configparser==4.0.2
>  pylint==1.9.2' returned a non-zero code: 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2020-09-06 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14176:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.
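
To illustrate why the bundled defaults win, here is a minimal sketch of 
Hadoop's configuration-resource precedence. It assumes distcp-default.xml is on 
the classpath; the property name is taken from the snippet above:

{code:java}
import org.apache.hadoop.mapred.JobConf;

public class DistCpConfPrecedence {
  public static void main(String[] args) {
    // JobConf loads mapred-default.xml and mapred-site.xml from the classpath.
    JobConf conf = new JobConf();
    // DistCp layers its own defaults on top of the job configuration; a
    // resource added later overrides earlier values unless marked final.
    conf.addResource("distcp-default.xml");
    // With distcp-default.xml carrying mapred.job.map.memory.mb=1024, this
    // prints 1024 even if mapred-site.xml configured a larger value.
    System.out.println(conf.get("mapred.job.map.memory.mb"));
  }
}
{code}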



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2020-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191270#comment-17191270
 ] 

Hadoop QA commented on HADOOP-14176:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HADOOP-14176 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14176 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859412/HADOOP-14176-branch-2.004.patch
 |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/55/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2020-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191268#comment-17191268
 ] 

Hadoop QA commented on HADOOP-16039:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HADOOP-16039 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16039 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955833/HADOOP-16039-branch-2-001.patch
 |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/54/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2020-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191267#comment-17191267
 ] 

Hadoop QA commented on HADOOP-13091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HADOOP-13091 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13091 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803138/HADOOP-13091.004.patch
 |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/53/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: distcp
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If either of the CRC retrievals fails for any 
> reason, an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
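
For concreteness, here is a minimal sketch of the strict variant proposed 
above. The method and class names are hypothetical, not the actual DistCp code:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StrictCrcCheck {

  // Strict comparison: any failure to obtain either checksum, including a
  // null from a FileSystem that does not support checksums, is an error
  // rather than an implicit "equal".
  static boolean strictChecksumsAreEqual(FileSystem sourceFS, Path source,
      FileSystem targetFS, Path target) throws IOException {
    // Let IOExceptions propagate instead of swallowing them.
    FileChecksum sourceChecksum = sourceFS.getFileChecksum(source);
    FileChecksum targetChecksum = targetFS.getFileChecksum(target);
    if (sourceChecksum == null || targetChecksum == null) {
      throw new IOException("Checksum unavailable for " + source + " or "
          + target + "; failing strict CRC check");
    }
    return sourceChecksum.equals(targetChecksum);
  }
}
{code}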



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2020-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191269#comment-17191269
 ] 

Hadoop QA commented on HADOOP-10738:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 14s{color} 
| {color:red} HADOOP-10738 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-10738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859700/HADOOP-10738-branch-2.001.patch
 |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/56/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2020-09-06 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191264#comment-17191264
 ] 

Masatake Iwasaki commented on HADOOP-16039:
---

Updated the target version in preparation for the 2.10.1 release.

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2020-09-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16039:
--
Target Version/s: 2.10.2  (was: 2.10.1)

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2020-09-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13091:
--
Target Version/s: 2.10.2  (was: 2.10.1)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: distcp
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If either of the CRC retrievals fails for any 
> reason, an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2020-09-06 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191263#comment-17191263
 ] 

Masatake Iwasaki commented on HADOOP-13091:
---

Updated the target version in preparation for the 2.10.1 release.

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: distcp
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If either of the CRC retrievals fails for any 
> reason, an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2020-09-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10738:
--
Target Version/s: 2.10.2  (was: 2.10.1)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2020-09-06 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191262#comment-17191262
 ] 

Masatake Iwasaki commented on HADOOP-10738:
---

Updated the target version in preparation for the 2.10.1 release.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2020-09-06 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17191260#comment-17191260
 ] 

Masatake Iwasaki commented on HADOOP-14176:
---

Updated the target version in preparation for the 2.10.1 release.

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2020-09-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-14176:
--
Target Version/s: 2.10.2  (was: 2.10.1)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=479400=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479400
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 11:37
Start Date: 06/Sep/20 11:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-687765095


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 32s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 55s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 52s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 47s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 47s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 5 new 
+ 182 unchanged - 1 fixed = 187 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   2m 54s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 29s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  99m 13s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 311m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 2a8252a0d357 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git 

[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=479396=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479396
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 09:55
Start Date: 06/Sep/20 09:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-687743536


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 47s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  0s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 27s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; 
consider switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 21s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 54s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 20s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 20s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 43s |  root: The patch generated 2 new 
+ 182 unchanged - 1 fixed = 184 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 45s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  97m 55s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 308m 22s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux bdb50cea41e2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1841a5bb03f |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=479393=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479393
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 06/Sep/20 07:48
Start Date: 06/Sep/20 07:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-687718737


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  hadoop-azure in the patch passed.  
|
   | -1 :x: |  asflicense  |   0m 32s |  The patch generated 2 ASF License 
warnings.  |
   |  |   |  68m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint |
   | uname | Linux 59e40bddadd5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1841a5bb03f |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/1/testReport/ |
   | asflicense | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 310 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Issue Time Tracking
---

Worklog Id: (was: 479393)
Time Spent: 0.5h  (was: 20m)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests 

[jira] [Created] (HADOOP-17248) Support ChRootedFileSystem level cache for Regex Mount points

2020-09-06 Thread zhenzhao wang (Jira)
zhenzhao wang created HADOOP-17248:
--

 Summary: Support ChRootedFileSystem level cache for Regex Mount 
points
 Key: HADOOP-17248
 URL: https://issues.apache.org/jira/browse/HADOOP-17248
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: zhenzhao wang


Support a ChRootedFileSystem-level cache for Regex Mount points, so users don't 
need to change the default rename strategy settings to use the rename API.






[jira] [Work started] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17191 started by Bilahari T H.
-
> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations automatically. 






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Status: Patch Available  (was: In Progress)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations automatically. 






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations, e.g. an HNS account with SharedKey and OAuth, a 
NonHNS account with SharedKey, etc.
The expectation is to automate these runs with different combinations. This 
will help the developer to run the integration tests with different variants of 
configurations automatically. 

  was:
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations, e.g. an HNS account with SharedKey and OAuth, a 
NonHNS account with SharedKey, etc.
The expectation is to automate these runs with different combinations. This 
will help the developer to run the integration tests with different variants of 
configurations. 


> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations automatically. 






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations, e.g. an HNS account with SharedKey and OAuth, a 
NonHNS account with SharedKey, etc.
The expectation is to automate these runs with different combinations. This 
will help the developer to run the integration tests with different variants of 
configurations. 

  was:
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. This 
will help the developer to run the integration tests with different variants of 
configurations. 


> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations. 






[jira] [Work logged] (HADOOP-1719) Improve the utilization of shuffle copier threads

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-1719?focusedWorklogId=479391=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479391
 ]

ASF GitHub Bot logged work on HADOOP-1719:
--

Author: ASF GitHub Bot
Created on: 06/Sep/20 06:42
Start Date: 06/Sep/20 06:42
Worklog Time Spent: 10m 
  Work Description: bilaharith edited a comment on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-687710639


   **Driver test results using accounts in Canary**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 16
   
   
   The above error is tracked under the JIRA: 
https://issues.apache.org/jira/browse/HADOOP-17160
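   
   For context, a loop along these lines would reproduce the three scenario runs 
   above. This is only a sketch: the scenario names and the mvn command are taken 
   from this comment, while apply_scenario_config is a hypothetical placeholder; 
   in a real run the auth type and namespace settings come from the hadoop-azure 
   test configuration, not from this wrapper.
   
   #!/usr/bin/env bash
   # Placeholder: a real setup would switch the ABFS test configuration here.
   apply_scenario_config() {
     echo "configuring scenario: $1"
   }
   
   for scenario in HNS-OAuth HNS-SharedKey NonHNS-SharedKey; do
     apply_scenario_config "$scenario"
     mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify \
       | tee "Test-${scenario}.log"
   done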





Issue Time Tracking
---

Worklog Id: (was: 479391)
Time Spent: 0.5h  (was: 20m)

> Improve the utilization of shuffle copier threads
> -
>
> Key: HADOOP-1719
> URL: https://issues.apache.org/jira/browse/HADOOP-1719
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Amar Kamat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.16.0
>
> Attachments: 1719.1.patch, 1719.patch, 1719.patch, HADOOP-1719.patch, 
> HADOOP-1719.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the current design, once a set of copies has been scheduled, the scheduler 
> (the main loop in fetchOutputs) won't schedule anything further until it hears 
> back from at least one of the copier threads. Due to this, the main loop won't 
> query the TaskTracker asking for new map locations and may not be using all 
> the copiers effectively. This may not be an issue for small-sized map 
> outputs, where at steady state such notifications arrive frequently.
> Ideally, we should schedule all that we can and, depending on how busy we 
> currently are, query the TaskTracker for more map locations.







[jira] [Reopened] (HADOOP-17160) ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reopened HADOOP-17160:
---

The issue is still present.

> ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always
> --
>
> Key: HADOOP-17160
> URL: https://issues.apache.org/jira/browse/HADOOP-17160
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Steve Loughran
>Priority: Major
>
> The test ITestAbfsInputStreamStatistics#testReadAheadCounters is timing out 
> always.






[jira] [Work logged] (HADOOP-1719) Improve the utilization of shuffle copier threads

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-1719?focusedWorklogId=479390=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479390
 ]

ASF GitHub Bot logged work on HADOOP-1719:
--

Author: ASF GitHub Bot
Created on: 06/Sep/20 06:40
Start Date: 06/Sep/20 06:40
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-687710639


   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestAbfsInputStreamStatistics.testReadAheadCounters:346 » 
TestTimedOut test t...
   [INFO] 
   [ERROR] Tests run: 451, Failures: 0, Errors: 1, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 16





Issue Time Tracking
---

Worklog Id: (was: 479390)
Time Spent: 20m  (was: 10m)

> Improve the utilization of shuffle copier threads
> -
>
> Key: HADOOP-1719
> URL: https://issues.apache.org/jira/browse/HADOOP-1719
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Amar Kamat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.16.0
>
> Attachments: 1719.1.patch, 1719.patch, 1719.patch, HADOOP-1719.patch, 
> HADOOP-1719.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the current design, once a set of copies has been scheduled, the scheduler 
> (the main loop in fetchOutputs) won't schedule anything further until it hears 
> back from at least one of the copier threads. Due to this, the main loop won't 
> query the TaskTracker asking for new map locations and may not be using all 
> the copiers effectively. This may not be an issue for small-sized map 
> outputs, where at steady state such notifications arrive frequently.
> Ideally, we should schedule all that we can and, depending on how busy we 
> currently are, query the TaskTracker for more map locations.








[jira] [Updated] (HADOOP-1719) Improve the utilization of shuffle copier threads

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-1719:
---
Labels: pull-request-available  (was: )

> Improve the utilization of shuffle copier threads
> -
>
> Key: HADOOP-1719
> URL: https://issues.apache.org/jira/browse/HADOOP-1719
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Amar Kamat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.16.0
>
> Attachments: 1719.1.patch, 1719.patch, 1719.patch, HADOOP-1719.patch, 
> HADOOP-1719.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the current design, once a set of copies has been scheduled, the scheduler 
> (the main loop in fetchOutputs) won't schedule anything further until it hears 
> back from at least one of the copier threads. Due to this, the main loop won't 
> query the TaskTracker asking for new map locations and may not be using all 
> the copiers effectively. This may not be an issue for small-sized map 
> outputs, where at steady state such notifications arrive frequently.
> Ideally, we should schedule all that we can and, depending on how busy we 
> currently are, query the TaskTracker for more map locations.






[jira] [Work logged] (HADOOP-1719) Improve the utilization of shuffle copier threads

2020-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-1719?focusedWorklogId=479389=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479389
 ]

ASF GitHub Bot logged work on HADOOP-1719:
--

Author: ASF GitHub Bot
Created on: 06/Sep/20 06:38
Start Date: 06/Sep/20 06:38
Worklog Time Spent: 10m 
  Work Description: bilaharith opened a new pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278


   ADLS Gen 2 supports accounts with and without hierarchical namespace 
support. ABFS driver supports various authorization mechanisms like OAuth, 
SharedKey, Shared Access Signature. The integration tests need to be executed 
against accounts with and without hierarchical namespace support using various 
authorization mechanisms.
   Currently the developer has to manually run the tests with different 
combinations of configurations.
   The expectation is to automate these runs with different combinations.
   The PR introduces a shell script with which the developer can specify the 
configuration variants and get the different combinations of tests executed.
   
   The script runtests.sh contains templates for 3 combinations of tests. If any 
   new flags or properties are introduced with a code change, add the 
   corresponding combinations with their possible configurations to runtests.sh.
   
   Adding a combination of tests involves setting the variable scenario (e.g. 
   HNS-OAuth) and specifying the configurations for that particular combination 
   with 2 arrays named properties and values. Specify the property names within 
   the array properties and the corresponding values in the values array; the 
   pairing is determined by array index, so the value for the property at index 1 
   of properties must be given at index 1 of values. Call the function 
   runtestwithconfs once these 3 values are set. The script runtests.sh is then 
   ready to be run, as sketched below.
   
   Once the tests are completed, logs will be present in the directory testlogs. 
   Consolidated test results will be present in the file 
   Test-$starttime-Results.log, where $starttime is the start time of the test 
   run. Similarly, the full test report can be found in individual log files, one 
   per scenario, named Test-$starttime-Logs-$scenario. Please attach the 
   consolidated test results from Test-$starttime-Results.log to the respective 
   PRs.
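   
   As an illustration, here is a minimal sketch of one such combination, assuming 
   the conventions described above (the scenario variable, the parallel 
   properties and values arrays, and the runtestwithconfs function). The two 
   fs.azure.* keys are assumed example names, not taken from the patch, and 
   runtestwithconfs is stubbed here so the fragment runs on its own; in 
   runtests.sh the real function applies the settings and launches the test run.
   
   #!/usr/bin/env bash
   # Stub standing in for the function runtests.sh provides; it only prints
   # what would be configured for the scenario.
   runtestwithconfs() {
     echo "scenario: ${scenario}"
     for i in "${!properties[@]}"; do
       echo "  ${properties[$i]} = ${values[$i]}"   # pairing is by array index
     done
     # The real function would then run something like:
     #   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   }
   
   # One combination: illustrative ABFS configuration keys and values.
   scenario="HNS-OAuth"
   properties=("fs.azure.account.auth.type" "fs.azure.test.namespace.enabled")
   values=("OAuth" "true")
   runtestwithconfs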





Issue Time Tracking
---

Worklog Id: (was: 479389)
Remaining Estimate: 0h
Time Spent: 10m

> Improve the utilization of shuffle copier threads
> -
>
> Key: HADOOP-1719
> URL: https://issues.apache.org/jira/browse/HADOOP-1719
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Amar Kamat
>Priority: Major
> Fix For: 0.16.0
>
> Attachments: 1719.1.patch, 1719.patch, 1719.patch, HADOOP-1719.patch, 
> HADOOP-1719.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the current design, once a set of copies has been scheduled, the scheduler 
> (the main loop in fetchOutputs) won't schedule anything further until it hears 
> back from at least one of the copier threads. Due to this, the main loop won't 
> query the TaskTracker asking for new map locations and may not be using all 
> the copiers effectively. This may not be an issue for small-sized map 
> outputs, where at steady state such notifications arrive frequently.
> Ideally, we should schedule all that we can and, depending on how busy we 
> currently are, query the TaskTracker for more map locations.






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. This 
will help the developer to run the integration tests with different variants of 
configurations. 

  was:
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 


> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations. 






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 

  was:
ADLS Gen 2 supports accounts with and without hierarchical 
namespace support. ABFS driver supports various authorization mechanisms like 
OAuth, SharedKey, Shared Access Signature. The integration tests need to be 
executed against accounts with and without hierarchical namespace support using 
various authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 


> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations.
> The expectation is to automate these runs with different combinations. 






[jira] [Updated] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Summary: ABFS: Run the integration tests with various combinations of 
configurations and publish a consolidated results  (was: ABFS: Run tests with 
all the auth types)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical 
> namespace support. ABFS driver supports various authorization mechanisms like 
> OAuth, SharedKey, Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations.
> The expectation is to automate these runs with different combinations. 






[jira] [Updated] (HADOOP-17191) ABFS: Run tests with all the auth types

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey,
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 

  was:
ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
ABFS driver supports various authorization mechanisms like OAuth, SharedKey,
Shared Access Signature. The integration tests need to be executed against 
accounts with and without hierarchical namespace support using various 
authorization mechanisms.
Currently the developer has to manually run the tests with different 
combinations of configurations.
The intention is to automate these runs with different combinations. 


> ABFS: Run tests with all the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
> The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
> and Shared Access Signature (SAS). The integration tests need to be executed 
> against accounts with and without hierarchical namespace support, using each 
> of these authorization mechanisms.
> Currently the developer has to run the tests manually with different 
> combinations of configurations.
> The expectation is to automate these runs with different combinations. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17191) ABFS: Run tests with all the auth types

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
and Shared Access Signature (SAS). The integration tests need to be executed 
against accounts with and without hierarchical namespace support, using each 
of these authorization mechanisms.
Currently the developer has to run the tests manually with different 
combinations of configurations.
The intention is to automate these runs with different combinations. 

  was:
ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
and Shared Access Signature (SAS). The integration tests need to be executed 
against accounts with and without hierarchical namespace support, using each 
of these authorization mechanisms.

Currently the developer has to run the tests manually with different 
combinations of configurations.


> ABFS: Run tests with all the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
> The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
> and Shared Access Signature (SAS). The integration tests need to be executed 
> against accounts with and without hierarchical namespace support, using each 
> of these authorization mechanisms.
> Currently the developer has to run the tests manually with different 
> combinations of configurations.
> The intention is to automate these runs with different combinations. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17191) ABFS: Run tests with all the auth types

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
and Shared Access Signature (SAS). The integration tests need to be executed 
against accounts with and without hierarchical namespace support, using each 
of these authorization mechanisms.
Currently the developer has to run the tests manually with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 

  was:
ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
and Shared Access Signature (SAS). The integration tests need to be executed 
against accounts with and without hierarchical namespace support, using each 
of these authorization mechanisms.
Currently the developer has to run the tests manually with different 
combinations of configurations.
The expectation is to automate these runs with different combinations. 


> ABFS: Run tests with all the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
> The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
> and Shared Access Signature (SAS). The integration tests need to be executed 
> against accounts with and without hierarchical namespace support, using each 
> of these authorization mechanisms.
> Currently the developer has to run the tests manually with different 
> combinations of configurations.
> The expectation is to automate these runs with different combinations. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17191) ABFS: Run tests with all the auth types

2020-09-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Description: 
ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
and Shared Access Signature (SAS). The integration tests need to be executed 
against accounts with and without hierarchical namespace support, using each 
of these authorization mechanisms.

Currently the developer has to run the tests manually with different 
combinations of configurations.

> ABFS: Run tests with all the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts both with and without a hierarchical namespace. 
> The ABFS driver supports several authorization mechanisms: OAuth, SharedKey, 
> and Shared Access Signature (SAS). The integration tests need to be executed 
> against accounts with and without hierarchical namespace support, using each 
> of these authorization mechanisms.
> Currently the developer has to run the tests manually with different 
> combinations of configurations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org