[GitHub] [hadoop] xiaoyuyao commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc..

2020-03-12 Thread GitBox
xiaoyuyao commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ 
AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, 
rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598553328
 
 
   +1. Thanks @jojochuang for the update. There is a whitespace-related 
checkstyle issue which you can fix at commit. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brfrn169 edited a comment on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong

2020-03-12 Thread GitBox
brfrn169 edited a comment on issue #1889: HDFS-15215 The Timestamp for longest 
write/read lock held log is wrong
URL: https://github.com/apache/hadoop/pull/1889#issuecomment-598495978
 
 
   @goiri Thank you for approving this!
   
   @xkrogen Could you please take a look at it when you get a chance?





[GitHub] [hadoop] brfrn169 commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong

2020-03-12 Thread GitBox
brfrn169 commented on issue #1889: HDFS-15215 The Timestamp for longest 
write/read lock held log is wrong
URL: https://github.com/apache/hadoop/pull/1889#issuecomment-598495978
 
 
   @goiri Thank you for approving this!
   
   @xkrogen Could you please take a look at it when you get a chance?





[GitHub] [hadoop] jojochuang commented on issue #1895: YARN-10195. Dependency divergence building Timeline Service on HBase 2.2.0 and above.

2020-03-12 Thread GitBox
jojochuang commented on issue #1895: YARN-10195. Dependency divergence building 
Timeline Service on HBase 2.2.0 and above.
URL: https://github.com/apache/hadoop/pull/1895#issuecomment-598493377
 
 
   The produced build passed Cloudera's internal L0 test.





[GitHub] [hadoop] goiri commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong

2020-03-12 Thread GitBox
goiri commented on issue #1889: HDFS-15215 The Timestamp for longest write/read 
lock held log is wrong
URL: https://github.com/apache/hadoop/pull/1889#issuecomment-598487732
 
 
   I'm approving this but I'd like for @xkrogen to take a look if possible.





[GitHub] [hadoop] karthikhw commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit

2020-03-12 Thread GitBox
karthikhw commented on issue #1870: HDFS-15201 SnapshotCounter hits 
MaxSnapshotID limit
URL: https://github.com/apache/hadoop/pull/1870#issuecomment-598484891
 
 
   Thank you @szetszwo. I changed it back to -1.





[GitHub] [hadoop] szetszwo commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit

2020-03-12 Thread GitBox
szetszwo commented on issue #1870: HDFS-15201 SnapshotCounter hits 
MaxSnapshotID limit
URL: https://github.com/apache/hadoop/pull/1870#issuecomment-598480242
 
 
   Let's keep it "-1" since we are using 28 for the moment.  If there is still a 
problem later on, we can think about what to do then; we would not necessarily 
change it to 31.





[jira] [Commented] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058291#comment-17058291
 ] 

Hadoop QA commented on HADOOP-16912:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 16 unchanged - 2 fixed = 19 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12996594/HADOOP-16912.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eec22bd9fc08 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0a9b3c9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16791/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16791/testReport/ |
| Max. process+thread count | 2370 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16791/console |
| Powered by | Apache 

[jira] [Updated] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-12 Thread Fengnan Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HADOOP-16912:

Attachment: HADOOP-16912.002.patch

> Emit per priority rpc queue time and processing time from DecayRpcScheduler
> ---
>
> Key: HADOOP-16912
> URL: https://issues.apache.org/jira/browse/HADOOP-16912
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: metrics
> Attachments: HADOOP-16912.001.patch, HADOOP-16912.002.patch
>
>
> At ipc Server level we have the overall rpc queue time and processing time 
> for the whole CallQueueManager. In the case of using FairCallQueue, it will 
> be great to know the per queue/priority level rpc queue time since many times 
> we want to keep certain queues to meet some queue time SLA for customers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-12 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058253#comment-17058253
 ] 

Fengnan Li commented on HADOOP-16912:
-

Uploaded [^HADOOP-16912.002.patch] to fix the checkstyle and unit test issues.

> Emit per priority rpc queue time and processing time from DecayRpcScheduler
> ---
>
> Key: HADOOP-16912
> URL: https://issues.apache.org/jira/browse/HADOOP-16912
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: metrics
> Attachments: HADOOP-16912.001.patch, HADOOP-16912.002.patch
>
>
> At ipc Server level we have the overall rpc queue time and processing time 
> for the whole CallQueueManager. In the case of using FairCallQueue, it will 
> be great to know the per queue/priority level rpc queue time since many times 
> we want to keep certain queues to meet some queue time SLA for customers.






[GitHub] [hadoop] hadoop-yetus commented on issue #1895: YARN-10195. Dependency divergence building Timeline Service on HBase 2.2.0 and above.

2020-03-12 Thread GitBox
hadoop-yetus commented on issue #1895: YARN-10195. Dependency divergence 
building Timeline Service on HBase 2.2.0 and above.
URL: https://github.com/apache/hadoop/pull/1895#issuecomment-598403569
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 10s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 12s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1895/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1895 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux a7fff5e6fb36 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0a9b3c9 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1895/1/testReport/ |
   | Max. process+thread count | 312 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1895/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1861: HADOOP-13230. Optionally retain directory markers

2020-03-12 Thread GitBox
hadoop-yetus commented on issue #1861: HADOOP-13230. Optionally retain 
directory markers
URL: https://github.com/apache/hadoop/pull/1861#issuecomment-598392416
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 24s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 29s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.TestS3AGetFileStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 28818861734a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0a9b3c9 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/3/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] jojochuang opened a new pull request #1895: YARN-10195. Dependency divergence building Timeline Service on HBase 2.2.0 and above.

2020-03-12 Thread GitBox
jojochuang opened a new pull request #1895: YARN-10195. Dependency divergence 
building Timeline Service on HBase 2.2.0 and above.
URL: https://github.com/apache/hadoop/pull/1895
 
 
   
   Manually verified with
   
   `mvn clean install -Dhbase.profile=2.0 -Dhbase.two.version=2.2.0 -Dmaven.javadoc.skip=true -DskipTests`





[jira] [Assigned] (HADOOP-16921) NPE in s3a byte buffer block upload

2020-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16921:
---

Assignee: Steve Loughran

> NPE in s3a byte buffer block upload
> ---
>
> Key: HADOOP-16921
> URL: https://issues.apache.org/jira/browse/HADOOP-16921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> NPE in s3a upload when fs.s3a.fast.upload.buffer = bytebuffer






[jira] [Updated] (HADOOP-16213) Update guava to 27.0-jre in hadoop-project branch-3.1

2020-03-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16213:
-
Fix Version/s: 3.1.3

> Update guava to 27.0-jre in hadoop-project branch-3.1
> -
>
> Key: HADOOP-16213
> URL: https://issues.apache.org/jira/browse/HADOOP-16213
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.1.3
>
> Attachments: HADOOP-16213-branch-3.1.001.patch, 
> HADOOP-16213-branch-3.1.002.patch, HADOOP-16213-branch-3.1.003.patch, 
> HADOOP-16213-branch-3.1.004.patch, HADOOP-16213-branch-3.1.005.patch, 
> HADOOP-16213-branch-3.1.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.
> This is a sub-task for branch-3.1 from HADOOP-15960 to track issues on that 
> particular branch. 






[jira] [Commented] (HADOOP-13007) cherry pick s3 enhancements from PrestoS3FileSystem

2020-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058166#comment-17058166
 ] 

Steve Loughran commented on HADOOP-13007:
-

h2. Presto features

I just had a good look through the code to see what its directory logic was.

* docs: https://prestosql.io/docs/current/connector/hive.html#amazon-s3-configuration
* source: https://github.com/prestodb/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/s3/PrestoS3FileSystem.java

h3. Interesting points

* glacier objects can be skipped
* list always adds the trailing / for non-empty and then does the scan
* it doesn't create directories, so no need to recreate them


(I was about to say "demand-create AWS client", but I misread it. Interesting 
thought, though.)

h2. listLocatedStatus

does a LIST under path p + "/":

{code}
String key = keyFromPath(path);
if (!key.isEmpty()) {
    key += PATH_SEPARATOR;
}

ListObjectsRequest request = new ListObjectsRequest()
        .withBucketName(getBucketName(uri))
        .withPrefix(key)
        .withDelimiter(PATH_SEPARATOR);
{code}


mapping to statuses can skip objects whose getStorageClass() is "GLACIER"; they 
aren't visible at all.

{code}
private Iterator<LocatedFileStatus> statusFromObjects(List<S3ObjectSummary> objects)
{
    // NOTE: for encrypted objects, S3ObjectSummary.size() used below is NOT correct,
    // however, to get the correct size we'd need to make an additional request to get
    // user metadata, and in this case it doesn't matter.
    return objects.stream()
            .filter(object -> !object.getKey().endsWith(PATH_SEPARATOR))
            .filter(object -> !skipGlacierObjects || !isGlacierObject(object))
            .map(object -> new FileStatus(
                    object.getSize(),
                    false,
                    1,
                    BLOCK_SIZE.toBytes(),
                    object.getLastModified().getTime(),
                    qualifiedPath(new Path(PATH_SEPARATOR + object.getKey()))))
            .map(this::createLocatedFileStatus)
            .iterator();
}
{code}

Note: this feeds into delete(); intentional or not.

{code}
public boolean mkdirs(Path f, FsPermission permission)
{
    // no need to do anything for S3
    return true;
}
{code}

* but getFileStatus does do: HEAD, HEAD + /, LIST + /

{code}
return new FileStatus(
        getObjectSize(path, metadata),
        S3_DIRECTORY_OBJECT_CONTENT_TYPE.equals(metadata.getContentType()),
        1,
        BLOCK_SIZE.toBytes(),
        lastModifiedTime(metadata),
        qualifiedPath(path));
{code}


* file length inferred from metadata: the "x-amz-unencrypted-content-length" 
header is parsed. The docs acknowledge that the results of a list are not consistent.
* dir marker content type used for "is this a dir"

h3. input stream

optimised for forward reads; 

* doesn't do a HEAD at open
* lazy seek
* skips range (default 1MB) before (non-draining) abort() and reopen
* simply opens with initial range in GET, but no limit
* maps 416 ->  EOFException 
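
The lazy-seek behaviour above (skip a small forward range on the open stream, 
otherwise abort and reopen) can be sketched as follows. This is a hypothetical 
simplification, not the actual PrestoS3FileSystem code; the class, method, and 
constant names are invented:

```java
// Decide how a seek should be serviced: small forward seeks are cheaper to
// satisfy by reading and discarding bytes on the already-open stream than by
// aborting the HTTP connection and issuing a new ranged GET.
public class SeekPolicy {
    // The default skip range mentioned above: up to 1 MB is read-and-discarded.
    static final long MAX_SKIP = 1024 * 1024;

    /** True if seeking from pos to target should skip bytes in-stream. */
    static boolean shouldSkip(long pos, long target) {
        return target >= pos && target - pos <= MAX_SKIP;
    }

    public static void main(String[] args) {
        System.out.println(shouldSkip(0, 4096));      // small forward seek: skip
        System.out.println(shouldSkip(0, 10L << 20)); // 10 MB jump: abort + reopen
        System.out.println(shouldSkip(4096, 0));      // backward seek: abort + reopen
    }
}
```

The key trade-off is that a non-draining abort() discards the TCP connection, 
so for small gaps it is cheaper to read the bytes and throw them away.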




h3. delete()

* doesn't do bulk deletes, just lists children and recursively calls delete on 
them (very, very inefficient)
* for recursive delete, if glacier files are skipped, they don't get deleted. 
Interesting idea.
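
To see why the list-then-recurse pattern is expensive, here is a toy model of 
it over an in-memory tree (purely illustrative, not the Presto source): every 
object costs its own DELETE round trip, whereas a bulk DeleteObjects call can 
remove up to 1000 keys in a single request.

```java
import java.util.List;
import java.util.Map;

// Toy model of recursive delete: list children, recurse into each, then
// delete the path itself -- one DELETE request per object.
public class RecursiveDelete {
    static int deleteCalls = 0;

    static void delete(Map<String, List<String>> children, String path) {
        for (String child : children.getOrDefault(path, List.of())) {
            delete(children, child);
        }
        deleteCalls++; // each object is its own round trip
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "dir", List.of("dir/a", "dir/b"),
                "dir/a", List.of("dir/a/x"));
        delete(tree, "dir");
        // 4 objects deleted -> 4 round trips; a bulk delete would need 1
        System.out.println(deleteCalls);
    }
}
```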


h3. Metrics

implements AWS SDK stats collection in 
com.facebook.presto.hive.s3.PrestoS3FileSystemMetricCollector; these feed back 
to the Presto metrics

h3. create()

if overwrite=true, skips all checks (even for dest being a dir). 
{code}
if ((!overwrite) && exists(path)) {
    throw new IOException("File already exists:" + path);
}
{code}
Should we do that? It's, well, aggressive, but for apps which know what they 
are doing...

# A new transfer manager is created for each output stream.
# The entire file is written to the staging dir.
# The transfer manager is given the file to upload; it will partition and 
upload how it chooses.

So: no incremental writes, and disk is always used. But good simplicity and no 
risk of partial uploads remaining around. Omitting all existence checks does 
make for a faster write and avoids all 404s sneaking in.
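
The staged-upload write path (steps 1-3 above) boils down to this shape; a 
deliberately minimal sketch with the TransferManager hand-off stubbed out, and 
all names invented rather than taken from the Presto source:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;

// Sketch of "stage the whole file on local disk, upload once on close()".
public class StagedOutput implements AutoCloseable {
    private final File staging;
    private final FileWriter writer;
    boolean uploaded = false;

    StagedOutput() throws IOException {
        staging = Files.createTempFile("staging-", ".tmp").toFile();
        writer = new FileWriter(staging);
    }

    void write(String data) throws IOException {
        writer.write(data); // everything lands on local disk first
    }

    @Override
    public void close() throws IOException {
        writer.close();
        // here the real code would hand `staging` to the SDK TransferManager,
        // which partitions and uploads the file however it chooses
        uploaded = true;
        staging.delete();
    }

    public static void main(String[] args) throws IOException {
        try (StagedOutput out = new StagedOutput()) {
            out.write("hello");
        }
    }
}
```

The cost is disk usage and no incremental upload; the benefit is that a failed 
writer leaves no partial multipart upload behind in the bucket.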

h2. Overall analysis

It's nicely minimal; optimised for directory trees with no markers. 
* input stream seems a lot less optimised than our code, which works better for 
backwards-seeking clients.
* output stream avoids 404s by omitting all probes. Something to consider.
* Metric wireup from AWS SDK looks simple enough for us to copy.
* it does handle client-side encryption (CSE), where the actual length < the 
listed length, but the caller has to do a HEAD to see this; if it ran off the 
result of listLocatedStatus and then did a seek off EOF-16, things would fail. 
Similarly: splitting.


What can we adopt?

# metrics
# maybe: the dest-is-dir check for overwrite=true
# dir content type get/check, i.e.:


{code}
isDir == (path.endsWith("/") && len == 0)
         || contentType == "application/x-directory"
{code}
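
As runnable Java, that directory test might look like the following sketch; 
only the content-type value comes from the discussion above, the class and 
method names are invented:

```java
// Directory-marker check: an object is a directory if it is an empty object
// whose key ends in "/", or if it carries the directory-marker content type.
public class DirCheck {
    static final String DIR_CONTENT_TYPE = "application/x-directory";

    static boolean isDirectory(String key, long length, String contentType) {
        return (key.endsWith("/") && length == 0)
                || DIR_CONTENT_TYPE.equals(contentType);
    }

    public static void main(String[] args) {
        System.out.println(isDirectory("data/logs/", 0, null));
        System.out.println(isDirectory("data/logs", 0, "application/x-directory"));
        System.out.println(isDirectory("data/logs/part-0000", 1024, "text/plain"));
    }
}
```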




[GitHub] [hadoop] mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file 
system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391786077
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
 
 Review comment:
   just saw this typo. will fix it in the next commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file 
system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391786649
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero read and write operation
+Assert.assertEquals("Mismatch in read operations", 0,
+statistics.getReadOps());
+Assert.assertEquals("Mismatch in write operations", 0,
+statistics.getWriteOps());
+
+FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+statistics.reset();
+outForOneOperation.write(testReadWriteOps.getBytes());
+FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+inForOneCall.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+//Test for one read and write operation
+Assert.assertEquals("Mismatch in read operations", 1,
+statistics.getReadOps());
+Assert.assertEquals("Mismatch in write operations", 1,
+statistics.getWriteOps());
+
+outForOneOperation.close();
+//Validating if Content is being written in the smallFile
+Assert.assertEquals("Mismatch in content validation", true,
+validateContent(fs, smallOperaionsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = fs.create(largeOperationsFile);
+statistics.reset();
+
+StringBuilder largeOperationsValidationString = new StringBuilder();
+for (int i = 0; i < 100; i++) {
+  outForLargeOperations.write(testReadWriteOps.getBytes());
+
+  //Creating the String for content Validation
+  largeOperationsValidationString.append(testReadWriteOps);
+}
+
+FSDataInputStream inForLargeCalls = fs.open(largeOperationsFile);
+
+for (int i = 0; i < 100; i++)
+  inForLargeCalls
+  .read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+//Test for one million read and write operations
+Assert.assertEquals("Mismatch in read operations", 100,
+statistics.getReadOps());
+Assert.assertEquals("Mismatch in write operations", 100,
+statistics.getWriteOps());
+
+outForLargeOperations.close();
+//Validating if actually "test" is being written million times in 
largeOperationsFile
 
 Review comment:
   this comment needs to change


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file 
system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391786077
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
 
 Review comment:
   just saw this typo. will fix it in the next patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet opened a new pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet opened a new pull request #1881: HADOOP-16910 Adding file system 
counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881
 
 
   - Write_ops
   - Read_ops
   - Bytes_written (already updated)
   - Bytes_Read (already updated)
   
   Change-Id: I77349fdd158babd66df665713201fa9c8606f191
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet closed pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet closed pull request #1881: HADOOP-16910 Adding file system counters 
in ABFS
URL: https://github.com/apache/hadoop/pull/1881
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15430) hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2020-03-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057991#comment-17057991
 ] 

Hudson commented on HADOOP-15430:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18046 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18046/])
HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with (github: 
rev 0a9b3c98b160f2cf825ea6e8422ce093a8ae7cb3)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathMetadataDynamoDBTranslation.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsShell.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMiscOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java


> hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
> --
>
> Key: HADOOP-15430
> URL: https://issues.apache.org/jira/browse/HADOOP-15430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15430-001.patch, HADOOP-15430-002.patch, 
> HADOOP-15430-003.patch
>
>
> if you call {{hadoop fs -mkdir -p path/}} on the command line with a path 
> ending in "/", you get a DDB error "An AttributeValue may not contain an 
> empty string"
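
The failure mode described above invites a normalization step: strip the trailing slash before the path is turned into DynamoDB attributes, so no attribute value ends up empty. The sketch below is illustrative only; the class and method names are assumptions, not the actual S3Guard fix.

```java
// Illustrative sketch (not the actual S3Guard code): remove a trailing "/"
// before deriving DynamoDB key attributes, since DynamoDB rejects empty
// string attribute values. The root path "/" is preserved as-is.
public class PathSlashNormalizer {
    static String stripTrailingSlash(String path) {
        if (path.length() > 1 && path.endsWith("/")) {
            return path.substring(0, path.length() - 1);
        }
        return path;
    }
}
```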



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2020-03-12 Thread GitBox
steveloughran merged pull request #1646: HADOOP-15430. hadoop fs -mkdir -p 
path-ending-with-slash/ fails with s3guard
URL: https://github.com/apache/hadoop/pull/1646
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391613932
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero read and write operation
+Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+statistics.reset();
+outForOneOperation.write(testReadWriteOps.getBytes());
+FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+inForOneCall.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+//Test for one read and write operation
+Assert.assertEquals("one read operation is performed", 1,
+statistics.getReadOps());
+Assert.assertEquals("one write operation is performed", 1,
+statistics.getWriteOps());
+
+outForOneOperation.close();
+//validating Content of file
+Assert.assertEquals("one operation Content validation", true,
+validateContent(fs, smallOperaionsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = fs.create(largeOperationsFile);
+statistics.reset();
+
+StringBuilder largeOperationsValidationString = new StringBuilder();
+for (int i = 0; i < 100; i++) {
+  outForLargeOperations.write(testReadWriteOps.getBytes());
+
+  //Creating the String for content Validation
+  largeOperationsValidationString.append(testReadWriteOps);
+}
+
+FSDataInputStream inForLargeCalls = fs.open(largeOperationsFile);
+
+for (int i = 0; i < 100; i++)
+  inForLargeCalls
+  .read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+//Test for one million read and write operations
+Assert.assertEquals("Large read operations", 100,
 
 Review comment:
   yes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mehakmeet commented on a change in pull request #1881: HADOOP-16910 Adding file 
system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391613039
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero read and write operation
+Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+statistics.reset();
+outForOneOperation.write(testReadWriteOps.getBytes());
+FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+inForOneCall.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+//Test for one read and write operation
+Assert.assertEquals("one read operation is performed", 1,
+statistics.getReadOps());
+Assert.assertEquals("one write operation is performed", 1,
+statistics.getWriteOps());
+
+outForOneOperation.close();
+//validating Content of file
+Assert.assertEquals("one operation Content validation", true,
+validateContent(fs, smallOperaionsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = fs.create(largeOperationsFile);
+statistics.reset();
+
+StringBuilder largeOperationsValidationString = new StringBuilder();
+for (int i = 0; i < 100; i++) {
+  outForLargeOperations.write(testReadWriteOps.getBytes());
+
+  //Creating the String for content Validation
+  largeOperationsValidationString.append(testReadWriteOps);
+}
+
+FSDataInputStream inForLargeCalls = fs.open(largeOperationsFile);
+
+for (int i = 0; i < 100; i++)
+  inForLargeCalls
+  .read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+//Test for one million read and write operations
+Assert.assertEquals("Large read operations", 100,
 
 Review comment:
   Will both 1 op error and 1000 ops error have same message ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391602770
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero read and write operation
+Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+statistics.reset();
+outForOneOperation.write(testReadWriteOps.getBytes());
+FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+inForOneCall.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+//Test for one read and write operation
+Assert.assertEquals("one read operation is performed", 1,
+statistics.getReadOps());
+Assert.assertEquals("one write operation is performed", 1,
+statistics.getWriteOps());
+
+outForOneOperation.close();
+//validating Content of file
+Assert.assertEquals("one operation Content validation", true,
 
 Review comment:
   Mismatch in file content.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391602594
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperaionsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero read and write operation
+Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+statistics.reset();
+outForOneOperation.write(testReadWriteOps.getBytes());
+FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+inForOneCall.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+//Test for one read and write operation
+Assert.assertEquals("one read operation is performed", 1,
+statistics.getReadOps());
+Assert.assertEquals("one write operation is performed", 1,
+statistics.getWriteOps());
+
+outForOneOperation.close();
+//validating Content of file
+Assert.assertEquals("one operation Content validation", true,
+validateContent(fs, smallOperaionsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = fs.create(largeOperationsFile);
+statistics.reset();
+
+StringBuilder largeOperationsValidationString = new StringBuilder();
+for (int i = 0; i < 100; i++) {
+  outForLargeOperations.write(testReadWriteOps.getBytes());
+
+  //Creating the String for content Validation
+  largeOperationsValidationString.append(testReadWriteOps);
+}
+
+FSDataInputStream inForLargeCalls = fs.open(largeOperationsFile);
+
+for (int i = 0; i < 100; i++)
+  inForLargeCalls
+  .read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+//Test for one million read and write operations
+Assert.assertEquals("Large read operations", 100,
 
 Review comment:
   Same here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391602306
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+    describe("Test to see correct population of read and write operations in "
+        + "Abfs");
+
+    final AzureBlobFileSystem fs = getFileSystem();
+    Path smallOperaionsFile = new Path("testOneReadWriteOps");
+    Path largeOperationsFile = new Path("testLargeReadWriteOps");
+    FileSystem.Statistics statistics = fs.getFsStatistics();
+    String testReadWriteOps = "test this";
+    statistics.reset();
+
+    //Test for zero read and write operation
+    Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+    Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+    FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+    statistics.reset();
+    outForOneOperation.write(testReadWriteOps.getBytes());
+    FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+    inForOneCall.read(testReadWriteOps.getBytes(), 0,
+        testReadWriteOps.getBytes().length);
+
+    //Test for one read and write operation
+    Assert.assertEquals("one read operation is performed", 1,
 
 Review comment:
   Change to "Mismatch in read operation count."


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
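The review above concerns how ABFS streams feed Hadoop's FileSystem.Statistics read/write operation counters: the test resets the counters, performs one operation, and asserts the count. A toy sketch of that counting pattern follows; the class and method names are illustrative stand-ins, not the actual ABFS or Hadoop API.

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a per-filesystem statistics object that stream
// wrappers increment on each read() and write() call.
public class StreamOpsCounter {
  private final AtomicLong readOps = new AtomicLong();
  private final AtomicLong writeOps = new AtomicLong();

  public void incrementReadOps(int n)  { readOps.addAndGet(n); }
  public void incrementWriteOps(int n) { writeOps.addAndGet(n); }
  public long getReadOps()  { return readOps.get(); }
  public long getWriteOps() { return writeOps.get(); }

  // Tests reset the counters so each assertion sees only the
  // operations performed since the reset.
  public void reset() { readOps.set(0); writeOps.set(0); }

  public static void main(String[] args) {
    StreamOpsCounter stats = new StreamOpsCounter();
    stats.reset();
    stats.incrementWriteOps(1); // one write() call
    stats.incrementReadOps(1);  // one read() call
    System.out.println(stats.getReadOps() + " " + stats.getWriteOps()); // prints "1 1"
  }
}
```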



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-12 Thread GitBox
mukund-thakur commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r391602306
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * {@link AbfsInputStream#incrementReadOps()}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+    describe("Test to see correct population of read and write operations in "
+        + "Abfs");
+
+    final AzureBlobFileSystem fs = getFileSystem();
+    Path smallOperaionsFile = new Path("testOneReadWriteOps");
+    Path largeOperationsFile = new Path("testLargeReadWriteOps");
+    FileSystem.Statistics statistics = fs.getFsStatistics();
+    String testReadWriteOps = "test this";
+    statistics.reset();
+
+    //Test for zero read and write operation
+    Assert.assertEquals("Zero read operations", 0, statistics.getReadOps());
+    Assert.assertEquals("Zero write operations", 0, statistics.getWriteOps());
+
+    FSDataOutputStream outForOneOperation = fs.create(smallOperaionsFile);
+    statistics.reset();
+    outForOneOperation.write(testReadWriteOps.getBytes());
+    FSDataInputStream inForOneCall = fs.open(smallOperaionsFile);
+    inForOneCall.read(testReadWriteOps.getBytes(), 0,
+        testReadWriteOps.getBytes().length);
+
+    //Test for one read and write operation
+    Assert.assertEquals("one read operation is performed", 1,
 
 Review comment:
   Mismatch in read operation count.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1894: HADOOP-16819 Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-03-12 Thread GitBox
hadoop-yetus commented on issue #1894: HADOOP-16819 Possible inconsistent state 
of AbstractDelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1894#issuecomment-598093114
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  25m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 46s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  5s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  the patch passed  |
   | -1 :x: |  findbugs  |   2m 15s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 39s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 132m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.currentKey;
 locked 88% of time  Unsynchronized access at 
AbstractDelegationTokenSecretManager.java:88% of time  Unsynchronized access at 
AbstractDelegationTokenSecretManager.java:[line 366] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1894/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1894 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bdfd99a28b6c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b931f3 |
   | Default Java | 1.8.0_242 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1894/1/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1894/1/testReport/ |
   | Max. process+thread count | 3205 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1894/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16921) NPE in s3a byte buffer block upload

2020-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057746#comment-17057746
 ] 

Steve Loughran commented on HADOOP-16921:
-

{code}
 java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:338)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:454)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:366)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:70)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:415)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:387)
    at org.apache.hadoop.hive.common.FileUtils.copy(FileUtils.java:666)
    at org.apache.hadoop.hive.common.FileUtils.copy(FileUtils.java:633)
    at org.apache.hadoop.hive.ql.metadata.Hive.mvFile(Hive.java:4436)
    at org.apache.hadoop.hive.ql.metadata.Hive.access$100(Hive.java:221)
    at org.apache.hadoop.hive.ql.metadata.Hive$5.call(Hive.java:4296)
    ... 5 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3ADataBlocks$ByteBufferBlockFactory$ByteBufferBlock$ByteBufferInputStream.position(S3ADataBlocks.java:708)
    at org.apache.hadoop.fs.s3a.S3ADataBlocks$ByteBufferBlockFactory$ByteBufferBlock$ByteBufferInputStream.mark(S3ADataBlocks.java:721)
    at com.amazonaws.internal.SdkFilterInputStream.mark(SdkFilterInputStream.java:114)
    at com.amazonaws.internal.SdkFilterInputStream.mark(SdkFilterInputStream.java:114)
    at com.amazonaws.util.LengthCheckInputStream.mark(LengthCheckInputStream.java:116)
    at com.amazonaws.internal.SdkFilterInputStream.mark(SdkFilterInputStream.java:114)
    at com.amazonaws.internal.SdkFilterInputStream.mark(SdkFilterInputStream.java:114)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1067)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
    at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:389)
    at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:5800)
    at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1789)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1749)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:2136)
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:462)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:315)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:311)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:286)
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:150)
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:460)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:439)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
    ... 3 more
{code}

> NPE in s3a byte buffer block upload
> ---
>
> Key: HADOOP-16921
> URL: https://issues.apache.org/jira/browse/HADOOP-16921
> Project: Hadoop Common
>  Issue Type: Sub-task
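The trace shows ByteBufferInputStream.position() being reached via mark() during a request retry, apparently after the block's ByteBuffer was released. A minimal sketch of that failure mode and a guard for it, assuming the buffer field is nulled on close; the class name and shape below are illustrative, not the actual S3ADataBlocks code.

```java
import java.nio.ByteBuffer;

// Stand-in for a stream backed by a ByteBuffer that is freed on close.
public class ByteBufferStreamSketch {
  private ByteBuffer buffer = ByteBuffer.allocate(16);

  // Without the null check, calling this after close() would throw the
  // NPE seen in the trace; with it, the caller gets a clear error.
  public synchronized int position() {
    ByteBuffer b = buffer;
    if (b == null) {
      throw new IllegalStateException("stream closed");
    }
    return b.position();
  }

  public synchronized void close() {
    buffer = null; // frees the buffer; later position() must not NPE
  }

  public static void main(String[] args) {
    ByteBufferStreamSketch s = new ByteBufferStreamSketch();
    System.out.println(s.position()); // prints 0
    s.close();
    try {
      s.position();
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage()); // prints "stream closed"
    }
  }
}
```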

[jira] [Assigned] (HADOOP-16920) ABFS: Make list page size configurable

2020-03-12 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16920:
-

Assignee: Bilahari T H

> ABFS: Make list page size configurable
> --
>
> Key: HADOOP-16920
> URL: https://issues.apache.org/jira/browse/HADOOP-16920
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Make list page size configurable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16922) ABFS: Change in User-Agent header

2020-03-12 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16922:
-

Assignee: Bilahari T H

> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Move the configured prefix from the end of the User-Agent value to right 
> after the driver version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16922) ABFS: Change in User-Agent header

2020-03-12 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-16922:
-

 Summary: ABFS: Change in User-Agent header
 Key: HADOOP-16922
 URL: https://issues.apache.org/jira/browse/HADOOP-16922
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bilahari T H


Move the configured prefix from the end of the User-Agent value to right after 
the driver version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16819) Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-03-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057689#comment-17057689
 ] 

Hankó Gergely commented on HADOOP-16819:


I think the findbugs warning is acceptable as the comment explicitly says "Log 
must be invoked outside the lock on 'this'".

It can probably be solved by introducing a semaphore dedicated to currentKey 
access, so we don't have to lock on "this", but that would be a bigger effort. 
What do you think?

> Possible inconsistent state of AbstractDelegationTokenSecretManager
> ---
>
> Key: HADOOP-16819
> URL: https://issues.apache.org/jira/browse/HADOOP-16819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.3.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
> Attachments: HADOOP-16819.001.patch
>
>
> [AbstractDelegationTokenSecretManager.updateCurrentKey|https://github.com/apache/hadoop/blob/581072a8f04f7568d3560f105fd1988d3acc9e54/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java#L360]
>  increments the current key id and creates the new delegation key in two 
> distinct synchronized blocks.
> This means that other threads can see the class in an *inconsistent state, 
> where the key for the current key id doesn't exist (yet)*.
> For example the following method sometimes returns null when the token 
> remover thread is between the two synchronized blocks:
> {noformat}
> @Override
> public DelegationKey getCurrentKey() {
>   return getDelegationKey(getCurrentKeyId());
> }{noformat}
>  
> Also it is possible that updateCurrentKey is called from multiple threads at 
> the same time so *distinct keys can be generated with the same key id*.
>  
> This issue is suspected to be the cause of the intermittent failure of  
> [TestLlapSignerImpl.testSigning|https://github.com/apache/hive/blob/3c0705eaf5121c7b61f2dbe9db9545c3926f26f1/llap-server/src/test/org/apache/hadoop/hive/llap/security/TestLlapSignerImpl.java#L195]
>  - HIVE-22621.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
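The two-synchronized-blocks shape described in the report, and the single-critical-section fix, can be sketched as follows; this is a minimal stand-in with illustrative names, not the actual AbstractDelegationTokenSecretManager code.

```java
import java.util.HashMap;
import java.util.Map;

public class CurrentKeyRace {
  private final Map<Integer, String> allKeys = new HashMap<>();
  private int currentId = 0;

  // Buggy shape: the id bump and the key storage are two distinct
  // critical sections, so a reader scheduled between them sees an id
  // with no matching key (getCurrentKey() returns null).
  public void updateCurrentKeyRacy(String key) {
    int id;
    synchronized (this) { id = ++currentId; }     // window opens here
    synchronized (this) { allKeys.put(id, key); } // window closes here
  }

  // Fixed shape: both steps happen atomically under one lock, so no
  // thread can observe an id before its key exists.
  public synchronized void updateCurrentKeyAtomic(String key) {
    allKeys.put(++currentId, key);
  }

  public synchronized String getCurrentKey() {
    return allKeys.get(currentId);
  }

  public static void main(String[] args) {
    CurrentKeyRace m = new CurrentKeyRace();
    m.updateCurrentKeyAtomic("key-1");
    System.out.println(m.getCurrentKey()); // prints key-1
  }
}
```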



[jira] [Commented] (HADOOP-16819) Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-03-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057653#comment-17057653
 ] 

Hankó Gergely commented on HADOOP-16819:


I'm checking the findbugs issue.

> Possible inconsistent state of AbstractDelegationTokenSecretManager
> ---
>
> Key: HADOOP-16819
> URL: https://issues.apache.org/jira/browse/HADOOP-16819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.3.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
> Attachments: HADOOP-16819.001.patch
>
>
> [AbstractDelegationTokenSecretManager.updateCurrentKey|https://github.com/apache/hadoop/blob/581072a8f04f7568d3560f105fd1988d3acc9e54/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java#L360]
>  increments the current key id and creates the new delegation key in two 
> distinct synchronized blocks.
> This means that other threads can see the class in an *inconsistent state, 
> where the key for the current key id doesn't exist (yet)*.
> For example the following method sometimes returns null when the token 
> remover thread is between the two synchronized blocks:
> {noformat}
> @Override
> public DelegationKey getCurrentKey() {
>   return getDelegationKey(getCurrentKeyId());
> }{noformat}
>  
> Also it is possible that updateCurrentKey is called from multiple threads at 
> the same time so *distinct keys can be generated with the same key id*.
>  
> This issue is suspected to be the cause of the intermittent failure of  
> [TestLlapSignerImpl.testSigning|https://github.com/apache/hive/blob/3c0705eaf5121c7b61f2dbe9db9545c3926f26f1/llap-server/src/test/org/apache/hadoop/hive/llap/security/TestLlapSignerImpl.java#L195]
>  - HIVE-22621.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16819) Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-03-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057650#comment-17057650
 ] 

Hankó Gergely commented on HADOOP-16819:


I've submitted the PR, but couldn't run the tests because I still don't have 
the necessary AWS keys.

> Possible inconsistent state of AbstractDelegationTokenSecretManager
> ---
>
> Key: HADOOP-16819
> URL: https://issues.apache.org/jira/browse/HADOOP-16819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.3.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
> Attachments: HADOOP-16819.001.patch
>
>
> [AbstractDelegationTokenSecretManager.updateCurrentKey|https://github.com/apache/hadoop/blob/581072a8f04f7568d3560f105fd1988d3acc9e54/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java#L360]
>  increments the current key id and creates the new delegation key in two 
> distinct synchronized blocks.
> This means that other threads can see the class in an *inconsistent state, 
> where the key for the current key id doesn't exist (yet)*.
> For example the following method sometimes returns null when the token 
> remover thread is between the two synchronized blocks:
> {noformat}
> @Override
> public DelegationKey getCurrentKey() {
>   return getDelegationKey(getCurrentKeyId());
> }{noformat}
>  
> Also it is possible that updateCurrentKey is called from multiple threads at 
> the same time so *distinct keys can be generated with the same key id*.
>  
> This issue is suspected to be the cause of the intermittent failure of  
> [TestLlapSignerImpl.testSigning|https://github.com/apache/hive/blob/3c0705eaf5121c7b61f2dbe9db9545c3926f26f1/llap-server/src/test/org/apache/hadoop/hive/llap/security/TestLlapSignerImpl.java#L195]
>  - HIVE-22621.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1893: HADOOP-16920 ABFS: Make list page size configurable

2020-03-12 Thread GitBox
hadoop-yetus commented on issue #1893: HADOOP-16920 ABFS: Make list page size 
configurable
URL: https://github.com/apache/hadoop/pull/1893#issuecomment-598037247
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 14s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  67m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1893 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c742fe3766f0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b931f3 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/2/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ghanko opened a new pull request #1894: HADOOP-16819 Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-03-12 Thread GitBox
ghanko opened a new pull request #1894: HADOOP-16819 Possible inconsistent 
state of AbstractDelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1894
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16854) ABFS: Tune the logic calculating max concurrent request count

2020-03-12 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16854:
-

Assignee: Bilahari T H  (was: Sneha Vijayarajan)

> ABFS: Tune the logic calculating max concurrent request count
> -
>
> Key: HADOOP-16854
> URL: https://issues.apache.org/jira/browse/HADOOP-16854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>
> Currently, in environments where memory is restricted, the max concurrent 
> request count logic can require so many buffers that execution is blocked, 
> leading to OutOfMemory exceptions. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
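One way to bound buffer use, in the spirit of the tuning this issue asks for, is a counting semaphore sized to the memory budget so writers block instead of allocating unboundedly. A minimal sketch under that assumption; the names are illustrative, not the actual ABFS implementation.

```java
import java.util.concurrent.Semaphore;

public class BoundedBufferPool {
  private final Semaphore permits;
  private final int bufferSize;

  public BoundedBufferPool(int maxBuffers, int bufferSize) {
    this.permits = new Semaphore(maxBuffers);
    this.bufferSize = bufferSize;
  }

  // Blocks until a slot is free, so total buffer memory stays bounded
  // by maxBuffers * bufferSize instead of growing with request count.
  public byte[] acquire() throws InterruptedException {
    permits.acquire();
    return new byte[bufferSize];
  }

  // Caller returns its slot once the buffer's upload has completed.
  public void release() {
    permits.release();
  }

  public int availableSlots() {
    return permits.availablePermits();
  }

  public static void main(String[] args) throws InterruptedException {
    BoundedBufferPool pool = new BoundedBufferPool(4, 8 * 1024 * 1024);
    byte[] buf = pool.acquire();
    System.out.println(pool.availableSlots()); // prints 3
    pool.release();
    System.out.println(pool.availableSlots()); // prints 4
  }
}
```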



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move et

2020-03-12 Thread GitBox
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598022742
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | -1 :x: |  javac  |   1m  3s |  hadoop-hdfs-project_hadoop-hdfs generated 6 
new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 6 new + 339 unchanged - 0 fixed = 345 total (was 339)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs-project_hadoop-hdfs generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 108m  2s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 181m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23bd4bf9dade 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b931f3 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/testReport/ |
   | Max. process+thread count | 2876 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org