[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765744#comment-16765744
 ] 

Yiqun Lin commented on HADOOP-16097:


Thanks for addressing the comments, [~xkrogen]! The patch looks great now.
{quote}The priority level computation for the users is started from low 
priority levels since they will be most common.
{quote}
This sentence is from the comment in 
{{DecayRpcScheduler#computePriorityLevel}}. I just want let users know the 
computation for the priority level. But it's ok to mention this or not. Current 
description is enough I think.

Rendered the doc in my local, the imgae cannot be rendered well,  
 Can we update {{(../resources/images/faircallqueue-overview.png)}} to 
{{(./images/faircallqueue-overview.png)}}? For the hadoop site page, the latter 
should be the right path, the former only makes sense for the github page.

BTW, [~xkrogen], can you attach a screen shot of the rendered page once this 
addressed?

Follow steps:
 * cd hadoop-common-project/hadoop-common
 * run mvn site:site
 * open the page 
hadoop-common-project/hadoop-common/target/site/FairCallQueue.html 

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-11 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16098:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-02-11 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765489#comment-16765489
 ] 

Ben Roling commented on HADOOP-16085:
-

I commented on HADOOP-15625:

https://issues.apache.org/jira/browse/HADOOP-15625?focusedCommentId=16765486=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16765486

 

As mentioned there, I have a patch for that issue.  I'm having trouble 
uploading it for some reason though.  It is as though I don't have permission.  
The attachment area of the Jira doesn't look like it does on this issue where I 
AM allowed to upload.

In that patch I elected to just use a vanilla IOException for the exception 
type.  Alternative suggestions are welcome.

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Ben Roling
>Priority: Major
> Attachments: HADOOP-16085_002.patch, HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Michael Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765488#comment-16765488
 ] 

Michael Miller commented on HADOOP-11223:
-

The above failure seems unrelated.  The test I added is passing:
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 s 
- in org.apache.hadoop.core.conf.TestUnmodifiableConfiguration


> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-11 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765486#comment-16765486
 ] 

Ben Roling commented on HADOOP-15625:
-

Hmm, it seems like I cannot upload a patch to this issue for some reason?

The changes can be seen here until I figure out how to attach the patch:
https://github.com/ben-roling/hadoop/commit/2e7e42e6663ebd16161a1c636682f44753fcaac7

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't noticed it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-11 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765485#comment-16765485
 ] 

Ben Roling commented on HADOOP-15625:
-

I've got a patch that modifies only S3AInputStream and adds no new remote 
calls.  Additionally, it modifies the 
ITestS3AFailureHandling.testReadFileChanged() test case, which was already 
testing the concurrent read and write scenario.  Previously the test had the 
expectation of EOF on a read after a seek backwards past the new length of the 
file with a concurrent write of a smaller file.  In my patch the test expects 
an IOException after any seek backwards.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't noticed it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765484#comment-16765484
 ] 

Hudson commented on HADOOP-16098:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15931 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15931/])
HADOOP-16098. Fix javadoc warnings in hadoop-aws. Contributed by (iwasakims: 
rev 6c999fe4b0181720c8e55be8388bd592196c8c87)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/TemporaryAWSCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/RoleModel.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOpContext.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/InternalSelectConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DirListingMetadata.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentialBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractS3ATokenIdentifier.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenIdentifier.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenBinding.java


> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765446#comment-16765446
 ] 

Masatake Iwasaki commented on HADOOP-16098:
---

Thanks, [~eyang]. Committing this.

> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16106) hadoop-aws project javadoc does not compile

2019-02-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16106.

Resolution: Duplicate

This is a duplicate of HADOOP-16098.

> hadoop-aws project javadoc does not compile
> ---
>
> Key: HADOOP-16106
> URL: https://issues.apache.org/jira/browse/HADOOP-16106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Reporter: Eric Yang
>Assignee: Steve Loughran
>Priority: Trivial
>
> Apache Hadoop Amazon Web Services support maven javadoc doesn't build 
> properly because two non-html friendly characters in javadoc comments.
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java:31:
>  error: bad HTML entity
> [ERROR]  * Please don't refer to these outside of this module & its tests.
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:115:
>  error: bad use of '>'
> [ERROR]* @return a value >= 0
> [ERROR]  ^
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765439#comment-16765439
 ] 

Eric Yang commented on HADOOP-16098:


+1 looks good.

> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765429#comment-16765429
 ] 

Hadoop QA commented on HADOOP-11223:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 80 new + 0 unchanged - 0 fixed = 80 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-11223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958301/HADOOP-11223.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 585a8c81c3e0 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ca4e46a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15913/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15913/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15913/testReport/ |
| Max. process+thread count | 1380 (vs. 

[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765409#comment-16765409
 ] 

Erik Krogen commented on HADOOP-16097:
--

Thanks [~jojochuang]! I had to develop my own understanding and make a write up 
to explain it to some internal partners so I thought it would make sense to 
publish it here as well. Glad that you're getting use out of it.

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765394#comment-16765394
 ] 

Wei-Chiu Chuang commented on HADOOP-16097:
--

Thanks for the doc. Honestly it is deeper than what I understand about 
FairCallQueue, so really appreciate & enjoyed the read.

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Michael Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Miller updated HADOOP-11223:

Attachment: HADOOP-11223.003.patch

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765334#comment-16765334
 ] 

Hadoop QA commented on HADOOP-11223:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
2s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  2s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 79 new + 0 unchanged - 0 fixed = 79 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
34s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-11223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958287/HADOOP-11223.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3b7ff4e64023 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0ceb1b7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15912/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15912/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15912/artifact/out/patch-compile-root.txt
 |
| checkstyle | 

[jira] [Commented] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765326#comment-16765326
 ] 

Hadoop QA commented on HADOOP-16105:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16105 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958286/HADOOP-16105-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 95044a1e3bd6 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0ceb1b7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15911/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15911/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15911/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765324#comment-16765324
 ] 

Da Zhou commented on HADOOP-16068:
--

Thanks for adding this, great documentation and tests!
There is one typo in AbfsDtFetcher.java: line 34: S3A should be replaced with 
ABFS.

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16106) hadoop-aws project javadoc does not compile

2019-02-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765318#comment-16765318
 ] 

Masatake Iwasaki commented on HADOOP-16106:
---

[~eyang] and [~ste...@apache.org], I submitted a javadoc fix patch in 
HADOOP-16098. Is this duplicate of that?

> hadoop-aws project javadoc does not compile
> ---
>
> Key: HADOOP-16106
> URL: https://issues.apache.org/jira/browse/HADOOP-16106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Reporter: Eric Yang
>Assignee: Steve Loughran
>Priority: Trivial
>
> Apache Hadoop Amazon Web Services support maven javadoc doesn't build 
> properly because two non-html friendly characters in javadoc comments.
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java:31:
>  error: bad HTML entity
> [ERROR]  * Please don't refer to these outside of this module & its tests.
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:115:
>  error: bad use of '>'
> [ERROR]* @return a value >= 0
> [ERROR]  ^
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765319#comment-16765319
 ] 

Hadoop QA commented on HADOOP-16105:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16105 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958285/HADOOP-16105-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5a2af070414d 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0ceb1b7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15910/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15910/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15910/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[jira] [Comment Edited] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765255#comment-16765255
 ] 

Steve Loughran edited comment on HADOOP-16105 at 2/11/19 6:45 PM:
--

Patch 002: patch 001 with commented out lines in the new test case actually 
deleted.
Test run against Azure ireland; all well


was (Author: ste...@apache.org):
Patch 002: patch 001 with commented out lines in the new test case actually 
deleted.
Test run in progress

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile(); S3A to implement S3 Select through this API.

2019-02-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765266#comment-16765266
 ] 

Eric Yang commented on HADOOP-15229:


Some trivial fix in javadoc is required.

> Add FileSystem builder-based openFile() API to match createFile(); S3A to 
> implement S3 Select through this API.
> ---
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, 
> HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, 
> HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch, 
> HADOOP-15229-018.patch, HADOOP-15229-019.patch, HADOOP-15229-020.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16106) hadoop-aws project javadoc does not compile

2019-02-11 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16106:
--

 Summary: hadoop-aws project javadoc does not compile
 Key: HADOOP-16106
 URL: https://issues.apache.org/jira/browse/HADOOP-16106
 Project: Hadoop Common
  Issue Type: Bug
  Components: hadoop-aws
Reporter: Eric Yang
Assignee: Steve Loughran


Apache Hadoop Amazon Web Services support maven javadoc doesn't build properly 
because two non-html friendly characters in javadoc comments.

{code}
[ERROR] 
/home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java:31:
 error: bad HTML entity
[ERROR]  * Please don't refer to these outside of this module & its tests.
[ERROR]   ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:115:
 error: bad use of '>'
[ERROR]* @return a value >= 0
[ERROR]  ^
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Attachment: HADOOP-16105-002.patch

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Michael Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Miller updated HADOOP-11223:

Attachment: HADOOP-11223.002.patch

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Status: Patch Available  (was: Open)

Patch 002: patch 001 with commented out lines in the new test case actually 
deleted.
Test run in progress

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.1.2, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Status: Open  (was: Patch Available)

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.1.2, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread David McGinnis (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765243#comment-16765243
 ] 

David McGinnis commented on HADOOP-16105:
-

Thanks, [~steve_l], this is pretty much it based on the patch i supplied to 
y'all and my investigation. One thing I want to point out is that the check for 
the existing container isn't actually the thing that fails, but instead the 
call to downloadAttributes on the container. I've tried giving the SAS key all 
permissions on the container, but it still fails, so i suspect that you have to 
give it the SAS token for the user to have permissions to do that, which is 
just not reasonable anywhere that security matters.

Also, you should be able to test more easily by just having the REST service 
return a container SAS, and ensure that works. The current code fails in this 
case.

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Status: Patch Available  (was: Open)

patch 001. Tested, Azure ireland.

Tests run: 633, Failures: 0, Errors: 0, Skipped: 108

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.1.2, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Attachment: HADOOP-16105-001.patch

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765232#comment-16765232
 ] 

Steve Loughran commented on HADOOP-16105:
-

OK, I understand this now. I'd thought that it wasn't actually connecting, 
which would be a major issue that people should have seen. It's a bit more 
subtle

The {{connectingUsingSAS}} flag tells the client not to check for the existence 
of the bucket, on the basis that the granted account *may not have the 
permissions to probe for that state*. 

Which is why if you connect using SAS outside of secure mode, the existence 
check is disabled.

In secure mode tests, if your a/c has the right permissions, everything works, 
so the fact that the property was null doesn't surface. 

You need the following conditions
# secure mode = true
# client doesn't have permissions to call 
{{container.downloadAttributes(getInstrumentedContext())}}

Setting the {{connectingUsingSAS}} property to true disables the existence 
checks on the first file IO operation. 

I've got the patch for this, along with a test which does a best effort attempt 
to replicate the problem by (a) deleting the container and (b) expecting the 
fact that the container is not found to surface a bit later in the write file 
codepath.

It also
 * stops the test setup deletion of the account credentials in secure mode 
(they're needed to generate the local SAS key)
 * Adds logging @ debug as to what is going on in the secure mode codepath
 * {{LocalSASKeyGeneratorImpl}} to raise a SASKeyGenerationException if the 
account key is null/empty. This means that the error message is more meaningful 
than an IllegalArgumentException about base-64 decoding. 


> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HADOOP-16104:
-

Assignee: Masatake Iwasaki

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Major
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765231#comment-16765231
 ] 

Masatake Iwasaki commented on HADOOP-16104:
---

I'm going to write a patch since I'm setting up azure account and trying it by 
chance.

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Description: If you run WASB in secure mode, it doesn't set 
{{connectingUsingSAS}} to true, which can break things  (was: If you run WASB 
in secure mode, it doesn't try to connect with the sas key, because it doesn't 
set {{connectingUsingSAS}} to true)

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Summary: WASB in secure mode does not set connectingUsingSAS  (was: WASB in 
secure mode not using SAS key)

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> If you run WASB in secure mode, it doesn't try to connect with the sas key, 
> because it doesn't set {{connectingUsingSAS}} to true



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765213#comment-16765213
 ] 

Eric Yang commented on HADOOP-16100:


[~salamani] Most compression libraries are not big endian compliant.  If I 
recall correctly in porting Hadoop to IBM Power7 several years ago, and there 
were some changes required in compression codec like lz0 and snappy to get 
proper checksum.  Try to switch compression codecs, and see if it makes any 
difference.

> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log, 
> hadoop_test_2.9.2_log.txt
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765182#comment-16765182
 ] 

Hadoop QA commented on HADOOP-16097:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16097 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958274/HADOOP-16097.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  xml  |
| uname | Linux 98d5784f0752 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 73b67b2 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15909/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765189#comment-16765189
 ] 

Allen Wittenauer commented on HADOOP-16100:
---

bq. Are these are expected failures?

It looks like you've got native code enabled. Try building without native code. 
 My hunch is that it will work because most of the native code is not POSIX 
compliant and features some incredibly unsafe code. (There's an exchange in 
JIRA somewhere between an Oracle nee Sun kernel developer and a Hadoop PMC 
member that really pushes this point home.)



> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log, 
> hadoop_test_2.9.2_log.txt
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-02-11 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765148#comment-16765148
 ] 

Elek, Marton commented on HADOOP-16091:
---

Thanks to share the technical details [~eyang].
 # For me the file based activation is not enough. I wouldn't like to build a 
new docker image with each build. I think it should be activated with explicit 
profile declaration.
 # With this approach for the docker based builds (eg. release builds, jenkins 
builds) we need docker-in-docker base image or we need to map the docker.sock 
from outside to inside.
 # My questions are still open: I think the we need a method to 
upgrade/modify/create images for existing releases, especially:
 ## adding security fixes to existing, released images
 ## creating new images for older releases
 # I think the containers are more reproducible if they are based on released 
tar files. 

 

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-02-11 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-16091:
--
Description: 
This is proposed by [~eyang] in 
[this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
 mailing thread.
{quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
using Apache Organization. By browsing Apache github mirror. There are only 7 
projects using a separate repository for docker image build. Popular projects 
official images are not from Apache organization, such as zookeeper, tomcat, 
httpd. We may not disrupt what other Apache projects are doing, but it looks 
like inline build process is widely employed by majority of projects such as 
Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
chaotic for Apache as a whole. However, Hadoop community can decide what is 
best for Hadoop. My preference is to remove ozone from source tree naming, if 
Ozone is intended to be subproject of Hadoop for long period of time. This 
enables Hadoop community to host docker images for various subproject without 
having to check out several source tree to trigger a grand build. However, 
inline build process seems more popular than separated process. Hence, I highly 
recommend making docker build inline if possible.
{quote}
The main challenges are also discussed in the thread:
{code:java}
3. Technically it would be possible to add the Dockerfile to the source
tree and publish the docker image together with the release by the
release manager but it's also problematic:

{code}
a) there is no easy way to stage the images for the vote
 c) it couldn't be flagged as automated on dockerhub
 d) It couldn't support the critical updates.
 * Updating existing images (for example in case of an ssl bug, rebuild
 all the existing images with exactly the same payload but updated base
 image/os environment)

 * Creating image for older releases (We would like to provide images,
 for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
 with different versions).

{code:java}
 {code}
The a) can be solved (as [~eyang] suggested) with using a personal docker image 
during the vote and publish it to the dockerhub after the vote (in case the 
permission can be set by the INFRA)

Note: based on LEGAL-270 and linked discussion both approaches (inline build 
process / external build process) are compatible with the apache release.

Note: HDDS-851 and HADOOP-14898 contains more information about these problems.

  was:
This is proposed by [~eyang] in 
[this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
 mailing thread.

bq. 1, 3. There are 38 Apache projects hosting docker images on Docker hub 
using Apache Organization.  By browsing Apache github mirror.  There are only 7 
projects using a separate repository for docker image build.  Popular projects 
official images are not from Apache organization, such as zookeeper, tomcat, 
httpd.  We may not disrupt what other Apache projects are doing, but it looks 
like inline build process is widely employed by majority of projects such as 
Nifi, Brooklyn, thrift, karaf, syncope and others.  The situation seems a bit 
chaotic for Apache as a whole.  However, Hadoop community can decide what is 
best for Hadoop.  My preference is to remove ozone from source tree naming, if 
Ozone is intended to be subproject of Hadoop for long period of time.  This 
enables Hadoop community to host docker images for various subproject without 
having to check out several source tree to trigger a grand build.  However, 
inline build process seems more popular than separated process.  Hence, I 
highly recommend making docker build inline if possible.

The main challenges are also discussed in the thread:

{code}
3. Technically it would be possible to add the Dockerfile to the source
tree and publish the docker image together with the release by the
release manager but it's also problematic:

{code}
  a) there is no easy way to stage the images for the vote
  c) it couldn't be flagged as automated on dockerhub
  d) It couldn't support the critical updates.


 * Updating existing images (for example in case of an ssl bug, rebuild
all the existing images with exactly the same payload but updated base
image/os environment)

 * Creating image for older releases (We would like to provide images,
for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
with different versions).


{code}

The a) can be solved (as [~eyang] suggested) with using a personal docker image 
during the vote and publish it to the dockerhub after the vote (in case the 
permission can be set by the INFRA)

Note: based on LEGAL-270 and linked discussion both approaches (inline build 
process / external 

[jira] [Comment Edited] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-02-11 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765128#comment-16765128
 ] 

Elek, Marton edited comment on HADOOP-16092 at 2/11/19 4:30 PM:


Thanks [~eyang] the comment.

It works together with HADOOP-16091. But it's independent from HADOOP-16091: 
not all the containers can be created from maven. Eg. we have build/base-runner 
images.

Until now there was a strong limitation on dockerhub. If you defined a 
branch->dockertag mapping it was not possible to use the backref of the regular 
expression.

Let's say we have the following branch-tag mapping for the same repository:
||Branch name||container tag||
|hadooprunner-(.*)|{sourceref}|
|hadoop-(.*)|{sourceref}|

With these settings, a hadoop-2.7.0 branch was used to create a docker image 
with tag hadoop-2.7.0. And instead of apache/hadoop:2.7.0 we got an 
apache/hadoop:hadoop-2.7.0. As a workaround we started to use fixed mapping 
without regular expression but it made very hard to support multiple versions 
(that's the reason why we have only hadoop:2 and hadoop:3).

I tested it again recently, and with the latest dockerhub version there is no 
such limitation fortunately. Now we can also use the \{\1} ref.
||Branch name||container tag||
|hadooprunner-(.*)|{\1}|
|hadoop-(.*)|{\1}|

Long story short, with this improvement we can move all the ozone images (from 
hadoop-docker-ozone repo) and hadoop images (from 3 branches of hadoop repo) to 
the same dedicated repository (hadoop-docker).

hadoop-docker.it repository is not yet created, but if you create it with self 
service (can be created only by members), I would be happy to move the existing 
images to there under this jira.


was (Author: elek):
Thanks [~eyang] the comment.

It works together with HADOOP-16091. But it's independent from HADOOP-16091: 
not all the containers can be created from maven. Eg. we have build/base-runner 
images.

Until now there was a strong limitation on dockerhub. If you defined a 
branch->dockertag mapping it was not possible to use the backref of the regular 
expression.

Let's say we have the following branch-tag mapping for the same repository:
||Branch name||container tag||
|hadooprunner-(.*)|{sourceref}|
|ozonerunner-(.*)|{sourceref}|
|hadoop-(.*)|{sourceref}|
|ozone-(.*)|{sourceref}|

With these settings, a hadoop-2.7.0 branch was used to create a docker image 
with tag hadoop-2.7.0. And instead of apache/hadoop:2.7.0 we got an 
apache/hadoop:hadoop-2.7.0. As a workaround we started to use fixed mapping 
without regular expression but it made very hard to support multiple versions 
(that's the reason why we have only hadoop:2 and hadoop:3).

I tested it again recently, and with the latest dockerhub version there is no 
such limitation fortunately. Now we can also use the \{\1} ref.
||Branch name||container tag||final tag||
|hadooprunner-(.*)|{\1}|
|ozonerunner-(.*)|{\1}|
|hadoop-(.*)|{\1}|
|ozone-(.*)|{\1}|

Long story short, with this improvement we can move all the ozone images (from 
hadoop-docker-ozone repo) and hadoop images (from 3 branches of hadoop repo) to 
the same dedicated repository (hadoop-docker).

hadoop-docker repository is not yet created, but if you create it with self 
service, I would be happy to move the existing images to there under this jira.

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this 

[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-02-11 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765128#comment-16765128
 ] 

Elek, Marton commented on HADOOP-16092:
---

Thanks [~eyang] the comment.

It works together with HADOOP-16091. But it's independent from HADOOP-16091: 
not all the containers can be created from maven. Eg. we have build/base-runner 
images.

Until now there was a strong limitation on dockerhub. If you defined a 
branch->dockertag mapping it was not possible to use the backref of the regular 
expression.

Let's say we have the following branch-tag mapping for the same repository:
||Branch name||container tag||
|hadooprunner-(.*)|{sourceref}|
|ozonerunner-(.*)|{sourceref}|
|hadoop-(.*)|{sourceref}|
|ozone-(.*)|{sourceref}|

With these settings, a hadoop-2.7.0 branch was used to create a docker image 
with tag hadoop-2.7.0. And instead of apache/hadoop:2.7.0 we got an 
apache/hadoop:hadoop-2.7.0. As a workaround we started to use fixed mapping 
without regular expression but it made very hard to support multiple versions 
(that's the reason why we have only hadoop:2 and hadoop:3).

I tested it again recently, and with the latest dockerhub version there is no 
such limitation fortunately. Now we can also use the \{\1} ref.
||Branch name||container tag||final tag||
|hadooprunner-(.*)|{\1}|
|ozonerunner-(.*)|{\1}|
|hadoop-(.*)|{\1}|
|ozone-(.*)|{\1}|

Long story short, with this improvement we can move all the ozone images (from 
hadoop-docker-ozone repo) and hadoop images (from 3 branches of hadoop repo) to 
the same dedicated repository (hadoop-docker).

hadoop-docker repository is not yet created, but if you create it with self 
service, I would be happy to move the existing images to there under this jira.

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16097:
-
Attachment: HADOOP-16097.001.patch

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-11 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765126#comment-16765126
 ] 

Erik Krogen commented on HADOOP-16097:
--

Thanks for the review [~linyiqun]!

I don't quite understand this sentence suggestion:
{quote}
The priority level computation for the users is started from low priority 
levels since they will be most common.
{quote}
Can you explain? Why does the user need to be aware of this?

Re: "a RpcMultiplexer" vs. "an RpcMultiplexer," "an" is the correct form here 
because it is pronounced using a vowel sound ("arr-pee-see"). See this 
[explanation|https://blog.apastyle.org/apastyle/2012/04/using-a-or-an-with-acronyms-and-abbreviations.html].

Re: Line 86, "be proceed" is not correct. Proceed is a verb by itself, there is 
no need for "be."

Thanks for reminding me about identity provider, I completely forgot that it is 
configurable.

Uploading patch v001, this time with the image. 

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16097.000.patch, HADOOP-16097.001.patch, 
> faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16096) HADOOP-15281/distcp -Xdirect needs to use commons-logging on 3.1

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765116#comment-16765116
 ] 

Steve Loughran commented on HADOOP-16096:
-

thx for committing this Eric. sorry I broke things. Moral: more due diligence 
on distcp before committing

> HADOOP-15281/distcp -Xdirect needs to use commons-logging on 3.1
> 
>
> Key: HADOOP-16096
> URL: https://issues.apache.org/jira/browse/HADOOP-16096
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.3
>Reporter: Eric Payne
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.3
>
> Attachments: HADOOP-15281-branch-3.1-001.patch
>
>
> HADOOP-15281 breaks the branch-3.1 build when building with java 1.8.
> {code:title="RetriableFileCopyCommand.java"}
> LOG.info("Copying {} to {}", source.getPath(), target);
> {code}
> Multiple lines have this error:
> {panel:title="Build Failure"}
> [ERROR] 
> hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java:[121,8]
>  no suitable method found for 
> info(java.lang.String,org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path)
> [ERROR] method org.apache.commons.logging.Log.info(java.lang.Object) is 
> not applicable
> [ERROR]   (actual and formal argument lists differ in length)
> [ERROR] method 
> org.apache.commons.logging.Log.info(java.lang.Object,java.lang.Throwable) is 
> not applicable
> [ERROR]   (actual and formal argument lists differ in length)
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765111#comment-16765111
 ] 

Steve Loughran commented on HADOOP-16057:
-

oh, that I can believe. Will revert, retest and resubmit

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765109#comment-16765109
 ] 

Allen Wittenauer commented on HADOOP-16100:
---

There were attempts made to fix a lot of these issues prior to 3.0.  Some of 
the ones that made it in were reverted because the PMC went back on their 
compatibility promises, usually under the guise of 2.x->3.x rolling upgrade 
(which was never supposed to be supported anyway). (A great example is the 
token format, which is currently raw java bytes.  Totally not portable in any 
way, shape, or form and absolutely requires Java to be in the pipeline 
somewhere to process it.)

These issues are pretty much doomed to never get fixed since they require 
breaking compatibility.  That interferes with the vendors bottom line and a 
quick PMC vote is all it takes to stop any sort of forward progress on these 
issues.  (Again, see some of the issues reverted pre-3.0)

> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log, 
> hadoop_test_2.9.2_log.txt
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-11 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765091#comment-16765091
 ] 

Adam Antal commented on HADOOP-16057:
-

Uhm, It's quite strange, but it looks like not patch v2 from HADOOP-15843, but 
v1 got committed to trunk. [~ste...@apache.org], [~gabor.bota], could you guys 
take a look at it, if I'm not missing anything here?

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16105) WASB in secure mode not using SAS key

2019-02-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16105:
---

 Summary: WASB in secure mode not using SAS key
 Key: HADOOP-16105
 URL: https://issues.apache.org/jira/browse/HADOOP-16105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.1.2, 2.8.5
Reporter: Steve Loughran
Assignee: Steve Loughran


If you run WASB in secure mode, it doesn't try to connect with the sas key, 
because it doesn't set {{connectingUsingSAS}} to true



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765064#comment-16765064
 ] 

Steve Loughran commented on HADOOP-16100:
-

ok, if there are protobuf and leveldbnji issues, then its not going to work. 
And we can't update protobuf in branch-2 because protobuf updates break all 
compiled code across all apps.

I think the strategy here is "can this be fixed in branch-3", which is where 
people with big-endian systems need to come and help. A big problem in the past 
there has been: regardless of the arguments as to which is better (big endian, 
obviously), little-endian is where all the code runs. We can't handle changes 
to the codebase which slow down everyone's work...changes will take care

> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log, 
> hadoop_test_2.9.2_log.txt
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765059#comment-16765059
 ] 

Steve Loughran commented on HADOOP-15999:
-

-1
not quite yet I'm afraid, though we are close. And this reminds me that without 
resilience to rename/delete failures auth mode still has issues.

h3. Docs

It's going to need an update to the s3guard.md doc, something about "S3Guard 
and out-of-band IO".

h3. S3AFileSystem

I think we should add another metric/counter of S3Guard out-of-band events. 
That way changes aren't just silently swallowed. It also allows our unit tests 
to call getStorageStatistics to measure what S3AFileSystem has seen.

* L2307 "S3aFileStatus is null" is correct from the view of the code, but 
meaningless for users.And it doesn't include the actual path. Better

"Failed to find file {}. Either it is not yet visible, or it has been deleted"

* L2313 nit: Space between if and (


h3. S3Guard.java

L239: put a space between if and ()
L241. Don't call toString() on the status, let SLF4J do it on demand. Also, fix 
typo on "wtih"

h3. ITestS3GuardOutOfBandOperations

* check your import ordering
* L46: use a  section round the bits of the javadoc you want to stay 
formatted all the way through to javadocs. If you are really ambitious: build 
the javadocs to see
* L210: you can use ContractTestUtils.writeTextFile() to write that text file.

* L233: overwriteFile shoud use the path() method to return a path for testing. 
These are (a) in a subtree unique to each JVM testing in parallel and (b) 
cleaned up in test teardown.

I worry about the switching between modes of the filesystem. Would it be better 
to create a new FS instance which has been switched to no store, e..g a 
guardedFS and a rawFS? That'd make clearer what is happening.

* L312. swap order of assertEquals() args
* L333. again, use path()
* L379 expectExceptionWhenReading(). Use LambdaTestUtils.intercept, and have 
the closure actually return the contents of the file

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, 
> out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765041#comment-16765041
 ] 

Hadoop QA commented on HADOOP-16068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 9 
new + 7 unchanged - 0 fixed = 16 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-tools_hadoop-azure generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958249/HADOOP-16068-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 38d1a1c42e30 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e7d1ae5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15908/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 

[jira] [Commented] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.fs.impl

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765031#comment-16765031
 ] 

Steve Loughran commented on HADOOP-16093:
-

# can we keep "DurationInfo" with the same name? As well as reducing the size 
of the diff, it avoids the problem that a lot of the changed lines will get too 
long.

# Both it and OperationDuration will need a javadoc I'm afraid.
# check your IDE's settings on imports. All the org.apache entries should come 
in their own section...in this patch they're being moved up into that previous 
section.

It's going to need some tests now, isn't it? Something like

* create one, assert that its completed duration is always >= 0
* create one, sleep for a few seconds, assert that the duration is now > 0
* close it twice, verify all is well
* pass in null as the message to log. What should we do there? Ignore? Fail?

> Move DurationInfo from hadoop-aws to hadoop-common org.apache.fs.impl
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo
> org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765020#comment-16765020
 ] 

Steve Loughran commented on HADOOP-11223:
-

you're down as a contributor so can assign issues & submit patches -attach the 
patch and hit the "submit patch" button for Yetus to trigger the test run

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764982#comment-16764982
 ] 

Steve Loughran commented on HADOOP-16068:
-

HADOOP-16068 patch 004

* custom oath provider can declare a UA field which is then used in the ABFS 
client UA header (for audit logs when the login credentials have a more complex 
provenance)
* fix for HADOOP-16103 (more UGI resets)
* testing: Azure Amsterdam; found HADOOP-16103. more resetUGIs *seems* to fix 
this
* not done: tests to verify that the classic lifecycle works. The stub class 
for testing is there.


I'm going to do another iteration here and pause. I think the code is OK, but 
it is hitting the limits of testability —primarily because the extension points 
for DT issuing rely on kerberos being running, which confuses other tests.

# need some DT lifecycle tests which verify that the "classic" lifecycle still 
works; 
# To test the login extension point, without doing a full HADOOP-14556-style 
token marshalling plugin, this will need a proxy to the standard OAuth login.

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Open  (was: Patch Available)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Attachment: HADOOP-16068-004.patch

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-02-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764973#comment-16764973
 ] 

Ayush Saxena commented on HADOOP-16059:
---

Thanx [~jojochuang] for the review!!!

Regarding the performance.
Actually at the datanode side. (SASLParticipant) neither server nor client 
cache is there. So whenever a client connects to a datanode the SASLClient 
shall be created for any operation say read, write whatever the operation that 
a client needs and SASLServer at the dn side.  The number of dn's usually tends 
to be huge. This is where this cache seems to be useful. 

> Use SASL Factories Cache to Improve Performance
> ---
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch, HADOOP-16059-03.patch, HADOOP-16059-04.patch
>
>
> SASL Client factories can be cached and SASL Server Factories and SASL Client 
> Factories can be together extended at SaslParticipant  to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764967#comment-16764967
 ] 

Steve Loughran commented on HADOOP-16104:
-

Needs something in all tests which create a test setup, e.g. 
{code}
Assume.assumeFalse("Azure test accounts are namespace enabled",
conf.getBoolean(
TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT,
false));
{code}

or make this some base check in setup "supportsNamespaces()" which is true in 
all non-local tests.

Something is also needed in {{NativeAzureFileSystemContract}} to do the same 
things

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764959#comment-16764959
 ] 

Steve Loughran commented on HADOOP-16104:
-

Stack
{code}
at java.lang.Thread.run(Thread.java:745)

[ERROR] 
testRedoRenameFolderInFolderListing(org.apache.hadoop.fs.azure.ITestNativeAzureFSPageBlobLive)
  Time elapsed: 0.033 s  <<< ERROR!
com.microsoft.azure.storage.StorageException: Blob API is not yet supported for 
hierarchical namespace accounts.
at 
com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87)
at 
com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
at 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
at 
com.microsoft.azure.storage.blob.CloudBlobContainer.exists(CloudBlobContainer.java:769)
at 
com.microsoft.azure.storage.blob.CloudBlobContainer.createIfNotExists(CloudBlobContainer.java:379)
at 
com.microsoft.azure.storage.blob.CloudBlobContainer.createIfNotExists(CloudBlobContainer.java:326)
at 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.create(AzureBlobStorageTestAccount.java:584)
at 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.create(AzureBlobStorageTestAccount.java:554)
at 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.create(AzureBlobStorageTestAccount.java:486)
at 
org.apache.hadoop.fs.azure.ITestNativeAzureFSPageBlobLive.createTestAccount(ITestNativeAzureFSPageBlobLive.java:41)
at 
org.apache.hadoop.fs.azure.AbstractWasbTestBase.setUp(AbstractWasbTestBase.java:54)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystemBaseTest.setUp(NativeAzureFileSystemBaseTest.java:73)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
{code}

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764960#comment-16764960
 ] 

Steve Loughran commented on HADOOP-16104:
-

Proposed: test setup to skip these tests when namespaces are enabled
{code}

  fs.azure.test.namespace.enabled
  true

{code}

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16104:
---

 Summary: Wasb tests to downgrade to skip when test a/c is 
namespace enabled
 Key: HADOOP-16104
 URL: https://issues.apache.org/jira/browse/HADOOP-16104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


When you run the abfs tests with a namespace-enabled accounts, all the wasb 
tests fail "don't yet work with namespace-enabled accounts". This should be 
downgraded to a test skip, somehow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764955#comment-16764955
 ] 

Steve Loughran commented on HADOOP-16068:
-

This triggers a failure elsewhere in the code, on account of the UGI thinking 
the user is still logged in

{code}
[ERROR] 
testNoOpWhenSettingSuperUserAsdentity(org.apache.hadoop.fs.azurebfs.ITestAbfsIdentityTransformer)
  Time elapsed: 0 s  <<< ERROR!
java.io.IOException: There is no primary group for UGI 
alice/localh...@example.com (auth:KERBEROS)
at 
org.apache.hadoop.security.UserGroupInformation.getPrimaryGroupName(UserGroupInformation.java:1601)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsIdentityTransformer.(ITestAbfsIdentityTransformer.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}


> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16103:

Priority: Minor  (was: Major)

> Failure of ABFS test ITestAbfsIdentityTransformer
> -
>
> Key: HADOOP-16103
> URL: https://issues.apache.org/jira/browse/HADOOP-16103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
> is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764954#comment-16764954
 ] 

Steve Loughran commented on HADOOP-16103:
-

{code}
[ERROR] 
testNoOpWhenSettingSuperUserAsdentity(org.apache.hadoop.fs.azurebfs.ITestAbfsIdentityTransformer)
  Time elapsed: 0 s  <<< ERROR!
java.io.IOException: There is no primary group for UGI 
alice/localh...@example.com (auth:KERBEROS)
at 
org.apache.hadoop.security.UserGroupInformation.getPrimaryGroupName(UserGroupInformation.java:1601)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsIdentityTransformer.(ITestAbfsIdentityTransformer.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}


> Failure of ABFS test ITestAbfsIdentityTransformer
> -
>
> Key: HADOOP-16103
> URL: https://issues.apache.org/jira/browse/HADOOP-16103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
> is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764953#comment-16764953
 ] 

Steve Loughran commented on HADOOP-16103:
-

Actually, I think this is being triggered not by the HADOOP-15954 patch, but 
the new DT one, which is logging in a user kerberized. If the test runs in the 
same JVM things are failing. UGI setup issues (a) the other tests must clean up 
and (b) this test ought to reset UGI for safety

> Failure of ABFS test ITestAbfsIdentityTransformer
> -
>
> Key: HADOOP-16103
> URL: https://issues.apache.org/jira/browse/HADOOP-16103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
> is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-11 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16103:
---

Assignee: Steve Loughran

> Failure of ABFS test ITestAbfsIdentityTransformer
> -
>
> Key: HADOOP-16103
> URL: https://issues.apache.org/jira/browse/HADOOP-16103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
> is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16103:
---

 Summary: Failure of ABFS test ITestAbfsIdentityTransformer
 Key: HADOOP-16103
 URL: https://issues.apache.org/jira/browse/HADOOP-16103
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread salamani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated HADOOP-16100:
--
Attachment: hadoop_test_2.9.2_log.txt

> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log, 
> hadoop_test_2.9.2_log.txt
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16100) checksum errors observed while executing examples on big-endian system

2019-02-11 Thread salamani (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764912#comment-16764912
 ] 

salamani commented on HADOOP-16100:
---

I ran the hadoop-2.9.2 Observed the following test failures as in logs 
attached. Are these are expected failures?

> checksum errors observed while executing examples on big-endian system
> --
>
> Key: HADOOP-16100
> URL: https://issues.apache.org/jira/browse/HADOOP-16100
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: salamani
>Priority: Major
> Attachments: hadoop_checksumerror_on_bigendian.log
>
>
> I have observed the checksum error with big-endian system while executing an 
> example with hadoop 2.7.1/2.7.7.
> I have also tried by building the hadoop 2.7.7 on big-endian system. Build 
> was successful, however still observed the checksum error while executing an 
> example and testcases. Please find the logs for more information
> can you help us know how to go about resolving this checksum issues or are 
> they known issues for big endian platform



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek closed pull request #472: HDDS-1017. Use distributed tracing the indentify performance problems in Ozone.

2019-02-11 Thread GitBox
elek closed pull request #472: HDDS-1017. Use distributed tracing the indentify 
performance problems in Ozone.
URL: https://github.com/apache/hadoop/pull/472
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16101) Use lighter-weight alternatives to innerGetFileStatus where possible

2019-02-11 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764855#comment-16764855
 ] 

Steve Loughran commented on HADOOP-16101:
-

I've thought about not doing the HEAD first -see HADOOP-13712

We've been constrained by the expectation that "if the file doesn't exist, 
open() must fail". With the new openFile() and its future<> response, we have a 
bit more leeway. 

h3. now may be the time to change the spec there and say "if you open a file 
with openFile(), failures may not surface until the stream is read()". 

FWIW, even though getFileStatus is doing three checks, in the successful path 
"the file is present", only that initial HEAD is used. The failure case does do 
three calls, with the last two essentially choosing between FNFE and some path 
is directory exception (which may be FNFE anyway, as some filesystems do). 
Because its the failure path, optimising that is probably less beneficial than 
saving 200ms on every file open, which could be done if we purge that initial 
HEAD and go straight for the GET on read. 

> Use lighter-weight alternatives to innerGetFileStatus where possible
> 
>
> Key: HADOOP-16101
> URL: https://issues.apache.org/jira/browse/HADOOP-16101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Priority: Major
>
> Discussion in HADOOP-15999 highlighted the heaviness of a full 
> innerGetFileStatus call, where many usages of it may need a lighter weight 
> fileExists, etc. Let's investigate usage of innerGetFileStatus and slim it 
> down where possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14580) Sample mount table in ViewFs.md contains capitalization error

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764789#comment-16764789
 ] 

Hadoop QA commented on HADOOP-14580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 38s{color} 
| {color:red} HADOOP-14580 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14580 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878029/HADOOP-14580.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15907/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Sample mount table in ViewFs.md contains capitalization error
> -
>
> Key: HADOOP-14580
> URL: https://issues.apache.org/jira/browse/HADOOP-14580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Todd Owen
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HADOOP-14580.001.patch
>
>
> In this file: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
> The rest of the document refers to {{clusterX}}, but the sample mount table 
> uses the incorrect capitalisation {{ClusterX}}.
> It took me a little while to figure out why my experiments were not working. 
> This is compounded by the fact that errors from viewfs are not very 
> descriptive, e.g.:
> {noformat}
> $ hdfs dfs -ls /
> ls: viewfs://clusterX/
> $ hdfs dfs -cat /xyz
> cat: viewfs://clusterX/
> {noformat}
> Suggested fix is to replace the 5 instances of {{ClusterX}} with 
> {{clusterX}}. And ideally improve the error message to something like:
> {noformat}
> $ hdfs dfs -ls /
> ls: viewfs://clusterX/: No such property fs.viewfs.mounttable.clusterX.link.*
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14580) Sample mount table in ViewFs.md contains capitalization error

2019-02-11 Thread Todd Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764787#comment-16764787
 ] 

Todd Owen commented on HADOOP-14580:


I reported this issue. A patch was available pretty quickly, but I see it was 
never merged. What happens now?

> Sample mount table in ViewFs.md contains capitalization error
> -
>
> Key: HADOOP-14580
> URL: https://issues.apache.org/jira/browse/HADOOP-14580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Todd Owen
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HADOOP-14580.001.patch
>
>
> In this file: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
> The rest of the document refers to {{clusterX}}, but the sample mount table 
> uses the incorrect capitalisation {{ClusterX}}.
> It took me a little while to figure out why my experiments were not working. 
> This is compounded by the fact that errors from viewfs are not very 
> descriptive, e.g.:
> {noformat}
> $ hdfs dfs -ls /
> ls: viewfs://clusterX/
> $ hdfs dfs -cat /xyz
> cat: viewfs://clusterX/
> {noformat}
> Suggested fix is to replace the 5 instances of {{ClusterX}} with 
> {{clusterX}}. And ideally improve the error message to something like:
> {noformat}
> $ hdfs dfs -ls /
> ls: viewfs://clusterX/: No such property fs.viewfs.mounttable.clusterX.link.*
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16102) FilterFileSystem does not implement getScheme

2019-02-11 Thread Todd Owen (JIRA)
Todd Owen created HADOOP-16102:
--

 Summary: FilterFileSystem does not implement getScheme
 Key: HADOOP-16102
 URL: https://issues.apache.org/jira/browse/HADOOP-16102
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Todd Owen


Calling {{getScheme}} on a {{FilterFileSystem}} throws 
{{UnsupportedOperationException}}, which is the default provided by the base 
class. Instead, it should return the scheme of the underlying ("filtered") 
filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16067) Incorrect Format Debug Statement KMSACLs

2019-02-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764734#comment-16764734
 ] 

Hadoop QA commented on HADOOP-16067:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958209/HADOOP-16067.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a6f5ea43cdb 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a141458 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15906/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15906/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Incorrect Format Debug