[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748333#comment-16748333
 ] 

Hadoop QA commented on HADOOP-16059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} root: The patch generated 0 new + 338 unchanged - 6 
fixed = 338 total (was 344) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
56s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955711/HADOOP-16059-02.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux db967a4b2216 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 

[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748314#comment-16748314
 ] 

Hudson commented on HADOOP-15787:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15797 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15797/])
HADOOP-15787. [JDK11] TestIPC.testRTEDuringConnectionSetup fails. (aajisaka: 
rev a463cf75a0ab1f0dbb8cfa16c39a4e698bc1a625)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15787:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~zvenczel] for the contribution!

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>






[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748301#comment-16748301
 ] 

Akira Ajisaka commented on HADOOP-15787:


+1

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>






[jira] [Updated] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-01-21 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16059:
--
Attachment: HADOOP-16059-02.patch

> Use SASL Factories Cache to Improve Performance
> 
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch
>
>
> SASL client factories can be cached, and SASL server factories and SASL client 
> factories can be extended together at SaslParticipant to improve performance.
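
As a hedged sketch of the caching idea described above (illustrative names only, 
not the actual patch): enumerate the registered factories once and reuse them, 
instead of having Sasl.createSaslClient() walk the security-provider list on 
every new connection.
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.Map;

import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslClientFactory;
import javax.security.sasl.SaslException;

// Illustrative sketch: cache the SASL client factories once instead of
// re-enumerating security providers for every new connection.
public final class CachedSaslClientFactories {

  private static volatile List<SaslClientFactory> factories;

  private CachedSaslClientFactories() {
  }

  private static List<SaslClientFactory> getFactories() {
    if (factories == null) {
      synchronized (CachedSaslClientFactories.class) {
        if (factories == null) {
          List<SaslClientFactory> list = new ArrayList<>();
          Enumeration<SaslClientFactory> e = Sasl.getSaslClientFactories();
          while (e.hasMoreElements()) {
            list.add(e.nextElement());
          }
          // Note: providers registered after this point are not picked up.
          factories = Collections.unmodifiableList(list);
        }
      }
    }
    return factories;
  }

  public static SaslClient createSaslClient(String[] mechanisms,
      String authorizationId, String protocol, String serverName,
      Map<String, ?> props, CallbackHandler cbh) throws SaslException {
    for (SaslClientFactory factory : getFactories()) {
      SaslClient client = factory.createSaslClient(mechanisms,
          authorizationId, protocol, serverName, props, cbh);
      if (client != null) {
        return client;  // first factory that can handle a mechanism wins
      }
    }
    return null;
  }
}
{code}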






[GitHub] hadoop-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-21 Thread GitBox
hadoop-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-456203158
 
 
   (that's stevel; it's mapping my Cloudera email to the elephant right now. We 
need to get it its own email address)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] hadoop-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-21 Thread GitBox
hadoop-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-456202996
 
 
   testing a reply by mail to see what happens
   
   On Sat, Jan 19, 2019 at 8:55 PM Allen Wittenauer wrote:
   
   > FYI: I have removed the Apache Yetus credentials from the job that was
   > used for testing. The Hadoop community will need to provide their own.
   >
   > —
   > You are receiving this because you were mentioned.
   > Reply to this email directly, view it on GitHub, or mute the thread.
   >
   





[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748223#comment-16748223
 ] 

Steve Loughran commented on HADOOP-14556:
-

For people watching this, I've stuck up a video showing distcp collecting DTs 
and using them to do a cross-bucket copy in a test cluster which doesn't have 
any credentials: 
https://www.youtube.com/watch?v=rpyLkDEzIxI

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556-028.patch, HADOOP-14556-029.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
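
As a hedged usage sketch of the flow above, using only the stable FileSystem 
API (the bucket name and renewer are placeholders; the S3A-specific binding is 
what this patch adds):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.token.Token;

public class S3ADelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path bucket = new Path("s3a://example-bucket/");
    FileSystem fs = bucket.getFileSystem(conf);
    // An authenticated client requests a token; with this feature, S3A
    // would marshal the short-lived session secret & id into the token
    // so jobs can authenticate without the submitter's long-lived keys.
    Token<?> token = fs.getDelegationToken("yarn");
    System.out.println(token == null
        ? "filesystem issued no delegation token"
        : "token kind: " + token.getKind());
  }
}
{code}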






[GitHub] bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249573167
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   Yes, it is updated; it is in the diff. Thanks for the update.





[GitHub] bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249572853
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   So, you want to update it?





[GitHub] bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249571835
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   Even if we continue, almost all the tests will fail, as we run with 
replication STANDARD, which needs at least 3 datanodes. But I am fine with it 
for now; even if we continue, we can improve it later.





[GitHub] bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
bharatviswa504 commented on a change in pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249572760
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   Yes, but now I have a question: where is the value SECONDS incremented? We 
set SECONDS=0, and after that, in the loop, I don't see it being modified.





[jira] [Commented] (HADOOP-16058) S3A tests to include Terasort

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748176#comment-16748176
 ] 

Hadoop QA commented on HADOOP-16058:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 41s{color} 
| {color:red} root generated 1 new + 1489 unchanged - 0 fixed = 1490 total (was 
1489) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 27s{color} | {color:orange} root: The patch generated 10 new + 77 unchanged 
- 0 fixed = 87 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 33 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 49s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
50s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}230m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955669/HADOOP-16058-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | 

[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748145#comment-16748145
 ] 

Gabor Bota commented on HADOOP-15229:
-

Hey [~ste...@apache.org], thanks for working on this. I've just reviewed the 
patch, but I haven't played around with the API yet. It seems like a very 
useful feature; I can't wait to see how other components can use what it 
provides.

 

*Two issues I've found in s3_select.md*

1. The following text:
{noformat}
+Most of the Hadoop RecordReaders automatically choose a decompressor
+based on the extension of the source file. This causes problems when[...]
{noformat}
appears in the docs twice; maybe the second occurrence is not in the right 
place. It shows up after the subtitle
{noformat}
 +### How to disable the GZip decompressor when querying Gzipped source files. 
{noformat}
and also under
{noformat}
 +### How to Disable Text File Splitting
{noformat}
Is this on purpose? (In {{+### How to Disable Text File Splitting}} it seems to 
start mid-sentence.)

2. Under the subtitle
{noformat}
+### "mid-query" failures on large datasets
{noformat}
there's a sentence without an ending:
{noformat}
+may only surface partway through the read. This does not result in
{noformat}

 

*A question on the feature itself and compatibility with object stores*
This won't work with third-party object stores with an S3 interface, like Ceph 
radosgw, which does not support this feature. If this feature is enabled 
against an object store where it is not supported, what is the expected 
behavior? (The configuration should be fs.s3a.select.enabled=false in that 
case, if I'm correct.) I can test this if needed. 

 

Tested against us-west-2. No new failures (other than what's discussed under 
HADOOP-16057); there were a few timeouts, but they cleared after a rerun.

> Add FileSystem builder-based openFile() API to match createFile() + S3 Select
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, 
> HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, 
> HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch, 
> HADOOP-15229-018.patch, HADOOP-15229-019.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS; it's to pass in the fadvise policy for 
> working with object stores, where the decision between a full GET with a TCP 
> abort on seek and a series of smaller GETs is fundamental: the wrong option 
> can cost you minutes. S3A and Azure both have adaptive policies now 
> (switching on the first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method, 
> ideally with as much code reuse as possible.
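
As a hedged sketch of what the builder-based open could look like for a 
columnar reader (the fadvise option key is taken from the issue text above; 
the exact method and option names are whatever the patch finally settles on):
{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileBuilderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("s3a://example-bucket/data.orc");
    FileSystem fs = path.getFileSystem(conf);
    // Mirror of the createFile() builder: collect options first, then
    // build() performs the (possibly asynchronous) open.
    CompletableFuture<FSDataInputStream> future = fs.openFile(path)
        .opt("fs.input.fadvise", "random")  // columnar-format read pattern
        .build();
    try (FSDataInputStream in = future.get()) {
      in.seek(4096);  // random IO without a full GET + TCP abort per seek
    }
  }
}
{code}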






[jira] [Commented] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748120#comment-16748120
 ] 

Hadoop QA commented on HADOOP-16062:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 253 unchanged - 2 fixed = 258 total (was 255) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955676/HADOOP-16062.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 376644573e04 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2e2508b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15820/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15820/testReport/ |
| Max. process+thread count | 1471 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15820/console |
| Powered by | 

[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748111#comment-16748111
 ] 

Adam Antal commented on HADOOP-16057:
-

Sorry to hear that. I'd be glad if we could have this reverted, and I can check 
on it next week if it's not too urgent, [~ste...@apache.org], [~gabor.bota].

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}






[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748093#comment-16748093
 ] 

Kai Xie commented on HADOOP-16049:
--

All right, Jenkins is good! But I assume it can still have timeout problems 
sometimes, unless we upgrade the kernel/JDK version to match trunk's.

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, 
> HADOOP-16049-branch-2-005.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for a chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not 
> been backported to branch-2 yet
>  
>  
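
A hedged sketch of the fix described above: keep the positioned read, but 
advance the position for every chunk rather than only for APPEND. The method 
signature is a simplified stand-in for the DistCp internals; trunk instead 
dropped positioned reads entirely in HADOOP-15292.
{code:java}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.FSDataInputStream;

class ChunkCopySketch {
  /** Simplified stand-in for RetriableFileCopyCommand.copyBytes. */
  long copyBytes(FSDataInputStream inStream, OutputStream outStream,
      long sourceOffset, int bufferSize) throws IOException {
    byte[] buf = new byte[bufferSize];
    long totalBytesRead = 0;
    int bytesRead = inStream.read(sourceOffset, buf, 0, buf.length);
    while (bytesRead >= 0) {
      totalBytesRead += bytesRead;
      outStream.write(buf, 0, bytesRead);
      // The missing update: advance the position for every chunk, not
      // only when action == FileAction.APPEND, so a chunk starting at
      // offset != 0 no longer re-reads its first bytes forever.
      sourceOffset += bytesRead;
      bytesRead = inStream.read(sourceOffset, buf, 0, buf.length);
    }
    return totalBytesRead;
  }
}
{code}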






[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748091#comment-16748091
 ] 

Hadoop QA commented on HADOOP-16059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} root: The patch generated 0 new + 338 unchanged - 6 
fixed = 338 total (was 344) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 36s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
48s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955659/HADOOP-16059-02.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 22dfdd6ddfca 4.4.0-139-generic 

[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748077#comment-16748077
 ] 

Hadoop QA commented on HADOOP-15787:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955661/HADOOP-15787.02.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9fa7f7a29b46 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / abde1e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15818/testReport/ |
| Max. process+thread count | 1449 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15818/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
>

[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748061#comment-16748061
 ] 

Hadoop QA commented on HADOOP-15787:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 75 unchanged - 0 fixed = 76 total (was 75) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955660/HADOOP-15787.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af12bb429c9c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / abde1e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15816/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15816/testReport/ |
| Max. process+thread count | 1749 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15816/console |
| Powered by | 

[jira] [Updated] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HADOOP-16062:
--
Status: Patch Available  (was: Open)

> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: HADOOP-16062.01.patch, jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748045#comment-16748045
 ] 

Steve Loughran commented on HADOOP-15961:
-

BTW, looking at this patch, I think the progress call could go in the inner 
loop:
{code}
...
UploadPartResult partResult = writeOperations.uploadPart(part);
offset += uploadPartSize;
parts.add(partResult.getPartETag());
progress.progress();   // HERE
}
{code}

That way, it'll be invoked after every 32 or 64 MB part upload. If the task 
created 4 GB of data, then without the per-part callbacks you could still hit a 
timeout just from the time to upload; a progress event per block eliminates 
this problem.
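
In case it helps whoever picks this up, here's a minimal, self-contained sketch 
of the per-part progress idea. The {{PartUploader}} interface and the 
part-count/size parameters are hypothetical stand-ins for the real 
{{writeOperations}} wiring, not the actual S3A classes:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.util.Progressable;

public class PerPartProgressSketch {

  /** Hypothetical stand-in for the real writeOperations helper. */
  interface PartUploader {
    /** Upload one part; returns the part's etag. */
    String uploadPart(int partNumber, long offset, long length);
  }

  static List<String> uploadAllParts(PartUploader uploader,
      Progressable progress, int partCount, long uploadPartSize) {
    List<String> etags = new ArrayList<>();
    long offset = 0;
    for (int partNumber = 1; partNumber <= partCount; partNumber++) {
      etags.add(uploader.uploadPart(partNumber, offset, uploadPartSize));
      offset += uploadPartSize;
      // liveness callback after every part: a multi-GB upload now pings
      // the framework once per part rather than once at the very end
      progress.progress();
    }
    return etags;
  }
}
{code}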

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the localfs that the upload takes longer than the timeout.
> It should call progress after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748044#comment-16748044
 ] 

Zoltan Haindrich commented on HADOOP-16062:
---

This could probably be done a few different ways... I've attached a patch which 
does the above by setting the registry to null when it's disabled.
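
To make review easier, here's a minimal sketch of that approach. The system 
property name and the {{register}} helper are hypothetical, not what the 
attached patch actually uses:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class ReloadRegistrySketch {

  // hypothetical toggle; the real patch may use a different mechanism
  private static final boolean REGISTRY_ENABLED =
      !Boolean.getBoolean("sketch.conf.disable.reload.registry");

  // null when disabled, so nothing is ever retained in the map at all
  private static final Map<Object, Object> REGISTRY = REGISTRY_ENABLED
      ? Collections.synchronizedMap(new WeakHashMap<>())
      : null;

  static void register(Object conf) {
    if (REGISTRY != null) {
      REGISTRY.put(conf, null);
    }
  }
}
{code}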



> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: HADOOP-16062.01.patch, jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HADOOP-16062:
--
Attachment: HADOOP-16062.01.patch

> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: HADOOP-16062.01.patch, jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748039#comment-16748039
 ] 

Steve Loughran commented on HADOOP-15961:
-

Build the patch off trunk, e.g.

{code}
git diff trunk...HEAD > ~/hadoop-patches/work/HADOOP-15961-002.patch
{code}

(assuming you have that hadoop-patches/work dir, or similar). Then attach that 
patch to the JIRA.

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the localfs that the upload takes longer than the timeout.
> It should call progress after every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748027#comment-16748027
 ] 

Gabor Bota edited comment on HADOOP-16057 at 1/21/19 3:33 PM:
--

I'm +1 on rolling back that change and re-opening the issue rather than fixing 
the issues it created. It would be easier to resolve the test failures in place 
and to have a clean test run.
Adam's OOO at the moment. If fixing this issue is urgent, I can work on it this 
week. Besides, these tests are also failing with dynamo.


was (Author: gabor.bota):
I'm +1 on rolling back that change and re-opening the issue rather than fixing 
the issues it created. It would be easier to resolve the test failures in place 
and to have a clean test run.
Adam's OOO at the moment. If fixing this issue is urgent, I can work on it this 
week.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16058) S3A tests to include Terasort

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748028#comment-16748028
 ] 

Steve Loughran commented on HADOOP-16058:
-

Patch 002

Pulls up the code to set up MR clusters for committer tests into a new 
intermediate base class, one which does not contain the code to actually set up 
those base clusters. Instead the cluster setup/teardown is done in the 
@BeforeClass/@AfterClass operations of the subclasses, guaranteeing 
isolation and a lifecycle which matches those child classes.

Having done this, it hasn't made the terasort conflict go away; I've concluded 
now that that's due to some code in Terasort which uses LocalFS to save a 
partition list. Rather than do dramatic things to Terasort (e.g. add the 
ability to declare new local paths), I've just serialized the Terasort tests, 
after shrinking down their test size.

I haven't reverted the design which pushes cluster setup/teardown into the 
child classes, even though I'm not sure it is needed, just because it makes 
the lifecycle of class-level data clear; see the sketch below.
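
For reviewers, a stripped-down sketch of that lifecycle; the class names are 
hypothetical and the cluster handle is a placeholder for the real 
MiniMRYarnCluster wiring:

{code:java}
import org.junit.AfterClass;
import org.junit.BeforeClass;

/**
 * Intermediate base class: declares the shared cluster field but
 * deliberately contains no cluster setup/teardown of its own.
 */
abstract class AbstractCommitterMRTest {
  // placeholder for the real cluster handle
  protected static Object cluster;
}

/** Each subclass owns the cluster for exactly its own lifetime. */
class ITCommitterExample extends AbstractCommitterMRTest {

  @BeforeClass
  public static void setupCluster() {
    cluster = new Object();  // start the MR cluster here
  }

  @AfterClass
  public static void teardownCluster() {
    cluster = null;          // stop and release the cluster here
  }
}
{code}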

+address checkstyle warnings from the previous patch, where possible.

Testing: S3A Ireland, S3Guard, DDB, auth, scale

The scale test runs now take 17 minutes, which is long enough to become 
inconvenient, especially because that's with 12 VMs: the laptop isn't usable 
for anything else.
{code:java}
bin/hadoop fs -cat 
s3a://hwdev-steve-ireland-new/terasort-ITestTerasortMagicCommitter/results.csv
"Operation" "Duration"
"Generate"  "0:28.596s"
"Terasort"  "0:32.456s"
"Validate"  "0:30.000s"
"Completed" "1:33.824s"
{code}
{code:java}
bin/hadoop fs -cat 
s3a://hwdev-steve-ireland-new/terasort-ITestTerasortDirectoryCommitter/results.csv
"Operation" "Duration"
"Generate"  "0:17.602s"
"Terasort"  "0:25.151s"
"Validate"  "0:26.132s"
"Completed" "1:11.496s"
{code}
One test failure: HADOOP-16057

{code}
[ERROR] 
testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal) Time 
elapsed: 1.167 s <<< ERROR!
 java.lang.IndexOutOfBoundsException: toIndex = 1
 at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
 at java.util.ArrayList.subList(ArrayList.java:996)
 at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
{code}

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748027#comment-16748027
 ] 

Gabor Bota commented on HADOOP-16057:
-

I'm +1 on rolling back that change and re-opening the issue rather than fixing 
the issues it created. It would be easier to resolve the test failures in place 
and to have a clean test run.
Adam's OOO at the moment. If fixing this issue is urgent, I can work on it this 
week.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16058) S3A tests to include Terasort

2019-01-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16058:

Status: Patch Available  (was: Open)

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16057:

Status: Patch Available  (was: Open)

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HADOOP-16062:
-

 Summary: Add ability to disable Configuration reload registry
 Key: HADOOP-16062
 URL: https://issues.apache.org/jira/browse/HADOOP-16062
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Zoltan Haindrich


In Hive we see that after an extensive amount of usage there are a lot of 
Configuration objects not fully reclaimed because of Configuration's REGISTRY 
weak hashmap.


https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16058) S3A tests to include Terasort

2019-01-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16058:

Attachment: HADOOP-16058-002.patch

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748020#comment-16748020
 ] 

Zoltan Haindrich commented on HADOOP-16062:
---

As these objects belong to already closed sessions, they are just occupying 
heap space, and will force a "bigger" garbage collection eventually.
I would like to propose adding a "toggle" to make it possible to disable this 
feature.


> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HADOOP-16062:
--
Description: 
In Hive we see that after an extensive amount of usage there are a lot of 
Configuration objects not fully reclaimed because of Configuration's REGISTRY 
weak hashmap.


https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321

 !jxray_registry.png! 

  was:
In Hive we see that after an extensive amount of usage there are a lot of 
Configuration objects not fully reclaimed because of Configuration's REGISTRY 
weak hashmap.


https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321


> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321
>  !jxray_registry.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16062) Add ability to disable Configuration reload registry

2019-01-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HADOOP-16062:
--
Attachment: jxray_registry.png

> Add ability to disable Configuration reload registry
> 
>
> Key: HADOOP-16062
> URL: https://issues.apache.org/jira/browse/HADOOP-16062
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: jxray_registry.png
>
>
> In Hive we see that after an extensive amount of usage there are a lot of 
> Configuration objects not fully reclaimed because of Configuration's REGISTRY 
> weak hashmap.
> https://github.com/apache/hadoop/blob/abde1e1f58d5b699e4b8e460cff68e154738169b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L321



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748013#comment-16748013
 ] 

Steve Loughran commented on HADOOP-16057:
-

bq. Maybe the integration tests were not checked before submitting the patch?

Maybe. Certainly I don't believe ITestS3GuardToolLocal could ever have been 
executed.

How about rolling back that change and re-opening that JIRA for [~adam.antal] 
to do another iteration; this time I'll run the tests myself before committing.

Adam: it's OK to break the build from time to time (we all do), but I'm afraid 
you get the homework of fixing it. Sorry.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15787:
---
Attachment: HADOOP-15787.02.patch

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747985#comment-16747985
 ] 

Hadoop QA commented on HADOOP-16049:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 149 unchanged - 2 fixed = 149 total (was 151) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
1s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a5f678f |
| JIRA Issue | HADOOP-16049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955653/HADOOP-16049-branch-2-005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 59fddceb8797 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / d3b06d1 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15815/testReport/ |
| Max. process+thread count | 215 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15815/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: 

[jira] [Updated] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15787:
---
Attachment: HADOOP-15787.01.patch
Status: Patch Available  (was: Open)

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15787.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2019-01-21 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HADOOP-15787:
--

Assignee: Zsolt Venczel

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16059) Use SASL Factories Cache to Improove Performance

2019-01-21 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16059:
--
Attachment: HADOOP-16059-02.patch

> Use SASL Factories Cache to Improove Performance
> 
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HADOOP-16059-01.patch, HADOOP-16059-02.patch
>
>
> SASL client factories can be cached, and SASL server and client factories can 
> be extended together at SaslParticipant to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Status: Open  (was: Patch Available)

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for chunks with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data & checksum mismatch.
> To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> the 
> default copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not 
> been backported to branch-2 yet.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747933#comment-16747933
 ] 

Kai Xie commented on HADOOP-16049:
--

Submitted branch-2-005 with the checkstyle fix.

BTW, I can see that the timeout / unit test failure is caused by an OpenJDK 7 
JVM crash

(taken from 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15811/artifact/out/patch-asflicense.txt)
{code:java}
===
==/testptch/hadoop/hadoop-tools/hadoop-distcp/hs_err_pid2744.log
===
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (safepoint.cpp:325), pid=2744, tid=139788518442752
#  guarantee(PageArmed == 0) failed: invariant
#
# JRE version: OpenJDK Runtime Environment (7.0_181-b01) (build 1.7.0_181-b01)
# Java VM: OpenJDK 64-Bit Server VM (24.181-b01 mixed mode linux-amd64 
compressed oops)
# Derivative: IcedTea 2.6.14
# Distribution: Ubuntu 14.04 LTS, package 7u181-2.6.14-0ubuntu0.3
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   http://icedtea.classpath.org/bugzilla
#

---  T H R E A D  ---

Current thread (0x7f2318272800):  VMThread [stack: 
0x7f230cec4000,0x7f230cfc5000] [id=2762]

Stack: [0x7f230cec4000,0x7f230cfc5000],  sp=0x7f230cfc3b10,  free 
space=1022k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x966c25]
V  [libjvm.so+0x49b96e]
V  [libjvm.so+0x872b51]
V  [libjvm.so+0x96b69a]
V  [libjvm.so+0x96baf2]
V  [libjvm.so+0x7da992]

VM_Operation (0x7f2321b956b0): RevokeBias, mode: safepoint, requested by 
thread 0x7f231800a000


---  P R O C E S S  ---

Java Threads: ( => current thread )
  0x7f231936e800 JavaThread "nioEventLoopGroup-3-32" [_thread_blocked, 
id=3316, stack(0x7f22f9769000,0x7f22f986a000)]
  0x7f231936d000 JavaThread "nioEventLoopGroup-3-31" [_thread_blocked, 
id=3315, stack(0x7f22f986a000,0x7f22f996b000)]
  0x7f231936b000 JavaThread "nioEventLoopGroup-3-30" [_thread_blocked, 
id=3314, stack(0x7f22f996b000,0x7f22f9a6c000)]
  0x7f2319368800 JavaThread "nioEventLoopGroup-3-29" [_thread_blocked, 
id=3313, stack(0x7f22f9a6c000,0x7f22f9b6d000)]
  0x7f2319366800 JavaThread "nioEventLoopGroup-3-28" [_thread_blocked, 
id=3312, stack(0x7f22f9b6d000,0x7f22f9c6e000)]
  0x7f2319364800 JavaThread "nioEventLoopGroup-3-27" [_thread_blocked, 
id=3311, stack(0x7f22f9c6e000,0x7f22f9d6f000)]
  0x7f2319362800 JavaThread "nioEventLoopGroup-3-26" [_thread_blocked, 
id=3310, stack(0x7f22f9d6f000,0x7f22f9e7)]
  0x7f2319360800 JavaThread "nioEventLoopGroup-3-25" [_thread_blocked, 
id=3309, stack(0x7f22f9e7,0x7f22f9f71000)]
  0x7f231935e800 JavaThread "nioEventLoopGroup-3-24" [_thread_blocked, 
id=3308, stack(0x7f22f9f71000,0x7f22fa072000)]
  0x7f231935c800 JavaThread "nioEventLoopGroup-3-23" [_thread_blocked, 
id=3307, stack(0x7f22fa072000,0x7f22fa173000)]
  0x7f231935a800 JavaThread "nioEventLoopGroup-3-22" [_thread_blocked, 
id=3306, stack(0x7f22fa173000,0x7f22fa274000)]
  0x7f2319358800 JavaThread "nioEventLoopGroup-3-21" [_thread_blocked, 
id=3305, stack(0x7f22fa274000,0x7f22fa375000)]
  0x7f2319356800 JavaThread "nioEventLoopGroup-3-20" [_thread_blocked, 
id=3304, stack(0x7f22fa375000,0x7f22fa476000)]
  0x7f2319354000 JavaThread "nioEventLoopGroup-3-19" [_thread_blocked, 
id=3303, stack(0x7f22fa476000,0x7f22fa577000)]


{code}

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, 
> HADOOP-16049-branch-2-005.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when 

[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Attachment: HADOOP-16049-branch-2-005.patch

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, 
> HADOOP-16049-branch-2-005.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for chunks with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data & checksum mismatch.
> To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> the 
> default copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not 
> been backported to branch-2 yet.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Status: Patch Available  (was: Open)

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, 
> HADOOP-16049-branch-2-005.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for chunks with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data & checksum mismatch.
> To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> the 
> default copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not 
> been backported to branch-2 yet.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747906#comment-16747906
 ] 

Steve Loughran commented on HADOOP-16049:
-

checkstyle
{code}
./hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java:300:
  private static int readBytes(ThrottledInputStream inStream, byte buf[]):71: 
Array brackets at illegal position.
{code}

It must be {{byte[] buf}}.

The ASF license warning is from a crashed JVM:
{code}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? /testptch/hadoop/hadoop-tools/hadoop-distcp/hs_err_pid2744.log
{code}

Don't see anything in the tests resembling failures, though there was a 
timeout.

There is also a warning in the logs about Azure storage versions:
{code}
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-distcp:jar:2.10.0-SNAPSHOT
[WARNING] 
'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)'
 must be unique: com.microsoft.azure:azure-storage:jar -> version 7.0.0 vs 
5.4.0 @ org.apache.hadoop:hadoop-project:2.10.0-SNAPSHOT, 
/testptch/hadoop/hadoop-project/pom.xml, line 1151, column 19
{code}

How about you fix the checkstyle and resubmit; then we can see whether that 
timeout was a transient error or not.

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16049-branch-2-003.patch, 
> HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch
>
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read, but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables the append 
> action). So for chunks with offset != 0, it will keep copying the first few 
> bytes again and again, causing the result to have a data & checksum mismatch.
> To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> the 
> default copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not 
> been backported to branch-2 yet.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15040) Upgrade AWS SDK to 1.11.271: NPE in 1.11.199 bug spams logs w/ Yarn Log Aggregation

2019-01-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15040:

Summary: Upgrade AWS SDK to 1.11.271: NPE in 1.11.199 bug spams logs w/ 
Yarn Log Aggregation  (was: Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ 
Yarn Log Aggregation)

> Upgrade AWS SDK to 1.11.271: NPE in 1.11.199 bug spams logs w/ Yarn Log 
> Aggregation
> ---
>
> Key: HADOOP-15040
> URL: https://issues.apache.org/jira/browse/HADOOP-15040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15040.001.patch
>
>
> My colleagues working with Yarn log aggregation found that they were getting 
> this message spammed in their logs when they used an s3a:// URI for logs 
> (yarn.nodemanager.remote-app-log-dir):
> {noformat}
> getting attribute Region of com.amazonaws.management:type=AwsSdkMetrics threw 
> an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>   at 
> 
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
> This happens even though the aws sdk cloudwatch metrics reporting was 
> disabled (default), which is a bug. 
> I filed a [github issue|https://github.com/aws/aws-sdk-java/issues/1375|] and 
> it looks like a fix should be coming around SDK release 1.11.229 or so.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to fix license issue in branch-2.8 and branch-2.7

2019-01-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747901#comment-16747901
 ] 

Steve Loughran commented on HADOOP-16055:
-

Didn't notice that branch-2 was on the 1.11.199 version, which generates lots 
of NPE warnings in the logs unless AWS metrics are wired up.

How about:

# we look at 1.11.271 to make sure its shaded Jackson is considered safe;
# otherwise, upgrade them all across 2.9+ with whatever fixes for regressions 
are needed;
# and for 2.8, cherry-pick in the shaded code from 2.9 as well.

At least that way we are consistent, and with the shading it's going to be less 
transitively traumatic. The only impact is that the shaded jar is *big*, which 
doesn't just hurt the distro; it can slow down Jetty scanning it for servlets. 
Only a second or two, but it did create problems with Ambari. Too bad.

bq. 2.7.x line is much harder, maybe we can stop the maintenance work rather 
than upgrading AWS SDK versions.

Something to discuss. If it's kept alive, maybe exclude the AWS SDK from the 
tarball, though that leaves the problem of the POMs open.

> Upgrade AWS SDK to fix license issue in branch-2.8 and branch-2.7
> -
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16055-branch-2.8-01.patch, 
> HADOOP-16055-branch-2.8-02.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes, however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improove Performance

2019-01-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747884#comment-16747884
 ] 

Hadoop QA commented on HADOOP-16059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 30s{color} | {color:orange} root: The patch generated 5 new + 338 unchanged 
- 6 fixed = 343 total (was 344) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
21s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.security.SaslRpcClient.saslFactory in 
org.apache.hadoop.security.SaslRpcClient.init(Configuration)  At 
SaslRpcClient.java:field org.apache.hadoop.security.SaslRpcClient.saslFactory 
in org.apache.hadoop.security.SaslRpcClient.init(Configuration)  At 
SaslRpcClient.java:[lines 107-109] |
|  |  Write to static field 
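
A sketch of the pattern behind the first warning — unsynchronized lazy 
initialization of a static field from an instance method — next to a race-free 
alternative; class and member names are illustrative, not the patch itself:

{code:java}
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClientFactory;

public class LazyInitSketch {
  private static SaslClientFactory[] saslFactory; // shared static state

  public void init() {
    // What FindBugs flags: two threads can both observe null here and
    // both write the static field without synchronization.
    if (saslFactory == null) {
      saslFactory = loadFactories();
    }
  }

  // Race-free alternative: initialize once, eagerly, at class load time
  // (or use the initialization-on-demand holder idiom).
  private static final SaslClientFactory[] FACTORIES = loadFactories();

  private static SaslClientFactory[] loadFactories() {
    List<SaslClientFactory> result = new ArrayList<>();
    Enumeration<SaslClientFactory> e = Sasl.getSaslClientFactories();
    while (e.hasMoreElements()) {
      result.add(e.nextElement());
    }
    return result.toArray(new SaslClientFactory[0]);
  }
}
{code}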

[jira] [Comment Edited] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747827#comment-16747827
 ] 

Gabor Bota edited comment on HADOOP-16057 at 1/21/19 10:25 AM:
---

It's connected, as the causes of both issues were introduced by HADOOP-15843. 
Maybe the integration tests were not checked before submitting the patch?
I'll create another jira for the failing {{testDynamoTableTagging}}.


was (Author: gabor.bota):
It's connected, as the causes of these issues were both submitted with 
HADOOP-15843. Maybe the integration tests were not checked before submitting 
the patch?
I'll create another jira for the failing {{testDynamoTableTagging}}.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}
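> The failure mode is reproducible in isolation — a hypothetical two-liner, not 
> the actual {{CommandFormat}} code: slicing one element out of an empty 
> argument list is exactly what raises {{toIndex = 1}}:
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> 
> public class SubListRepro {
>   public static void main(String[] args) {
>     List<String> cmdArgs = new ArrayList<>(); // "destroy" run with no args
>     // Expects at least one argument; on an empty list this throws
>     // java.lang.IndexOutOfBoundsException: toIndex = 1
>     cmdArgs.subList(0, 1);
>   }
> }
> {code}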






[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747820#comment-16747820
 ] 

Gabor Bota commented on HADOOP-16057:
-

While testing HADOOP-15229 I've found this as well.
There is another issue with the tests:
{noformat}
[ERROR] 
testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 4.352 s  <<< ERROR!
42: No metastore or filesystem specified
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.usageError(S3GuardTool.java:1562)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:304)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Init.run(S3GuardTool.java:505)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{noformat}

I'll check if these two are connected.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}






[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-01-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747827#comment-16747827
 ] 

Gabor Bota commented on HADOOP-16057:
-

It's connected, as the causes of both issues were introduced by HADOOP-15843. 
Maybe the integration tests were not checked before submitting the patch?
I'll create another jira for the failing {{testDynamoTableTagging}}.

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}






[GitHub] tiwalter commented on a change in pull request #449: HDFS-14158. Fix the Checkpointer not to ignore the configured "dfs.namenode.checkpoint.period" > 5 minutes

2019-01-21 Thread GitBox
tiwalter commented on a change in pull request #449: HDFS-14158. Fix the 
Checkpointer not to ignore the configured "dfs.namenode.checkpoint.period" > 5 
minutes
URL: https://github.com/apache/hadoop/pull/449#discussion_r249385501
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
 ##
 @@ -160,7 +159,7 @@ public void run() {
 break;
   }
   try {
-Thread.sleep(periodMSec);
+Thread.sleep(LongMath.gcd(periodMSec, checkpointPeriodMSec));
 
 Review comment:
   Yes, exactly.





[GitHub] elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests 
with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249379948
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   Yes, we continue.
   
   I also considered failing from the bash script itself, but always 
continuing may be better:
   
* You will get all of the test results even if one cluster can't be scaled 
up.
* The bash script could keep iterating when the scale-up fails, without 
exiting with -1, but I am not sure about the visibility of the problem in 
that case.
* The robot tests will fail anyway, and the failure will be part of the test 
results.
   
   But I can be convinced to do it in a different way.





[GitHub] elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests 
with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249378537
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   Sure, we can. (If you see the same error message, then the OS X machine 
already has a new enough docker-compose.) 
   
   





[GitHub] elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread GitBox
elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests 
with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249377544
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   I think it's fine. The value of the sleep is independent, as we check the 
elapsed time based on the $SECONDS variable. It will iterate every 2 seconds 
until 30 seconds have passed (if I didn't miss something).
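   
   For reference, a hedged sketch of the whole loop as described — poll every 
2 seconds, give up after 30 seconds via bash's built-in $SECONDS counter. The 
datanode-counting command is an assumption; the real test.sh may check 
differently:
   
```bash
## @description wait until 3 datanodes are up (or 30 seconds)
## @param the docker-compose file
wait_for_datanodes(){
  local compose_file="$1"

  #Reset the timer: $SECONDS is bash's built-in elapsed-seconds counter
  SECONDS=0

  #Don't give it up until 30 seconds
  while [[ $SECONDS -lt 30 ]]; do
    #Count the running datanode containers (illustrative check only)
    local up
    up=$(docker-compose -f "$compose_file" ps datanode | grep -c "Up")
    if [[ "$up" -ge 3 ]]; then
      echo "All 3 datanodes are up after ${SECONDS}s"
      return
    fi
    #The sleep only sets the polling rate; the 30s budget comes from
    #$SECONDS above
    sleep 2
  done
  echo "WARNING: datanodes did not come up within 30s; continuing anyway"
}
```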





[jira] [Updated] (HADOOP-16059) Use SASL Factories Cache to Improove Performance

2019-01-21 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16059:
--
Status: Patch Available  (was: Open)

> Use SASL Factories Cache to Improove Performance
> 
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HADOOP-16059-01.patch
>
>
> SASL client factories can be cached, and the caching can be extended to both 
> SASL server factories and SASL client factories at SaslParticipant, to 
> improve performance.
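> A hedged sketch of the idea (not the attached patch): enumerate the 
> registered factories once and reuse the cached lists, instead of walking the 
> installed security providers on every SASL negotiation:
> {code:java}
> import java.util.ArrayList;
> import java.util.Enumeration;
> import java.util.List;
> 
> import javax.security.sasl.Sasl;
> import javax.security.sasl.SaslClientFactory;
> import javax.security.sasl.SaslServerFactory;
> 
> public final class SaslFactoryCache {
>   // Enumerated once at class load; Sasl.getSasl*Factories() walks the
>   // installed security providers, which is the cost being avoided.
>   private static final List<SaslClientFactory> CLIENTS = clients();
>   private static final List<SaslServerFactory> SERVERS = servers();
> 
>   private static List<SaslClientFactory> clients() {
>     List<SaslClientFactory> out = new ArrayList<>();
>     Enumeration<SaslClientFactory> e = Sasl.getSaslClientFactories();
>     while (e.hasMoreElements()) {
>       out.add(e.nextElement());
>     }
>     return out;
>   }
> 
>   private static List<SaslServerFactory> servers() {
>     List<SaslServerFactory> out = new ArrayList<>();
>     Enumeration<SaslServerFactory> e = Sasl.getSaslServerFactories();
>     while (e.hasMoreElements()) {
>       out.add(e.nextElement());
>     }
>     return out;
>   }
> 
>   public static List<SaslClientFactory> clientFactories() {
>     return CLIENTS;
>   }
> 
>   public static List<SaslServerFactory> serverFactories() {
>     return SERVERS;
>   }
> }
> {code}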






[jira] [Updated] (HADOOP-16059) Use SASL Factories Cache to Improove Performance

2019-01-21 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16059:
--
Attachment: HADOOP-16059-01.patch

> Use SASL Factories Cache to Improove Performance
> 
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HADOOP-16059-01.patch
>
>
> SASL client factories can be cached, and the caching can be extended to both 
> SASL server factories and SASL client factories at SaslParticipant, to 
> improve performance.






[GitHub] aajisaka commented on a change in pull request #449: HDFS-14158. Fix the Checkpointer not to ignore the configured "dfs.namenode.checkpoint.period" > 5 minutes

2019-01-21 Thread GitBox
aajisaka commented on a change in pull request #449: HDFS-14158. Fix the 
Checkpointer not to ignore the configured "dfs.namenode.checkpoint.period" > 5 
minutes
URL: https://github.com/apache/hadoop/pull/449#discussion_r249370983
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
 ##
 @@ -160,7 +159,7 @@ public void run() {
 break;
   }
   try {
-Thread.sleep(periodMSec);
+Thread.sleep(LongMath.gcd(periodMSec, checkpointPeriodMSec));
 
 Review comment:
   Now periodMSec is the min of checkpointCheckPeriod and checkpointPeriod, so 
the sleep time is smaller than or equal to the configured checkpoint period, 
isn't it?
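   
   A small worked example of why gcd rather than min matters here — the 
values are made up, not Hadoop defaults, and this assumes Guava's LongMath as 
used in the diff above:
   
```java
import com.google.common.math.LongMath;

public class CheckpointSleepDemo {
  public static void main(String[] args) {
    long checkPeriodMSec = 5 * 60_000L;       // e.g. a 5-minute check period
    long checkpointPeriodMSec = 7 * 60_000L;  // e.g. a 7-minute checkpoint period

    // Sleeping min(a, b) = 5 min wakes at 5, 10, 15 ... minutes, so a
    // 7-minute checkpoint deadline is first noticed only at 10 minutes.
    long minSleep = Math.min(checkPeriodMSec, checkpointPeriodMSec);

    // Sleeping gcd(a, b) = 1 min guarantees a wake-up on every multiple of
    // both periods, so the 7-minute deadline is hit exactly on time.
    long gcdSleep = LongMath.gcd(checkPeriodMSec, checkpointPeriodMSec);

    System.out.println("min sleep = " + minSleep / 60_000 + " min");  // 5 min
    System.out.println("gcd sleep = " + gcdSleep / 60_000 + " min");  // 1 min
  }
}
```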

