[jira] [Updated] (HADOOP-16988) Remove source code from branch-2

2020-04-16 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16988:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now, branch-2 is dead and unused. I think we can delete the entire source
> code from branch-2 to avoid committing or cherry-picking to the unused branch.
> Chen Liang asked ASF INFRA for help, but that didn't resolve it for us: INFRA-19581






[jira] [Commented] (HADOOP-16933) Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to branch-2

2020-04-16 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085178#comment-17085178
 ] 

Jonathan Hung commented on HADOOP-16933:


Pushed to branch-2.10.

> Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token 
> provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to 
> branch-2
> ---
>
> Key: HADOOP-16933
> URL: https://issues.apache.org/jira/browse/HADOOP-16933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Priority: Minor
>
> Backport "ABFS: Change in expiry calculation for MSI token provider" to 
> branch-2






[jira] [Updated] (HADOOP-16933) Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to branch-2

2020-04-16 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16933:
---
Fix Version/s: 2.10.1

> Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token 
> provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to 
> branch-2
> ---
>
> Key: HADOOP-16933
> URL: https://issues.apache.org/jira/browse/HADOOP-16933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Priority: Minor
> Fix For: 2.10.1
>
>
> Backport "ABFS: Change in expiry calculation for MSI token provider" to 
> branch-2






[jira] [Updated] (HADOOP-16734) Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to branch-2

2020-04-16 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16734:
---
Fix Version/s: 2.10.1

> Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to 
> branch-2
> ---
>
> Key: HADOOP-16734
> URL: https://issues.apache.org/jira/browse/HADOOP-16734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.1
>
>
> Backport https://issues.apache.org/jira/browse/HADOOP-16455 to branch-2






[jira] [Updated] (HADOOP-16778) ABFS: Backport HADOOP-16660 ABFS: Make RetryCount in ExponentialRetryPolicy Configurable to Branch-2

2020-04-16 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16778:
---
Fix Version/s: 2.10.1

> ABFS: Backport HADOOP-16660  ABFS: Make RetryCount in ExponentialRetryPolicy 
> Configurable to Branch-2
> -
>
> Key: HADOOP-16778
> URL: https://issues.apache.org/jira/browse/HADOOP-16778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 2.10.1
>
>
> Backport:
> HADOOP-16660 ABFS: Make RetryCount in ExponentialRetryPolicy Configurable






[jira] [Commented] (HADOOP-16734) Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to branch-2

2020-04-16 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085176#comment-17085176
 ] 

Jonathan Hung commented on HADOOP-16734:


Pushed to branch-2.10.

> Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to 
> branch-2
> ---
>
> Key: HADOOP-16734
> URL: https://issues.apache.org/jira/browse/HADOOP-16734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Backport https://issues.apache.org/jira/browse/HADOOP-16455 to branch-2






[jira] [Commented] (HADOOP-16778) ABFS: Backport HADOOP-16660 ABFS: Make RetryCount in ExponentialRetryPolicy Configurable to Branch-2

2020-04-16 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085177#comment-17085177
 ] 

Jonathan Hung commented on HADOOP-16778:


Pushed to branch-2.10.

> ABFS: Backport HADOOP-16660  ABFS: Make RetryCount in ExponentialRetryPolicy 
> Configurable to Branch-2
> -
>
> Key: HADOOP-16778
> URL: https://issues.apache.org/jira/browse/HADOOP-16778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Backport:
> HADOOP-16660 ABFS: Make RetryCount in ExponentialRetryPolicy Configurable






[jira] [Commented] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance

2020-03-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068001#comment-17068001
 ] 

Jonathan Hung commented on HADOOP-15743:


Yeah, we've been hitting this issue on an HDFS cluster with heavy swebhdfs load,
with the stack trace in
[https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8203190] blocking all
of the http threads. So it seems like it's not isolated to the KMS.

> Jetty and SSL tunings to stabilize KMS performance 
> ---
>
> Key: HADOOP-15743
> URL: https://issues.apache.org/jira/browse/HADOOP-15743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Major
>
> The KMS has very low throughput with high client failure rates.  The 
> following config options will "stabilize" the KMS under load:
>  # Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE.
>  # Reduce the SSL session cache size (default unlimited) and ttl (default 
> 24h).  The memory cache has very poor performance and causes extreme GC 
> pressure. Load balancing diminishes the effectiveness of the cache to 
> 1/N-hosts anyway.
>  ** -Djavax.net.ssl.sessionCacheSize=1000
>  ** -Djavax.net.ssl.sessionCacheTimeout=6
>  # Completely disable the LowResourceMonitor to stop jetty from 
> immediately closing incoming connections during connection bursts.  Client 
> retries cause jetty to remain in a low-resource state until many clients fail 
> and cause thousands of sockets to linger in various close-related states.
>  # Set min/max threads to 4x processors.  Jetty recommends only 50 to 500 
> threads.  Java's SSL engine has excessive synchronization that limits 
> performance anyway.
>  # Set https idle timeout to 6s.
>  # Significantly increase max fds to at least 128k.  Recommend using a VIP 
> load balancer with a lower limit.
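
For the Jetty-side items above (thread-pool sizing, idle timeout, no low-resource
monitoring), a minimal embedded-Jetty sketch; the keystore, port, and cipher
pattern are placeholders, not the KMS's actual bootstrap:

{code:java}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class TunedHttpsServer {
  public static void main(String[] args) throws Exception {
    int cores = Runtime.getRuntime().availableProcessors();
    // Item 4: min == max == 4x processors, so the pool never resizes.
    QueuedThreadPool pool = new QueuedThreadPool(4 * cores, 4 * cores);
    Server server = new Server(pool);

    SslContextFactory ssl = new SslContextFactory();
    ssl.setKeyStorePath("/path/to/keystore.jks"); // placeholder
    ssl.setKeyStorePassword("changeit");          // placeholder
    ssl.setExcludeCipherSuites(".*ECDH.*");       // item 1; pattern illustrative

    ServerConnector https = new ServerConnector(server, ssl);
    https.setPort(9600);         // placeholder port
    https.setIdleTimeout(6000);  // item 5: 6s https idle timeout (millis)
    server.addConnector(https);

    // Item 3: never register a LowResourceMonitor bean, so Jetty cannot
    // enter the low-resource state that sheds incoming connections.
    server.start();
    server.join();
  }
}
{code}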






[jira] [Commented] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance

2020-03-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17067936#comment-17067936
 ] 

Jonathan Hung commented on HADOOP-15743:


[~daryn] where did you find the config {{javax.net.ssl.sessionCacheTimeout}}? I 
didn't see anything online related to this config, or any references to it in 
openjdk. I only see the {{setSessionTimeout}} API (which takes seconds) and no 
associated Java property.
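
For reference, the programmatic API in question (a sketch with illustrative
values; whether any JVM also honors a {{javax.net.ssl.sessionCacheTimeout}}
property is exactly the open question here):

{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class SessionCacheProbe {
  public static void main(String[] args) throws Exception {
    SSLSessionContext sessions =
        SSLContext.getDefault().getServerSessionContext();
    sessions.setSessionCacheSize(1000); // entries
    sessions.setSessionTimeout(60);     // seconds; illustrative value
    System.out.println("cache=" + sessions.getSessionCacheSize()
        + " ttl=" + sessions.getSessionTimeout() + "s");
  }
}
{code}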

> Jetty and SSL tunings to stabilize KMS performance 
> ---
>
> Key: HADOOP-15743
> URL: https://issues.apache.org/jira/browse/HADOOP-15743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Major
>
> The KMS has very low throughput with high client failure rates.  The 
> following config options will "stabilize" the KMS under load:
>  # Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE.
>  # Reduce the SSL session cache size (default unlimited) and ttl (default 
> 24h).  The memory cache has very poor performance and causes extreme GC 
> pressure. Load balancing diminishes the effectiveness of the cache to 
> 1/N-hosts anyway.
>  ** -Djavax.net.ssl.sessionCacheSize=1000
>  ** -Djavax.net.ssl.sessionCacheTimeout=6
>  # Completely disable the LowResourceMonitor to stop jetty from 
> immediately closing incoming connections during connection bursts.  Client 
> retries cause jetty to remain in a low-resource state until many clients fail 
> and cause thousands of sockets to linger in various close-related states.
>  # Set min/max threads to 4x processors.  Jetty recommends only 50 to 500 
> threads.  Java's SSL engine has excessive synchronization that limits 
> performance anyway.
>  # Set https idle timeout to 6s.
>  # Significantly increase max fds to at least 128k.  Recommend using a VIP 
> load balancer with a lower limit.






[jira] [Commented] (HADOOP-15636) Add ITestDynamoDBMetadataStore

2020-03-23 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065088#comment-17065088
 ] 

Jonathan Hung commented on HADOOP-15636:


Committed to branch-3.1.

> Add ITestDynamoDBMetadataStore
> --
>
> Key: HADOOP-15636
> URL: https://issues.apache.org/jira/browse/HADOOP-15636
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.4
>
> Attachments: HADOOP-15636.001.patch
>
>
> I committed HADOOP-14918 but I forgot to 'git add' the renamed test file. I 
> would just add it and commit and reference the JIRA, but testTableProvision 
> is now timing out, so we should look into that.






[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-03-23 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065086#comment-17065086
 ] 

Jonathan Hung commented on HADOOP-14918:


Thanks Gabor. I committed it to branch-3.1 as well.

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0, 2.10.1
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Updated] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-03-23 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14918:
---
Fix Version/s: 3.1.4

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0, 3.1.4, 2.10.1
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Updated] (HADOOP-15636) Add ITestDynamoDBMetadataStore

2020-03-23 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-15636:
---
Fix Version/s: 3.1.4

> Add ITestDynamoDBMetadataStore
> --
>
> Key: HADOOP-15636
> URL: https://issues.apache.org/jira/browse/HADOOP-15636
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.4
>
> Attachments: HADOOP-15636.001.patch
>
>
> I committed HADOOP-14918 but I forgot to 'git add' the renamed test file. I 
> would just add it and commit and reference the JIRA, but testTableProvision 
> is now timing out, so we should look into that.






[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-02-27 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046907#comment-17046907
 ] 

Jonathan Hung commented on HADOOP-14918:


Sure, created one here: [https://github.com/apache/hadoop/pull/1864]. Thanks 
[~gabor.bota]!

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-02-25 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044773#comment-17044773
 ] 

Jonathan Hung commented on HADOOP-14918:


[~mackrorysd] [~gabor.bota] [~ste...@apache.org] can we pull this to 
branch-3.1? It applies cleanly.

Also I attached [^HADOOP-14918-branch-2.10.001.patch] for branch-2. Could I get 
a review?

Conflicts:
 * Add HADOOP_TMP_DIR to org.apache.hadoop.fs.s3a.Constants (from HADOOP-13786)
 * Remove MAGIC_COMMITTER_ENABLED functionality (from HADOOP-13786)
 * Remove changes from MetadataStoreTestBase (from HADOOP-9330)

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Updated] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-02-25 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14918:
---
Attachment: HADOOP-14918-branch-2.10.001.patch

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Updated] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-02-25 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14918:
---
Status: Patch Available  (was: Reopened)

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0, 2.9.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Reopened] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-02-25 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reopened HADOOP-14918:


> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Commented] (HADOOP-16735) Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN

2019-12-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16992049#comment-16992049
 ] 

Jonathan Hung commented on HADOOP-16735:


Hi [~liuml07], correct, that's the plan going forward.

> Make it clearer in config default that EnvironmentVariableCredentialsProvider 
> supports AWS_SESSION_TOKEN
> 
>
> Key: HADOOP-16735
> URL: https://issues.apache.org/jira/browse/HADOOP-16735
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
>
> In the great doc {{hadoop-aws/tools/hadoop-aws/index.html}}, users can find 
> that authenticating via the AWS Environment Variables supports a session token. 
> However, the config description in core-default.xml does not make this clear.
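
For context, the behavior being documented: with the v1 AWS SDK,
{{EnvironmentVariableCredentialsProvider}} returns session credentials when the
third variable is set. A small self-check sketch (class names are from the SDK;
the rest is illustrative):

{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSSessionCredentials;
import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;

public class EnvCredsCheck {
  public static void main(String[] args) {
    // Reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and, if present,
    // AWS_SESSION_TOKEN from the environment.
    AWSCredentials creds =
        new EnvironmentVariableCredentialsProvider().getCredentials();
    // With AWS_SESSION_TOKEN set, the provider returns session credentials.
    System.out.println("session credentials? "
        + (creds instanceof AWSSessionCredentials));
  }
}
{code}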






[jira] [Updated] (HADOOP-16662) Remove unnecessary InnerNode check in NetworkTopology#add()

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16662:
---
Fix Version/s: (was: 2.11.0)

> Remove unnecessary InnerNode check in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0, 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16662.001.patch
>
>
> The method NetworkTopology#add() is as follows:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> Since add() already rejects inner nodes up front:
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> the later guard {{if (!(node instanceof InnerNode))}} is always true at the 
> point where it is checked:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
> depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> so I think the {{if (!(node instanceof InnerNode))}} check should be removed.
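
A minimal sketch (not the committed patch) of the tail of {{add()}} with the
redundant guard dropped:

{code:java}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: " + NodeBase.getPath(node));
  if (rack == null) {
    incrementRacks();
  }
  // node cannot be an InnerNode here -- it was rejected at the top of
  // add() -- so the instanceof guard around this assignment can go.
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}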






[jira] [Updated] (HADOOP-16734) Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to branch-2

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16734:
---
Fix Version/s: (was: 2.11.0)

> Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to 
> branch-2
> ---
>
> Key: HADOOP-16734
> URL: https://issues.apache.org/jira/browse/HADOOP-16734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Backport https://issues.apache.org/jira/browse/HADOOP-16455 to branch-2






[jira] [Updated] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16652:
---
Fix Version/s: (was: 2.11.0)
   2.10.1

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.1
>
>
> Make AAD endpoint configurable on all Auth flows






[jira] [Updated] (HADOOP-16740) Backport HADOOP-16612 - "Track Azure Blob File System client-perceived latency" to branch-2

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16740:
---
Fix Version/s: (was: 2.11.0)

> Backport HADOOP-16612 - "Track Azure Blob File System client-perceived 
> latency" to branch-2
> ---
>
> Key: HADOOP-16740
> URL: https://issues.apache.org/jira/browse/HADOOP-16740
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Ref: https://issues.apache.org/jira/browse/HADOOP-16612






[jira] [Updated] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Fix Version/s: (was: 2.11.0)

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  
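A quick way to see which TLS cipher suites the build JVM offers when chasing a
handshake_failure like the one above (a debugging sketch, not part of the
patch):

{code:java}
import javax.net.ssl.SSLSocketFactory;

public class ListCipherSuites {
  public static void main(String[] args) {
    SSLSocketFactory factory =
        (SSLSocketFactory) SSLSocketFactory.getDefault();
    // Suites this JVM enables by default; handshake_failure generally means
    // none of these overlap with what the server side accepts.
    for (String suite : factory.getDefaultCipherSuites()) {
      System.out.println(suite);
    }
  }
}
{code}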






[jira] [Commented] (HADOOP-16735) Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN

2019-12-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991916#comment-16991916
 ] 

Jonathan Hung commented on HADOOP-16735:


Renaming 2.11.0 fix version to 2.10.1 after branch-2 -> branch-2.10 rename

> Make it clearer in config default that EnvironmentVariableCredentialsProvider 
> supports AWS_SESSION_TOKEN
> 
>
> Key: HADOOP-16735
> URL: https://issues.apache.org/jira/browse/HADOOP-16735
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
>
> In the great doc {{hadoop-aws/tools/hadoop-aws/index.html}}, users can find 
> that authenticating via the AWS Environment Variables supports a session token. 
> However, the config description in core-default.xml does not make this clear.






[jira] [Updated] (HADOOP-16735) Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16735:
---
Fix Version/s: (was: 2.11.0)

> Make it clearer in config default that EnvironmentVariableCredentialsProvider 
> supports AWS_SESSION_TOKEN
> 
>
> Key: HADOOP-16735
> URL: https://issues.apache.org/jira/browse/HADOOP-16735
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2
>
>
> In the great doc {{hadoop-aws/tools/hadoop-aws/index.html}}, users can find 
> that authenticating via the AWS Environment Variables supports a session token. 
> However, the config description in core-default.xml does not make this clear.






[jira] [Commented] (HADOOP-16700) RpcQueueTime may be negative when the response has to be sent later

2019-12-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991914#comment-16991914
 ] 

Jonathan Hung commented on HADOOP-16700:


Removing 2.11.0 fix version after branch-2 -> branch-2.10 rename

> RpcQueueTime may be negative when the response has to be sent later
> ---
>
> Key: HADOOP-16700
> URL: https://issues.apache.org/jira/browse/HADOOP-16700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16700-trunk-001.patch, HADOOP-16700.002.patch
>
>
> RpcQueueTime may be negative when the response has to be sent later.






[jira] [Updated] (HADOOP-16700) RpcQueueTime may be negative when the response has to be sent later

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16700:
---
Fix Version/s: (was: 2.11.0)

> RpcQueueTime may be negative when the response has to be sent later
> ---
>
> Key: HADOOP-16700
> URL: https://issues.apache.org/jira/browse/HADOOP-16700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16700-trunk-001.patch, HADOOP-16700.002.patch
>
>
> RpcQueueTime may be negative when the response has to be sent later.






[jira] [Updated] (HADOOP-16735) Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16735:
---
Fix Version/s: 2.10.1

> Make it clearer in config default that EnvironmentVariableCredentialsProvider 
> supports AWS_SESSION_TOKEN
> 
>
> Key: HADOOP-16735
> URL: https://issues.apache.org/jira/browse/HADOOP-16735
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
>
> In the great doc {{hadoop-aws/tools/hadoop-aws/index.html}}, users can find 
> that authenticating via the AWS Environment Variables supports a session token. 
> However, the config description in core-default.xml does not make this clear.






[jira] [Updated] (HADOOP-15097) AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-15097:
---
Fix Version/s: (was: 2.11.0)

> AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading 
> path
> ---
>
> Key: HADOOP-15097
> URL: https://issues.apache.org/jira/browse/HADOOP-15097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-15097.001.patch
>
>
>   @Test
>   public void testDeleteNonEmptyDirRecursive() throws Throwable {
> Path path = path("{color:red}testDeleteNonEmptyDirNonRecursive{color}");
> mkdirs(path);
> Path file = new Path(path, "childfile");
> ContractTestUtils.writeTextFile(getFileSystem(), file, "goodbye, world",
> true);
> assertDeleted(path, true);
> assertPathDoesNotExist("not deleted", file);
>   }
> change testDeleteNonEmptyDirNonRecursive to testDeleteNonEmptyDirRecursive
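
That is, only the path literal needs to change so it matches the test name; a
one-line sketch of the fix:

{code:java}
Path path = path("testDeleteNonEmptyDirRecursive");
{code}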






[jira] [Commented] (HADOOP-15097) AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path

2019-12-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991898#comment-16991898
 ] 

Jonathan Hung commented on HADOOP-15097:


Removing 2.11.0 fix version after branch-2 -> branch-2.10 rename

> AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading 
> path
> ---
>
> Key: HADOOP-15097
> URL: https://issues.apache.org/jira/browse/HADOOP-15097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-15097.001.patch
>
>
>   @Test
>   public void testDeleteNonEmptyDirRecursive() throws Throwable {
> Path path = path("{color:red}testDeleteNonEmptyDirNonRecursive{color}");
> mkdirs(path);
> Path file = new Path(path, "childfile");
> ContractTestUtils.writeTextFile(getFileSystem(), file, "goodbye, world",
> true);
> assertDeleted(path, true);
> assertPathDoesNotExist("not deleted", file);
>   }
> change testDeleteNonEmptyDirNonRecursive to testDeleteNonEmptyDirRecursive






[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-12-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991886#comment-16991886
 ] 

Jonathan Hung commented on HADOOP-16598:


Removing 2.11.0 fix version after branch-2 -> branch-2.10 rename

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.4, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-12-09 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16598:
---
Fix Version/s: (was: 2.11.0)

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.4, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16652:
---
Fix Version/s: (was: 2.10.0)
   2.11.0

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.11.0
>
>
> Make AAD endpoint configurable on all Auth flows






[jira] [Commented] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953900#comment-16953900
 ] 

Jonathan Hung commented on HADOOP-16652:


branch-2 is currently 2.11.0 since we cut branch-2.10. Changing fix version.

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.0
>
>
> Make AAD endpoint configurable on all Auth flows






[jira] [Updated] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Fix Version/s: 2.11.0
   2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks! Committed.

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.10.0, 2.11.0
>
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Updated] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Summary: Change cipher suite when fetching tomcat tarball for branch-2  
(was: Use http when fetching tomcat tarball for branch-2)

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Attachment: HADOOP-16655-branch-2.002.patch

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952380#comment-16952380
 ] 

Jonathan Hung commented on HADOOP-16655:


Oh, interesting. That seems to work too. Attached 002 patch for this. Mind 
taking a look [~weichiu]? Thanks!

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952322#comment-16952322
 ] 

Jonathan Hung commented on HADOOP-16655:


[~aajisaka] mind taking a look at this? Not sure about the original motivation 
behind HADOOP-16323. Thanks!

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ...<get skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Status: Patch Available  (was: Open)

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ...<get skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Attachment: HADOOP-16655-branch-2.001.patch

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ...<get skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)
Jonathan Hung created HADOOP-16655:
--

 Summary: Use http when fetching tomcat tarball for branch-2
 Key: HADOOP-16655
 URL: https://issues.apache.org/jira/browse/HADOOP-16655
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Hung
Assignee: Jonathan Hung


Hit this error when building via docker:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
hadoop-kms: An Ant BuildException has occured: 
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
[ERROR] around Ant part ...<get skipexisting="true" verbose="true" src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
 @ 5:183 in 
/build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
[ERROR] -> [Help 1] {noformat}
Seems this is caused by HADOOP-16323 which fetches via https.

This should only be an issue in branch-2 since this was removed for KMS in 
HADOOP-13597, and httpfs in HDFS-10860
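
For context, the fix the title describes amounts to flipping the scheme in the 
antrun download step. A minimal sketch of the {{<get>}} task (the {{dest}} 
attribute is assumed; the other attributes are taken from the error output above):

{code}
<!-- hadoop-kms antrun step (sketch): fetch over http to avoid the TLS failure -->
<get skipexisting="true" verbose="true"
     src="http://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"
     dest="downloads/apache-tomcat-8.5.43.tar.gz"/>
{code}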

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-10738:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-13091:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRCs retrievals 
> fail then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
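
A strict check along the lines proposed above might look like the following 
sketch (the method name and exception message are illustrative, not a committed 
fix):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Strict variant: an unavailable checksum fails the copy instead of passing.
public static boolean checksumsAreEqualStrict(FileSystem sourceFS, Path source,
    FileSystem targetFS, Path target) throws IOException {
  // Let any IOException from the filesystems propagate to the caller.
  FileChecksum sourceChecksum = sourceFS.getFileChecksum(source);
  FileChecksum targetChecksum = targetFS.getFileChecksum(target);
  if (sourceChecksum == null || targetChecksum == null) {
    // Filesystems without CRC support fail the strict check rather than
    // silently passing and creating a false sense of security.
    throw new IOException(
        "Checksum unavailable for " + source + " or " + target);
  }
  return sourceChecksum.equals(targetChecksum);
}
{code}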



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16039:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16647:
---
Target Version/s: 3.3.0, 2.10.1  (was: 2.10.0, 3.3.0)

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me whether Linux distros will keep supporting 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, and document the 
> supported openssl versions. Filing this jira to test/document/fix bugs.
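
A quick way to check which OpenSSL a node actually pairs with the native Hadoop 
bits (standard commands; output illustrative):

{noformat}
$ openssl version
OpenSSL 1.1.1  11 Sep 2018
$ hadoop checknative -a | grep openssl
openssl: true /usr/lib/x86_64-linux-gnu/libcrypto.so
{noformat}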



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14176:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors like the following:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found that the distcp configuration overrides 
> mapred-site.xml:
> {code}
> 
> mapred.job.map.memory.mb
> 1024
> 
> 
> mapred.job.reduce.memory.mb
> 1024
> 
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.
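
Until the defaults are removed, one possible stopgap is a per-job override, 
since DistCp accepts generic {{-D}} options (the values here are illustrative):

{noformat}
hadoop distcp -Dmapreduce.map.memory.mb=4096 \
              -Dmapreduce.map.java.opts=-Xmx3276m \
              hdfs://nn1/src hdfs://nn2/dst
{noformat}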



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk

2019-10-07 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945647#comment-16945647
 ] 

Jonathan Hung edited comment on HADOOP-16636 at 10/7/19 7:44 AM:
-

Uploaded a test patch here which includes this change, to verify the issue is 
gone: 
https://issues.apache.org/jira/browse/YARN-9760?focusedCommentId=16945307=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16945307

It seems to fix the compile issue.


was (Author: jhung):
Uploaded a test patch here which includes this change: 
https://issues.apache.org/jira/browse/YARN-9760?focusedCommentId=16945307=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16945307

It seems to fix the compile issue.

> No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling 
> trunk
> ---
>
> Key: HADOOP-16636
> URL: https://issues.apache.org/jira/browse/HADOOP-16636
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16636.001.patch
>
>
> {noformat}
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2 {noformat}
> e.g. here: 
> [https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]
> Not sure exactly what changed here. But some online resources suggest 
> installing protobuf-compiler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk

2019-10-07 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945647#comment-16945647
 ] 

Jonathan Hung commented on HADOOP-16636:


Uploaded a test patch here which includes this change: 
https://issues.apache.org/jira/browse/YARN-9760?focusedCommentId=16945307=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16945307

It seems to fix the compile issue.

> No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling 
> trunk
> ---
>
> Key: HADOOP-16636
> URL: https://issues.apache.org/jira/browse/HADOOP-16636
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16636.001.patch
>
>
> {noformat}
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2 {noformat}
> e.g. here: 
> [https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]
> Not sure exactly what changed here. But some online resources suggest 
> installing protobuf-compiler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk

2019-10-05 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16636:
---
Assignee: Jonathan Hung
  Status: Patch Available  (was: Open)

> No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling 
> trunk
> ---
>
> Key: HADOOP-16636
> URL: https://issues.apache.org/jira/browse/HADOOP-16636
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16636.001.patch
>
>
> {noformat}
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2 {noformat}
> e.g. here: 
> [https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]
> Not sure exactly what changed here. But some online resources suggest 
> installing protobuf-compiler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk

2019-10-05 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16636:
---
Attachment: HADOOP-16636.001.patch

> No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling 
> trunk
> ---
>
> Key: HADOOP-16636
> URL: https://issues.apache.org/jira/browse/HADOOP-16636
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16636.001.patch
>
>
> {noformat}
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2 {noformat}
> e.g. here: 
> [https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]
> Not sure exactly what changed here. But some online resources suggest 
> installing protobuf-compiler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk

2019-10-05 Thread Jonathan Hung (Jira)
Jonathan Hung created HADOOP-16636:
--

 Summary: No rule to make target 
PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk
 Key: HADOOP-16636
 URL: https://issues.apache.org/jira/browse/HADOOP-16636
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Hung


{noformat}
[WARNING] make[1]: Leaving directory 
'/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
[WARNING] Makefile:127: recipe for target 'all' failed
[WARNING] make[2]: *** No rule to make target 
'/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
 needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'.  
Stop.
[WARNING] make[1]: *** 
[main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
[WARNING] make[1]: *** Waiting for unfinished jobs
[WARNING] make: *** [all] Error 2 {noformat}
e.g. here: 
[https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]

Not sure exactly what changed here. But some online resources suggest 
installing protobuf-compiler.
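
On a Debian-based build image, that suggestion translates to something like the 
following (package names assumed; not necessarily the eventual fix):

{noformat}
# Provide protoc so CMake resolves PROTOBUF_PROTOC_EXECUTABLE
apt-get update && apt-get install -y protobuf-compiler libprotobuf-dev
protoc --version   # sanity check
{noformat}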



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16542) Update commons-beanutils version to 1.9.4

2019-10-02 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16542:
---
Fix Version/s: 3.2.2
   3.1.4

> Update commons-beanutils version to 1.9.4
> -
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, 
> HADOOP-16542.003.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class 
> property in PropertyUtilsBean
> by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}
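
For anyone stuck on 1.9.2/1.9.3 in the meantime, the opt-in protection the 
advisory refers to looks roughly like this in application code (the wrapper 
class is hypothetical; 1.9.4 enables the suppression by default):

{code}
import org.apache.commons.beanutils.BeanUtilsBean;
import org.apache.commons.beanutils.SuppressPropertiesBeanIntrospector;

public final class BeanUtilsHardening {
  // Suppress the "class" property so bean introspection cannot be abused
  // to reach the classloader (CVE-2014-0114-style attacks).
  public static BeanUtilsBean hardenedBean() {
    BeanUtilsBean bean = new BeanUtilsBean();
    bean.getPropertyUtils().addBeanIntrospector(
        SuppressPropertiesBeanIntrospector.SUPPRESS_CLASS);
    return bean;
  }

  private BeanUtilsHardening() {}
}
{code}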



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16542) Update commons-beanutils version to 1.9.4

2019-10-02 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943336#comment-16943336
 ] 

Jonathan Hung commented on HADOOP-16542:


Committed to branch-3.2/branch-3.1.

> Update commons-beanutils version to 1.9.4
> -
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, 
> HADOOP-16542.003.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class 
> property in PropertyUtilsBean
> by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-10-02 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943325#comment-16943325
 ] 

Jonathan Hung commented on HADOOP-16588:


Thx [~iwasakims] and [~weichiu]!

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Fix For: 2.10.0
>
> Attachments: HADOOP-16588-branch-2.002.patch, 
> HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-10-02 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943185#comment-16943185
 ] 

Jonathan Hung commented on HADOOP-16588:


Attached 002 patch based on [~iwasakims]'s comment.

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588-branch-2.002.patch, 
> HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-10-02 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16588:
---
Attachment: HADOOP-16588-branch-2.002.patch

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588-branch-2.002.patch, 
> HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-10-01 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942286#comment-16942286
 ] 

Jonathan Hung commented on HADOOP-16588:


Hi [~jojochuang], does this approach sound OK? If so, mind uploading a patch 
for this? Thanks :)

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-09-30 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941380#comment-16941380
 ] 

Jonathan Hung commented on HADOOP-16588:


[~iwasakims]'s suggestion makes sense to me. commons-beanutils-core was removed 
as part of HADOOP-13660, which is an incompatible change, so we can just exclude 
commons-beanutils-core. A sketch of the exclusion follows.
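
A minimal sketch of that approach in the pom (coordinates taken from the 
dependency tree below; exact placement in hadoop-project is assumed):

{code}
<dependency>
  <groupId>commons-configuration</groupId>
  <artifactId>commons-configuration</artifactId>
  <version>1.6</version>
  <exclusions>
    <!-- removed upstream by HADOOP-13660; safe to drop here too -->
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- pin the surviving direct artifact at the patched release -->
<dependency>
  <groupId>commons-beanutils</groupId>
  <artifactId>commons-beanutils</artifactId>
  <version>1.9.4</version>
</dependency>
{code}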

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-30 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941196#comment-16941196
 ] 

Jonathan Hung commented on HADOOP-16544:


Thanks [~iwasakims] for working on this. TestNameNodeHttpServerXFrame passes 
locally for me. +1 for the latest patch.

Once this is committed, should we also commit HADOOP-15849 to branch-3.2 and 
branch-3.1?

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16544-branch-2.001.patch, 
> HADOOP-16544-branch-2.002.patch, HADOOP-16544-branch-2.003.patch, 
> HADOOP-16544-branch-2.004.patch
>
>
> branch-2 pulls in io.netty 3.6.2.Final, which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive, but it deserves 
> some attention.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2019-09-13 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929617#comment-16929617
 ] 

Jonathan Hung commented on HADOOP-12928:


Hi [~ozawa] / [~eddyxu], I see this never got committed to branch-2, any reason 
for that? There was a ticket filed for upgrading netty in branch-2 here: 
HADOOP-16544. Thanks!

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, 
> HADOOP-12928-branch-2.01.patch, HADOOP-12928-branch-2.02.patch, 
> HADOOP-12928.01.patch, HADOOP-12928.02.patch, HADOOP-12928.03.patch, 
> HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-09-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925986#comment-16925986
 ] 

Jonathan Hung commented on HADOOP-16530:


Thanks for working on this [~iwasakims]/[~jojochuang], is this ready to be 
committed to branch-2?

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: only used by offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact of downstream applications though.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921910#comment-16921910
 ] 

Jonathan Hung commented on HADOOP-16544:


Thanks for raising this, [~jojochuang]. I see there was some good discussion 
here: HADOOP-12928

If we upgrade to 3.10.6.Final (or 3.10.5.Final as per HADOOP-12928), we may also 
need to upgrade zookeeper to 3.4.9.

I see HADOOP-12928-branch-2.02.patch never made it to branch-2; we can commit 
that to branch-2 (assuming it still applies).

Thoughts?
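
In pom terms the bump would look roughly like the following (whether branch-2 
manages this via a version property or inline is not shown in this thread, so 
treat it as a sketch):

{code}
<!-- hadoop-project/pom.xml dependencyManagement (sketch) -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.10.6.Final</version>
</dependency>
{code}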

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final, which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive, but it deserves 
> some attention.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-09-03 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921748#comment-16921748
 ] 

Jonathan Hung commented on HADOOP-16439:


Thanks [~iwasakims]/[~jojochuang] for working on this, is this ready to be 
committed?

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16439-branch-2.000.patch, 
> HADOOP-16439-branch-2.001.patch
>
>
> proposed by  [~jojochuang] in mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL 
> in
>  2016. But we did not realize that, three years after Tomcat 6's EOL, a majority
>  of Hadoop users are still on Hadoop 2, and it looks like Hadoop 2 will stay
>  alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
>  How about migrating to Tomcat9?
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15711) Move branch-2 precommit/nightly test builds to java 8

2019-02-08 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-15711:
---
   Resolution: Fixed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.

> Move branch-2 precommit/nightly test builds to java 8
> -
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Fix For: 2.10.0
>
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveals some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> the jenkins builds are failing; I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Move branch-2 precommit/nightly test builds to java 8

2019-02-08 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763959#comment-16763959
 ] 

Jonathan Hung commented on HADOOP-15711:


Thanks Anthony/Arun, will make the changes EOD unless there are any objections.

> Move branch-2 precommit/nightly test builds to java 8
> -
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveals some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> the jenkins builds are failing; I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15711) Move branch-2 precommit/nightly test builds to java 8

2019-02-08 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reassigned HADOOP-15711:
--

Assignee: Jonathan Hung

> Move branch-2 precommit/nightly test builds to java 8
> -
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveals some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> the jenkins builds are failing; I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2019-02-07 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763241#comment-16763241
 ] 

Jonathan Hung commented on HADOOP-15711:


Attached 002 patch:
 * Install openjdk8 in Dockerfile
 * Set default to openjdk7
 * Add -Dhttps.protocols=TLSv1.2 to MAVEN_OPTS (hit an issue similar to 
HBASE-21074):
{noformat}
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin 
org.apache.felix:maven-bundle-plugin:2.5.0 or one of its dependencies could not 
be resolved: Failed to read artifact descriptor for 
org.apache.felix:maven-bundle-plugin:jar:2.5.0 @
@
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]   
[ERROR]   The project org.apache.hadoop:hadoop-main:2.10.0-SNAPSHOT 
(/build/source/pom.xml) has 1 error
[ERROR]     Unresolveable build extension: Plugin 
org.apache.felix:maven-bundle-plugin:2.5.0 or one of its dependencies could not 
be resolved: Failed to read artifact descriptor for 
org.apache.felix:maven-bundle-plugin:jar:2.5.0: Could not transfer artifact 
org.apache.felix:maven-bundle-plugin:pom:2.5.0 from/to central 
(https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version 
-> [Help 2]{noformat}
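
The MAVEN_OPTS change itself is a one-liner; a sketch for the build image 
(exact Dockerfile location assumed):

{noformat}
# Force TLSv1.2 so repo.maven.apache.org accepts the handshake from old JDKs
ENV MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
{noformat}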
Also set these configs in the 
[precommit-HADOOP|https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HADOOP-Build/]
 jenkins job: 
 * 
{noformat}
YETUS_ARGS+=("--java-home=/usr/lib/jvm/java-8-openjdk-amd64")
YETUS_ARGS+=("--multijdkdirs=/usr/lib/jvm/java-7-openjdk-amd64")
YETUS_ARGS+=("--multijdktests=compile"){noformat}
Assuming all goes well, I will set these configs in 
[precommit-HDFS|https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HDFS-Build/],
 
[precommit-YARN|https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-YARN-Build/],
 
[precommit-MAPREDUCE|https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-MAPREDUCE-Build/],
 and the nightly [branch-2 
build|https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86]
 (and re-enable the latter).

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveals some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HADOOP-15711) Fix branch-2 builds

2019-02-07 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-15711:
---
Attachment: HADOOP-15711-branch-2.002.patch

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711-branch-2.002.patch, 
> HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-02-01 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758646#comment-16758646
 ] 

Jonathan Hung commented on HADOOP-16053:


[~ajisakaa], +1 LGTM, thanks

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019, let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2019-01-28 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754287#comment-16754287
 ] 

Jonathan Hung commented on HADOOP-15711:


Thanks Allen and Akira for the thoughts. I agree with the risks. Just started a 
discuss thread on [common|yarn|hdfs]-dev for this.

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2019-01-25 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752594#comment-16752594
 ] 

Jonathan Hung commented on HADOOP-15711:


My proposal is to port HADOOP-14816 and -HADOOP-15610- to branch-2 to use 
openjdk8 in branch-2 instead of openjdk7. Any objections?

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15711) Fix branch-2 builds

2019-01-24 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846
 ] 

Jonathan Hung edited comment on HADOOP-15711 at 1/25/19 3:24 AM:
-

In the qbt runs there are fatal errors in the logs, such as
{noformat}
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (safepoint.cpp:325), pid=30102, tid=140265819887360
#  guarantee(PageArmed == 0) failed: invariant
#
# JRE version: OpenJDK Runtime Environment (7.0_181-b01) (build 1.7.0_181-b01)
# Java VM: OpenJDK 64-Bit Server VM (24.181-b01 mixed mode linux-amd64 
compressed oops)
# Derivative: IcedTea 2.6.14
# Distribution: Ubuntu 14.04 LTS, package 7u181-2.6.14-0ubuntu0.3
# Core dump written. Default location: 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/core or core.30102
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   http://icedtea.classpath.org/bugzilla
#

---  T H R E A D  ---

Current thread (0x7f923c31d800):  VMThread [stack: 
0x7f922e4e5000,0x7f922e5e6000] [id=30122]


Stack: [0x7f922e4e5000,0x7f922e5e6000],  sp=0x7f922e5e4b10,  free 
space=1022k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x966c25]
V  [libjvm.so+0x49b96e]
V  [libjvm.so+0x872b51]
V  [libjvm.so+0x96b69a]
V  [libjvm.so+0x96baf2]
V  [libjvm.so+0x7da992]

VM_Operation (0x7f9210b2b920): RevokeBias, mode: safepoint, requested by 
thread 0x7f923dd0f800

{noformat}
Suspected it might be related to 
[https://bugs.openjdk.java.net/browse/JDK-6869327], so I tried adding 
{{-XX:+UseCountedLoopSafepoints}} to one of the runs, but it didn't seem to do 
anything.

Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off 
branch-2, getting similar results as reported in HDFS-12711. Here's a test run: 
[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/]
 (run with openjdk8) - so at least it appears the unit tests are running to 
completion with openjdk8.


was (Author: jhung):
In the qbt runs there are fatal errors in the logs, such as
{noformat}
---  T H R E A D  ---



Current thread (0x7f3cc031d800):  VMThread [stack: 
0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500]



Stack: [0x7f3ca0dce000,0x7f3ca0ecf000],  sp=0x7f3ca0ecdb10,  free 
space=1022k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

V  [libjvm.so+0x966c25]

V  [libjvm.so+0x49b96e]

V  [libjvm.so+0x872b51]

V  [libjvm.so+0x96b69a]

V  [libjvm.so+0x96baf2]

V  [libjvm.so+0x7da992]



VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by 
thread 0x7f3cc0744800


{noformat}
Suspected it might be related to 
[https://bugs.openjdk.java.net/browse/JDK-6869327], so I tried adding 
{{-XX:+UseCountedLoopSafepoints}} to one of the runs, but it didn't seem to do 
anything.

Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off 
branch-2, getting similar results as reported in HDFS-12711. Here's a test run: 
[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/]
 (run with openjdk8) - so at least it appears the unit tests are running to 
completion with openjdk8.

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> 

[jira] [Comment Edited] (HADOOP-15711) Fix branch-2 builds

2019-01-24 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846
 ] 

Jonathan Hung edited comment on HADOOP-15711 at 1/25/19 3:23 AM:
-

In the qbt runs there are fatal errors in the logs, such as
{noformat}
---  T H R E A D  ---



Current thread (0x7f3cc031d800):  VMThread [stack: 
0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500]



Stack: [0x7f3ca0dce000,0x7f3ca0ecf000],  sp=0x7f3ca0ecdb10,  free 
space=1022k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

V  [libjvm.so+0x966c25]

V  [libjvm.so+0x49b96e]

V  [libjvm.so+0x872b51]

V  [libjvm.so+0x96b69a]

V  [libjvm.so+0x96baf2]

V  [libjvm.so+0x7da992]



VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by 
thread 0x7f3cc0744800


{noformat}
Suspected it might be related to 
[https://bugs.openjdk.java.net/browse/JDK-6869327], so I tried adding 
{{-XX:+UseCountedLoopSafepoints}} to one of the runs, but it didn't seem to do 
anything.

Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off 
branch-2, getting similar results as reported in HDFS-12711. Here's a test run: 
[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/]
 (run with openjdk8) - so at least it appears the unit tests are running to 
completion with openjdk8.


was (Author: jhung):
In the qbt runs there are fatal errors in the logs, such as
{noformat}
---  T H R E A D  ---



Current thread (0x7f3cc031d800):  VMThread [stack: 
0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500]



Stack: [0x7f3ca0dce000,0x7f3ca0ecf000],  sp=0x7f3ca0ecdb10,  free 
space=1022k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

V  [libjvm.so+0x966c25]

V  [libjvm.so+0x49b96e]

V  [libjvm.so+0x872b51]

V  [libjvm.so+0x96b69a]

V  [libjvm.so+0x96baf2]

V  [libjvm.so+0x7da992]



VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by 
thread 0x7f3cc0744800


{noformat}
Suspected it might be related to 
[https://bugs.openjdk.java.net/browse/JDK-6869327], so I tried adding 
{{-XX:+UseCountedLoopSafepoints}} to one of the runs, but it didn't seem to do 
anything.

Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off 
branch-2, getting similar results as reported in HDFS-12711. Here's a test run: 
[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/]
 (run with openjdk8).

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2019-01-24 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846
 ] 

Jonathan Hung commented on HADOOP-15711:


In the qbt runs there are fatal errors in the logs, such as
{noformat}
---  T H R E A D  ---



Current thread (0x7f3cc031d800):  VMThread [stack: 
0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500]



Stack: [0x7f3ca0dce000,0x7f3ca0ecf000],  sp=0x7f3ca0ecdb10,  free 
space=1022k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

V  [libjvm.so+0x966c25]

V  [libjvm.so+0x49b96e]

V  [libjvm.so+0x872b51]

V  [libjvm.so+0x96b69a]

V  [libjvm.so+0x96baf2]

V  [libjvm.so+0x7da992]



VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by 
thread 0x7f3cc0744800


{noformat}
Suspected it might be related to 
[https://bugs.openjdk.java.net/browse/JDK-6869327], so I tried adding 
{{-XX:+UseCountedLoopSafepoints}} to one of the runs, but it didn't seem to do 
anything.

Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off 
branch-2, getting similar results as reported in HDFS-12711. Here's a test run: 
[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/]
 (run with openjdk8).

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2018-12-19 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725270#comment-16725270
 ] 

Jonathan Hung commented on HADOOP-15711:


Sure [~aw], set it back to 5000

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
> Attachments: HADOOP-15711.001.branch-2.patch
>
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2018-09-17 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16618260#comment-16618260
 ] 

Jonathan Hung commented on HADOOP-15711:


[~asuresh]/[~xkrogen]/[~shv] helped me investigate; basically the last email we 
got from hadoop-qbt-branch2-java7-linux-x86 that actually ran unit tests was 
on Feb 26. I see this was committed on Feb 26 to branch-2/branch-2.9 as well: 
{noformat}commit 762125b864ab812512bad9a59344ca79af7f43ac
Author: Chris Douglas 
Date:   Mon Feb 26 16:32:06 2018 -0800

Backport HADOOP-13514 (surefire upgrade) to branch-2{noformat}

I see this was committed to branch-2.8 as well but eventually reverted.

So I am wondering if we can try a test run with this patch reverted so we can 
see the results. [~aw], thoughts on this? Do you know if reverting this will 
cause issues on the jenkins infra?

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2018-09-07 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607451#comment-16607451
 ] 

Jonathan Hung commented on HADOOP-15711:


FYI I am also seeing something similar in hadoop-yarn-client module - 
https://issues.apache.org/jira/browse/YARN-8200?focusedCommentId=16607445=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16607445

> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
>
> Branch-2 builds have been disabled for a while: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/
> A test run here causes hdfs tests to hang: 
> https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/
> Running hadoop-hdfs tests locally reveal some errors such 
> as:{noformat}[ERROR] 
> testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 
> 0.059 s  <<< ERROR!
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
> at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}
> I was able to get more tests passing locally by increasing the max user 
> process count on my machine. But the error suggests that there's an issue in 
> the tests themselves. Not sure if the error seen locally is the same reason 
> as why jenkins builds are failing, I wasn't able to confirm based on the 
> jenkins builds' lack of output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15711) Fix branch-2 builds

2018-08-31 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-15711:
---
Description: 
Branch-2 builds have been disabled for a while: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/

A test run here causes hdfs tests to hang: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/

Running hadoop-hdfs tests locally reveal some errors such as:{noformat}[ERROR] 
testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 0.059 
s  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}

I was able to get more tests passing locally by increasing the max user process 
count on my machine. But the error suggests that there's an issue in the tests 
themselves. Not sure if the error seen locally is the same reason as why 
jenkins builds are failing, I wasn't able to confirm based on the jenkins 
builds' lack of output.

  was:
Branch-2 builds have been disabled for a while: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/

A test run here causes hdfs tests to hang: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/

Running hadoop-hdfs tests locally reveal some errors such as:{noformat}[ERROR] 
testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 0.059 
s  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}

I was able to get more tests passing locally by increasing the max user process 
count on my machine. But the error suggests that there's an issue in the tests 
themselves.


> Fix branch-2 builds
> ---
>
> Key: HADOOP-15711
> URL: https://issues.apache.org/jira/browse/HADOOP-15711
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Jonathan Hung
>Priority: Critical
>
> Branch-2 

[jira] [Created] (HADOOP-15711) Fix branch-2 builds

2018-08-31 Thread Jonathan Hung (JIRA)
Jonathan Hung created HADOOP-15711:
--

 Summary: Fix branch-2 builds
 Key: HADOOP-15711
 URL: https://issues.apache.org/jira/browse/HADOOP-15711
 Project: Hadoop Common
  Issue Type: Task
Reporter: Jonathan Hung


Branch-2 builds have been disabled for a while: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/

A test run here causes hdfs tests to hang: 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/

Running hadoop-hdfs tests locally reveal some errors such as:{noformat}[ERROR] 
testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2)  Time elapsed: 0.059 
s  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489)
at 
org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat}

I was able to get more tests passing locally by increasing the max user process 
count on my machine. But the error suggests that there's an issue in the tests 
themselves.
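
For reference, "unable to create new native thread" is usually bounded by the 
per-user process/thread limit; a sketch of the kind of local change mentioned 
above (the specific limit value is an assumption, not what was actually used):
{noformat}
# show the current per-user process/thread limit
ulimit -u
# raise it for the current shell before running the hadoop-hdfs tests
ulimit -u 16384
{noformat}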



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14891) Remove references to Guava Objects.toStringHelper

2018-06-22 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520830#comment-16520830
 ] 

Jonathan Hung commented on HADOOP-14891:


Thanks Konstantin. I have committed this to branch-2.7.

> Remove references to Guava Objects.toStringHelper
> -
>
> Key: HADOOP-14891
> URL: https://issues.apache.org/jira/browse/HADOOP-14891
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 2.7.8
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.9.0, 2.8.3, 2.7.8
>
> Attachments: HADOOP-14891.001-branch-2.patch
>
>
> User provided a guava 23.0 jar as part of the job submission.
> {code}
> 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service 
> org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
>   at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
>   at java.lang.String.valueOf(String.java:2994)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2658)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
> 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to 
> do a clean initiateStop for Scheduler: [0:TezYarn]
> {code}
> Metrics2 has been relying on the deprecated toStringHelper for some time now, 
> which was finally removed in guava 21.0. Removing the dependency on this 
> method will free up the user to supply their own guava jar again.
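
For context, a minimal sketch of the kind of migration involved (this is not 
the actual HADOOP-14891 patch; the class and field names below are made up):
{code}
import com.google.common.base.MoreObjects;

class RegistryInfoExample {
  private final String name = "rpc";
  private final int port = 8020;

  // Before (compiles only against Guava < 21.0, where
  // com.google.common.base.Objects.toStringHelper still existed):
  //   return Objects.toStringHelper(this).add("name", name).add("port", port).toString();

  // Option A: Guava's replacement API, present since Guava 18:
  public String toStringWithGuava() {
    return MoreObjects.toStringHelper(this)
        .add("name", name)
        .add("port", port)
        .toString();
  }

  // Option B: plain JDK string building, which removes the Guava dependency
  // from toString() entirely, so any user-supplied Guava jar works:
  @Override
  public String toString() {
    return getClass().getSimpleName() + "{name=" + name + ", port=" + port + "}";
  }
}
{code}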



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14891) Remove references to Guava Objects.toStringHelper

2018-06-22 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14891:
---
Fix Version/s: 2.7.8

> Remove references to Guava Objects.toStringHelper
> -
>
> Key: HADOOP-14891
> URL: https://issues.apache.org/jira/browse/HADOOP-14891
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 2.7.8
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.9.0, 2.8.3, 2.7.8
>
> Attachments: HADOOP-14891.001-branch-2.patch
>
>
> User provided a guava 23.0 jar as part of the job submission.
> {code}
> 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service 
> org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
>   at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
>   at java.lang.String.valueOf(String.java:2994)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2658)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
> 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to 
> do a clean initiateStop for Scheduler: [0:TezYarn]
> {code}
> Metrics2 has been relying on the deprecated toStringHelper for some time now, 
> which was finally removed in guava 21.0. Removing the dependency on this 
> method will free up the user to supply their own guava jar again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14891) Remove references to Guava Objects.toStringHelper

2018-06-21 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14891:
---
Affects Version/s: 2.7.8

> Remove references to Guava Objects.toStringHelper
> -
>
> Key: HADOOP-14891
> URL: https://issues.apache.org/jira/browse/HADOOP-14891
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 2.7.8
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.9.0, 2.8.3
>
> Attachments: HADOOP-14891.001-branch-2.patch
>
>
> User provided a guava 23.0 jar as part of the job submission.
> {code}
> 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service 
> org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
>   at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
>   at java.lang.String.valueOf(String.java:2994)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2658)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
> 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to 
> do a clean initiateStop for Scheduler: [0:TezYarn]
> {code}
> Metrics2 has been relying on the deprecated toStringHelper for some time now, 
> which was finally removed in guava 21.0. Removing the dependency on this 
> method will free up the user to supply their own guava jar again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14891) Remove references to Guava Objects.toStringHelper

2018-06-21 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519786#comment-16519786
 ] 

Jonathan Hung commented on HADOOP-14891:


Hey [~shv], planning on committing this to branch-2.7. If it looks good to you 
then I will commit it. I checked and it applies cleanly. Thanks!

> Remove references to Guava Objects.toStringHelper
> -
>
> Key: HADOOP-14891
> URL: https://issues.apache.org/jira/browse/HADOOP-14891
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 2.7.8
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.9.0, 2.8.3
>
> Attachments: HADOOP-14891.001-branch-2.patch
>
>
> User provided a guava 23.0 jar as part of the job submission.
> {code}
> 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service 
> org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
> org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
>   at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
>   at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
>   at 
> org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
>   at java.lang.String.valueOf(String.java:2994)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74)
>   at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
>   at org.apache.hadoop.ipc.Server.(Server.java:2658)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
>   at 
> org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
>   at 
> org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
> 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to 
> do a clean initiateStop for Scheduler: [0:TezYarn]
> {code}
> Metrics2 has been relying on the deprecated toStringHelper for some time
> now; it was finally removed in Guava 21.0. Removing the dependency on this
> method will free users up to supply their own Guava jar again.
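
For readers hitting the same NoSuchMethodError, the shape of the fix is to stop
calling the removed Guava helper and build the string without it. Below is a
minimal sketch of that migration; the class and field names are hypothetical
stand-ins for a metrics-like class, not the committed Hadoop patch:

{code}
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a metrics class; names are illustrative only.
class RegistryInfo {
  private final String name;
  private final List<String> tags;

  RegistryInfo(String name, List<String> tags) {
    this.name = name;
    this.tags = tags;
  }

  // Before (compiles only against Guava < 21, where the helper still exists):
  //   return Objects.toStringHelper(this)
  //       .add("name", name).add("tags", tags).toString();
  // After: a plain StringBuilder, so any Guava version (or none) works.
  @Override
  public String toString() {
    return new StringBuilder(getClass().getSimpleName())
        .append("{name=").append(name)
        .append(", tags=").append(tags)
        .append('}').toString();
  }

  public static void main(String[] args) {
    System.out.println(new RegistryInfo("rpc", Arrays.asList("port=8020")));
  }
}
{code}

Dropping the helper entirely, rather than switching to MoreObjects.toStringHelper,
avoids pinning the code to any particular Guava release.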



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-05 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153996#comment-16153996
 ] 

Jonathan Hung commented on HADOOP-14828:


Yes, it seems so. Will close this as a duplicate, thanks.

> RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time
> -
>
> Key: HADOOP-14828
> URL: https://issues.apache.org/jira/browse/HADOOP-14828
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>
> In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a
> RetryUpToMaximumCountWithFixedSleep, whose retry count is maxTime / sleepTime:
> {noformat}
> public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long sleepTime,
>     TimeUnit timeUnit) {
>   super((int) (maxTime / sleepTime), sleepTime, timeUnit);
>   this.maxTime = maxTime;
>   this.timeUnit = timeUnit;
> }
> {noformat}
> But if individual retries take a long time, then the maxTime passed to
> RetryUpToMaximumTimeWithFixedSleep is exceeded.
> As an example, while doing NM restarts, we saw an issue where the NMProxy
> creates a retry policy which specifies a maximum wait time of 15 minutes and
> a 10 sec interval (which is converted to a MaximumCount policy with
> 15 min / 10 sec = 90 tries). But each NMProxy retry policy invokes
> o.a.h.ipc.Client's retry policy:
> {noformat}
> if (connectionRetryPolicy == null) {
>   final int max = conf.getInt(
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
>   final int retryInterval = conf.getInt(
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
>   connectionRetryPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
>       max, retryInterval, TimeUnit.MILLISECONDS);
> }
> {noformat}
> So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec
> NMProxy interval + o.a.h.ipc.Client retry time). In the default case, the ipc
> client retries 10 times with a 1 sec interval, meaning the time it takes for
> NMProxy to fail is (90) * (10 sec + 10 sec) = 30 min instead of the 15 min
> specified by the NMProxy configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-05 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung resolved HADOOP-14828.

Resolution: Duplicate

> RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time
> -
>
> Key: HADOOP-14828
> URL: https://issues.apache.org/jira/browse/HADOOP-14828
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>
> In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a
> RetryUpToMaximumCountWithFixedSleep, whose retry count is maxTime / sleepTime:
> {noformat}
> public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long sleepTime,
>     TimeUnit timeUnit) {
>   super((int) (maxTime / sleepTime), sleepTime, timeUnit);
>   this.maxTime = maxTime;
>   this.timeUnit = timeUnit;
> }
> {noformat}
> But if individual retries take a long time, then the maxTime passed to
> RetryUpToMaximumTimeWithFixedSleep is exceeded.
> As an example, while doing NM restarts, we saw an issue where the NMProxy
> creates a retry policy which specifies a maximum wait time of 15 minutes and
> a 10 sec interval (which is converted to a MaximumCount policy with
> 15 min / 10 sec = 90 tries). But each NMProxy retry policy invokes
> o.a.h.ipc.Client's retry policy:
> {noformat}
> if (connectionRetryPolicy == null) {
>   final int max = conf.getInt(
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
>   final int retryInterval = conf.getInt(
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
>       CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
>   connectionRetryPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
>       max, retryInterval, TimeUnit.MILLISECONDS);
> }
> {noformat}
> So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec
> NMProxy interval + o.a.h.ipc.Client retry time). In the default case, the ipc
> client retries 10 times with a 1 sec interval, meaning the time it takes for
> NMProxy to fail is (90) * (10 sec + 10 sec) = 30 min instead of the 15 min
> specified by the NMProxy configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-01 Thread Jonathan Hung (JIRA)
Jonathan Hung created HADOOP-14828:
--

 Summary: RetryUpToMaximumTimeWithFixedSleep is not bounded by 
maximum time
 Key: HADOOP-14828
 URL: https://issues.apache.org/jira/browse/HADOOP-14828
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Hung


In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a
RetryUpToMaximumCountWithFixedSleep, whose retry count is maxTime / sleepTime:
{noformat}
public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long sleepTime,
    TimeUnit timeUnit) {
  super((int) (maxTime / sleepTime), sleepTime, timeUnit);
  this.maxTime = maxTime;
  this.timeUnit = timeUnit;
}
{noformat}
But if individual retries take a long time, then the maxTime passed to
RetryUpToMaximumTimeWithFixedSleep is exceeded.

As an example, while doing NM restarts, we saw an issue where the NMProxy
creates a retry policy which specifies a maximum wait time of 15 minutes and a
10 sec interval (which is converted to a MaximumCount policy with
15 min / 10 sec = 90 tries). But each NMProxy retry policy invokes
o.a.h.ipc.Client's retry policy:
{noformat}
if (connectionRetryPolicy == null) {
  final int max = conf.getInt(
      CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
      CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
  final int retryInterval = conf.getInt(
      CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
      CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
  connectionRetryPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
      max, retryInterval, TimeUnit.MILLISECONDS);
}
{noformat}
So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec
NMProxy interval + o.a.h.ipc.Client retry time). In the default case, the ipc
client retries 10 times with a 1 sec interval, meaning the time it takes for
NMProxy to fail is (90) * (10 sec + 10 sec) = 30 min instead of the 15 min
specified by the NMProxy configuration.
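
To make the arithmetic above concrete, here is a self-contained sketch (plain
Java, not Hadoop code; the numbers mirror the defaults described in this
report) of how the inner ipc.Client retries inflate the outer policy's
effective deadline:

{code}
import java.util.concurrent.TimeUnit;

// Self-contained illustration of the arithmetic in this report; not Hadoop code.
public class NestedRetryMath {
  public static void main(String[] args) {
    long maxTimeMin = 15;        // NMProxy maximum wait time
    long outerSleepSec = 10;     // NMProxy retry interval
    // RetryUpToMaximumTimeWithFixedSleep converts the time budget to a count:
    long outerRetries = TimeUnit.MINUTES.toSeconds(maxTimeMin) / outerSleepSec; // 90

    long innerRetries = 10;      // ipc.Client default max retries
    long innerSleepSec = 1;      // ipc.Client default retry interval
    // Each outer attempt pays the inner retries plus its own sleep:
    long perAttemptSec = outerSleepSec + innerRetries * innerSleepSec; // 20 sec

    long actualMin = TimeUnit.SECONDS.toMinutes(outerRetries * perAttemptSec);
    System.out.println("configured max: " + maxTimeMin
        + " min, effective: " + actualMin + " min"); // 15 min vs 30 min
  }
}
{code}

The count-based conversion only bounds the number of attempts, not the time
each attempt takes, which is why nesting retry policies breaks the time bound.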



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-12825:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Re-closing, since this was originally required for HADOOP-14463.

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 3.0.0-alpha1, 2.7.2, 2.8.0
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch, HADOOP-12825-branch-2.7.001.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.
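
For illustration, the instrumentation the description suggests can be sketched
as a timed wrapper around the JDK lookup. This is a minimal sketch only: the
class name, threshold, and logger below are assumptions, not the committed
patch.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.logging.Logger;

// Hypothetical sketch of timing a name resolution and logging slow calls.
public class SlowResolutionLogger {
  private static final Logger LOG = Logger.getLogger("SlowResolutionLogger");
  private static final long SLOW_LOOKUP_THRESHOLD_MS = 1000; // assumed threshold

  public static InetAddress getByName(String host) throws UnknownHostException {
    long start = System.nanoTime();
    try {
      return InetAddress.getByName(host);
    } finally {
      // The finally block runs whether the lookup succeeds or throws,
      // so slow failures are logged too.
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      if (elapsedMs > SLOW_LOOKUP_THRESHOLD_MS) {
        LOG.warning("Slow name resolution for " + host + ": " + elapsedMs + " ms");
      }
    }
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(getByName("localhost"));
  }
}
{code}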



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

TestAMRMClient passed locally here:
https://issues.apache.org/jira/browse/YARN-4925?focusedCommentId=16030530&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030530
So closing as Won't Fix.

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: HADOOP-12954-branch-2.8.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030259#comment-16030259
 ] 

Jonathan Hung commented on HADOOP-12825:


Btw, there was a "conflict": when applying to branch-2.7, it couldn't find the
{{import javax.annotation.Nullable;}} line, which is why I added a separate
patch. Otherwise it applies cleanly.

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch, HADOOP-12825-branch-2.7.001.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030079#comment-16030079
 ] 

Jonathan Hung commented on HADOOP-12825:


[~shv] it seems this was never committed to branch-2.7, but the fix version was
set to 2.7.2. I attached a branch-2.7 patch. Can we get this committed? Thanks!

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch, HADOOP-12825-branch-2.7.001.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-12825:
---
Attachment: HADOOP-12825-branch-2.7.001.patch

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch, HADOOP-12825-branch-2.7.001.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-12825:
---
Status: Patch Available  (was: Reopened)

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 3.0.0-alpha1, 2.7.2, 2.8.0
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch, HADOOP-12825-branch-2.7.001.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reopened HADOOP-12825:


> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Status: Patch Available  (was: Open)

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: HADOOP-12954-branch-2.8.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Attachment: HADOOP-12954-branch-2.8.001.patch

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: HADOOP-12954-branch-2.8.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Status: Open  (was: Patch Available)

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Status: Patch Available  (was: Open)

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14463) Port HADOOP-12954 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14463:
---
Status: Open  (was: Patch Available)

> Port HADOOP-12954 to branch-2.8, branch-2.7
> ---
>
> Key: HADOOP-14463
> URL: https://issues.apache.org/jira/browse/HADOOP-14463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


