[jira] [Commented] (HADOOP-16670) Stripping Submarine code from Hadoop codebase.

2019-10-24 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959310#comment-16959310
 ] 

Wangda Tan commented on HADOOP-16670:
-

+1,

> Stripping Submarine code from Hadoop codebase.
> --
>
> Key: HADOOP-16670
> URL: https://issues.apache.org/jira/browse/HADOOP-16670
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Now that Submarine is getting out of Hadoop and has its own repo, it's time 
> to strip the Submarine code from the Hadoop codebase in Hadoop 3.3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-16031:

Target Version/s: 3.3.0, 3.2.1, 3.1.3  (was: 3.3.0, 3.1.2, 3.2.1)

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
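> A minimal sketch of the failure mode, assuming rules are installed via 
> {{KerberosName.setRules}} (the realm and rule below are made up for 
> illustration; the attached patch may fix this differently):
> {code:java}
> // Hypothetical illustration: getShortName() throws NoMatchingRule unless
> // some auth_to_local rule matches the principal, so rules must be set
> // before short-name resolution.
> import org.apache.hadoop.security.authentication.util.KerberosName;
> 
> public class KerberosNameRuleExample {
>   public static void main(String[] args) throws Exception {
>     // Map any zookeeper/<host>@EXAMPLE.COM principal to the short name
>     // "zookeeper"; fall back to the default rule otherwise.
>     KerberosName.setRules(
>         "RULE:[2:$1/$2@$0](zookeeper/.*@EXAMPLE.COM)s/.*/zookeeper/\n"
>             + "DEFAULT");
>     KerberosName name = new KerberosName("zookeeper/localhost@EXAMPLE.COM");
>     System.out.println(name.getShortName());  // prints "zookeeper"
>   }
> }
> {code}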



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-16016:

Target Version/s: 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3  (was: 3.0.4, 
3.3.0, 3.1.2, 2.8.6, 3.2.1, 2.9.3)

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or higher
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch, 
> HADOOP-16016.03.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15922:

Target Version/s: 3.3.0, 3.2.1, 3.1.3  (was: 3.3.0, 3.1.2, 3.2.1)

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, 
> HADOOP-15922.006.patch, HADOOP-15922.007.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user sent by the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in 
> the URL, which is encoded by {{URLEncoder}} at the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts for a doAsUser, it will put {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user. As a result, the KMS server gets the wrong proxy user whenever the 
> proxy user is a complete Kerberos name or includes special characters, and 
> subsequent authentication and authorization exceptions are thrown.
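> A minimal sketch of the missing server-side counterpart, assuming the filter 
> can decode the parameter before use (names here are illustrative, not the 
> actual patch):
> {code:java}
> // Hypothetical helper: decode the doAs query parameter to mirror the
> // URLEncoder.encode() call on the client, so "user%2Fhost%40REALM"
> // becomes "user/host@REALM" again before proxy-user checks run.
> import java.io.UnsupportedEncodingException;
> import java.net.URLDecoder;
> 
> public final class DoAsParam {
>   public static String decodeDoAs(String rawDoAs)
>       throws UnsupportedEncodingException {
>     if (rawDoAs == null) {
>       return null;
>     }
>     return URLDecoder.decode(rawDoAs, "UTF-8");
>   }
> }
> {code}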



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15870:

Target Version/s: 2.8.5, 2.9.1, 3.2.0, 3.0.4, 3.1.3  (was: 2.9.1, 3.2.0, 
2.8.5, 3.0.4, 3.1.2)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.
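> A one-line sketch of the intended behavior, assuming fields named 
> {{contentLength}} and {{nextReadPos}} as in S3AInputStream (illustrative, 
> not the actual patch):
> {code:java}
> // Compute the remaining bytes from the position the next read will use,
> // so the value stays correct after seek() even before any read happens.
> public synchronized long remainingInFile() {
>   return this.contentLength - this.nextReadPos;
> }
> {code}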



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15747) warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15747:

Target Version/s: 2.10.0, 3.0.4, 3.1.3  (was: 2.10.0, 3.0.4, 3.1.2)

> warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal
> ---
>
> Key: HADOOP-15747
> URL: https://issues.apache.org/jira/browse/HADOOP-15747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.9.1, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Now that HADOOP-14833 has removed user:secret from URIs, change the warning 
> printed when people do that to explicitly declare Hadoop 3.3 as when the 
> removal will happen. Do the same in the docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15890) Some S3A committer tests don't match ITest* pattern; don't run in maven

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15890:

Target Version/s: 3.2.1, 3.1.3  (was: 3.1.2, 3.2.1)

> Some S3A committer tests don't match ITest* pattern; don't run in maven
> ---
>
> Key: HADOOP-15890
> URL: https://issues.apache.org/jira/browse/HADOOP-15890
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Some of the S3A committer tests don't have the right prefix for the maven IT 
> test runs to pick them up:
> {code}
> ITMagicCommitMRJob.java
> ITStagingCommitMRJobBad
> ITDirectoryCommitMRJob
> ITStagingCommitMRJob
> {code}
> They all work when run by name or in the IDE (which is where I developed 
> them), but they don't run in maven builds.
> Fix: rename. There are some new tests in branch-3.2 from HADOOP-15107 which 
> aren't in 3.1; need patches for both.
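> For context, a sketch of the kind of failsafe configuration involved (the 
> include pattern below is an assumption based on the description, not a quote 
> of the actual pom):
> {code:xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-failsafe-plugin</artifactId>
>   <configuration>
>     <includes>
>       <!-- Only ITest*-named classes run; IT* without "Test" is skipped. -->
>       <include>**/ITest*.java</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}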



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15781:

Target Version/s: 3.2.0, 3.1.3  (was: 3.2.0, 3.1.2)

> S3A assumed role tests failing due to changed error text in AWS exceptions
> --
>
> Key: HADOOP-15781
> URL: https://issues.apache.org/jira/browse/HADOOP-15781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.0, 3.2.0
> Environment: some of the fault-catching tests in {{ITestAssumeRole}} 
> are failing as the SDK update of HADOOP-15642 changed the text. Fix the 
> tests, perhaps by removing the text check entirely 
> —it's clearly too brittle
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15781-001.patch, HADOOP-15781-branch-3.1-002.patch
>
>
> This is caused by HADOOP-15642 but I'd missed it because I'd been playing 
> with assumed roles locally (restricting their rights) and mistook the 
> failures for "steve's misconfigured the test role", not "the SDK error text 
> changed".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15602:

Target Version/s: 3.1.3  (was: 3.1.2)

> Support SASL Rpc request handling in separate Handlers 
> ---
>
> Key: HADOOP-15602
> URL: https://issues.apache.org/jira/browse/HADOOP-15602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15602.01.patch
>
>
> Right now, during RPC connection establishment, all SASL requests are 
> treated as out-of-band requests and handled within the same Reader thread.
> SASL handling involves authentication with Kerberos and SecretManagers (for 
> token validation). During this time the Reader thread is blocked, which 
> blocks all incoming RPC requests on other established connections. Some 
> SecretManager implementations need to communicate with external systems 
> (e.g., ZooKeeper) for verification.
> Handling SASL RPC in separate dedicated handlers would enable Reader threads 
> to read RPC requests from established connections without blocking.
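> A minimal sketch of the proposed separation, with hypothetical names (the 
> attached patch may structure this differently):
> {code:java}
> // Hypothetical sketch: the Reader thread hands SASL negotiation off to a
> // dedicated pool instead of running it inline, so it can keep reading RPC
> // requests from the other connections it serves.
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> 
> class SaslDispatcher {
>   // Pool size is illustrative only.
>   private final ExecutorService saslHandlers = Executors.newFixedThreadPool(4);
> 
>   void onSaslRequest(Runnable saslNegotiation) {
>     // Previously: saslNegotiation.run() inline on the Reader thread,
>     // blocking every connection served by that Reader.
>     saslHandlers.submit(saslNegotiation);
>   }
> }
> {code}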



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15501) [Umbrella] Upgrade efforts to Hadoop 3.x

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15501:

Target Version/s: 3.2.0, 3.1.3  (was: 3.2.0, 3.1.2)

> [Umbrella] Upgrade efforts to Hadoop 3.x
> 
>
> Key: HADOOP-15501
> URL: https://issues.apache.org/jira/browse/HADOOP-15501
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Sunil Govindan
>Priority: Major
>
> This is an umbrella ticket to track all related efforts to close the gaps 
> for upgrading to 3.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15398) StagingTestBase uses methods not available in Mockito 1.8.5

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15398:

Target Version/s: 3.2.0, 3.1.3  (was: 3.2.0, 3.1.2)

> StagingTestBase uses methods not available in Mockito 1.8.5
> ---
>
> Key: HADOOP-15398
> URL: https://issues.apache.org/jira/browse/HADOOP-15398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Attachments: HADOOP-15398.001.patch, HADOOP-15398.002.patch
>
>
> *Problem:* hadoop trunk compilation is failing.
>  *Root Cause:*
>  The compilation error comes from 
> {{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}: "The method 
> getArgumentAt(int, Class) is 
> undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, 
> which is not available in mockito-all 1.8.5; that method is 
> available only from version 2.0.0-beta onwards.
> *Expectations:*
>  Either the mockito-all version should be upgraded, or the test case should 
> be written using only the functions available in 1.8.5.
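> A sketch of the second option, rewriting the stub with only 1.8.5 API (the 
> surrounding types are illustrative):
> {code:java}
> import org.mockito.invocation.InvocationOnMock;
> import org.mockito.stubbing.Answer;
> 
> Answer<String> firstArg = new Answer<String>() {
>   @Override
>   public String answer(InvocationOnMock invocation) {
>     // Mockito >= 2.0.0-beta: invocation.getArgumentAt(0, String.class)
>     // mockito-all 1.8.5 equivalent: index into getArguments() and cast.
>     return (String) invocation.getArguments()[0];
>   }
> };
> {code}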



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15205:

Target Version/s: 3.1.3  (was: 3.1.2)

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0, 3.1.0, 3.0.1
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: chk.bash
>
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 
> that artifact has not been present at Maven Central. The last release that 
> had source attachments / javadocs appears to be 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> And also for hadoop-common:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.1.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-11-16 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15124:

Target Version/s: 2.10.0, 3.2.0, 3.0.4, 2.8.6, 2.9.3, 3.1.3  (was: 2.10.0, 
3.2.0, 3.0.4, 3.1.2, 2.8.6, 2.9.3)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google 
> Dataproc, 2 workers, GCS connector) I saw that FileSystem.Statistics code 
> paths accounted for 5.58% of wall time and 26.5% of CPU time of the total 
> execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching 
> to LongAdder simplifies the code and reduces its complexity.
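> A minimal sketch of the shape of the change (illustrative, not the attached 
> patch):
> {code:java}
> import java.util.concurrent.atomic.LongAdder;
> 
> class StatisticsCounter {
>   // Before (contended): a plain long guarded by synchronization, or an
>   // AtomicLong that CASes on a single field from every thread.
>   private final LongAdder bytesRead = new LongAdder();
> 
>   void incrementBytesRead(long n) {
>     bytesRead.add(n);       // per-thread cell; hot write paths stop contending
>   }
> 
>   long getBytesRead() {
>     return bytesRead.sum(); // aggregation cost moves to the (rarer) reads
>   }
> }
> {code}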



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-19 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657075#comment-16657075
 ] 

Wangda Tan commented on HADOOP-15832:
-

Thanks [~rkanter]!

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from 2011! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.
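> The replacement the description points to would look roughly like this (the 
> coordinates follow the *-jdk15on / 1.60 recommendation above):
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk15on</artifactId>
>   <version>1.60</version>
>   <scope>test</scope>
> </dependency>
> {code}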



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655763#comment-16655763
 ] 

Wangda Tan commented on HADOOP-15832:
-

[~rkanter], should we revert the two problematic JIRAs while doing the 
investigation? We should not break trunk for too long.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from 2011! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-05 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15820:

Priority: Blocker  (was: Major)

> ZStandardDecompressor native code sets an integer field as a long
> -
>
> Key: HADOOP-15820
> URL: https://issues.apache.org/jira/browse/HADOOP-15820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15820.001.patch
>
>
> Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
> ZStandardDecompressor.c sets the {{remaining}} field as a long when it 
> actually is an integer.
> Kudos to Ben Lau from our HBase team for discovering this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15501) [Umbrella] Upgrade efforts to Hadoop 3.x

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15501:

Target Version/s: 3.2.0, 3.1.2  (was: 3.2.0)

> [Umbrella] Upgrade efforts to Hadoop 3.x
> 
>
> Key: HADOOP-15501
> URL: https://issues.apache.org/jira/browse/HADOOP-15501
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Sunil Govindan
>Priority: Major
>
> This is an umbrella ticket to track all related efforts to close the gaps 
> for upgrading to 3.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15602:

Target Version/s: 3.1.2  (was: 3.1.1)

> Support SASL Rpc request handling in separate Handlers 
> ---
>
> Key: HADOOP-15602
> URL: https://issues.apache.org/jira/browse/HADOOP-15602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15602.01.patch
>
>
> Right now, during RPC connection establishment, all SASL requests are 
> treated as out-of-band requests and handled within the same Reader thread.
> SASL handling involves authentication with Kerberos and SecretManagers (for 
> token validation). During this time the Reader thread is blocked, which 
> blocks all incoming RPC requests on other established connections. Some 
> SecretManager implementations need to communicate with external systems 
> (e.g., ZooKeeper) for verification.
> Handling SASL RPC in separate dedicated handlers would enable Reader threads 
> to read RPC requests from established connections without blocking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15205:

Target Version/s: 3.1.2  (was: 3.1.1)

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0, 3.1.0, 3.0.1
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 
> that artifact has not been present at Maven Central. The last release that 
> had source attachments / javadocs appears to be 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> And also for hadoop-common:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.1.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15501) [Umbrella] Upgrade efforts to Hadoop 3.x

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15501:

Target Version/s: 3.2.0  (was: 3.2.0, 3.1.1)

> [Umbrella] Upgrade efforts to Hadoop 3.x
> 
>
> Key: HADOOP-15501
> URL: https://issues.apache.org/jira/browse/HADOOP-15501
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Sunil Govindan
>Priority: Major
>
> This is an umbrella ticket to track all related efforts to close the gaps 
> for upgrading to 3.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15398) StagingTestBase uses methods not available in Mockito 1.8.5

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15398:

Target Version/s: 3.2.0, 3.1.2  (was: 3.2.0, 3.1.1)

> StagingTestBase uses methods not available in Mockito 1.8.5
> ---
>
> Key: HADOOP-15398
> URL: https://issues.apache.org/jira/browse/HADOOP-15398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Attachments: HADOOP-15398.001.patch, HADOOP-15398.002.patch
>
>
> *Problem:* hadoop trunk compilation is failing.
>  *Root Cause:*
>  The compilation error comes from 
> {{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}: "The method 
> getArgumentAt(int, Class) is 
> undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, 
> which is not available in mockito-all 1.8.5; that method is 
> available only from version 2.0.0-beta onwards.
> *Expectations:*
>  Either the mockito-all version should be upgraded, or the test case should 
> be written using only the functions available in 1.8.5.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15124:

Target Version/s: 3.0.3, 3.1.0, 2.6.6, 2.10.0, 3.2.0, 2.9.2, 2.8.5, 2.7.8, 
3.0.4, 3.1.2  (was: 2.6.6, 3.1.0, 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.8.5, 
2.7.8, 3.0.4)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google 
> Dataproc, 2 workers, GCS connector) I saw that FileSystem.Statistics code 
> paths accounted for 5.58% of wall time and 26.5% of CPU time of the total 
> execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching 
> to LongAdder simplifies the code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE

2018-07-31 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16564259#comment-16564259
 ] 

Wangda Tan commented on HADOOP-15593:
-

Cherry-picked to branch-3.1.1.

> UserGroupInformation TGT renewer throws NPE
> ---
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, 
> HADOOP-15593.003.patch, HADOOP-15593.004.patch, HADOOP-15593.005.patch
>
>
> Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within 
> an exception handler so the original exception was hidden, though it's likely 
> caused by expired tgt.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught 
> exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
> at 
> javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
> at java.lang.Thread.run(Thread.java:748){noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889].
> The relevant code was added in HADOOP-13590. File this jira to handle the 
> exception better.
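> A defensive sketch of how the renewer could guard the call (illustrative 
> only; the helpers stand in for whatever the patch actually uses):
> {code:java}
> // On affected JDKs (JDK-8154889) getEndTime() can throw NPE once a ticket
> // has been destroyed, so check the ticket before computing the next renewal
> // instead of letting the renewer thread die.
> KerberosTicket tgt = getTGT();                  // assumed helper
> if (tgt == null || tgt.isDestroyed() || tgt.getEndTime() == null) {
>   reloginFromTicketCache();                     // assumed recovery path
>   return;
> }
> long nextRefresh = getRefreshTime(tgt);         // safe: ticket is live here
> {code}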



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE

2018-07-31 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15593:

Fix Version/s: (was: 3.1.2)
   3.1.1

> UserGroupInformation TGT renewer throws NPE
> ---
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, 
> HADOOP-15593.003.patch, HADOOP-15593.004.patch, HADOOP-15593.005.patch
>
>
> Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within 
> an exception handler so the original exception was hidden, though it's likely 
> caused by expired tgt.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught 
> exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
> at 
> javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
> at java.lang.Thread.run(Thread.java:748){noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889].
> The relevant code was added in HADOOP-13590. File this jira to handle the 
> exception better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE

2018-07-30 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16562361#comment-16562361
 ] 

Wangda Tan commented on HADOOP-15593:
-

Updated the fix version to 3.1.2 given this doesn't exist in branch-3.1.1.
But I think it is critical to get it backported to branch-3.1.1; I'm going to 
do this in a couple of hours, please let me know if you think differently.

cc: [~gabor.bota], [~eyang], [~xiaochen]

> UserGroupInformation TGT renewer throws NPE
> ---
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Blocker
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, 
> HADOOP-15593.003.patch, HADOOP-15593.004.patch, HADOOP-15593.005.patch
>
>
> Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within 
> an exception handler so the original exception was hidden, though it's likely 
> caused by expired tgt.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught 
> exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
> at 
> javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
> at java.lang.Thread.run(Thread.java:748){noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889].
> The relevant code was added in HADOOP-13590. File this jira to handle the 
> exception better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15593:

Fix Version/s: (was: 3.1.1)
   3.1.2

> UserGroupInformation TGT renewer throws NPE
> ---
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Blocker
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, 
> HADOOP-15593.003.patch, HADOOP-15593.004.patch, HADOOP-15593.005.patch
>
>
> Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within 
> an exception handler so the original exception was hidden, though it's likely 
> caused by expired tgt.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught 
> exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
> at 
> javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
> at java.lang.Thread.run(Thread.java:748){noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889].
> The relevant code was added in HADOOP-13590. File this jira to handle the 
> exception better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15637) LocalFs#listLocatedStatus does not filter out hidden .crc files

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-15637:
-

[~xkrogen], [~vagarychen], could you help check the comments from 
[~bibinchundatt] below? 

Also updated the fix version to 3.1.2 given this doesn't exist in branch-3.1.1.

> LocalFs#listLocatedStatus does not filter out hidden .crc files
> ---
>
> Key: HADOOP-15637
> URL: https://issues.apache.org/jira/browse/HADOOP-15637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.2.0, 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15637.000.patch
>
>
> After HADOOP-7165, {{LocalFs#listLocatedStatus}} incorrectly returns the 
> hidden {{.crc}} files used to store checksum information. This is because 
> HADOOP-7165 implemented {{listLocatedStatus}} on {{FilterFs}}, so the default 
> implementation is no longer used, and {{FilterFs}} directly calls the raw FS 
> since {{listLocatedStatus}} is not defined in {{ChecksumFs}}.
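> One possible shape of the fix (a sketch, assuming a helper like 
> {{isChecksumFile}}; the actual patch may differ): define 
> {{listLocatedStatus}} in {{ChecksumFs}} so checksum files are filtered again.
> {code:java}
> @Override
> public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f)
>     throws IOException {
>   final RemoteIterator<LocatedFileStatus> raw = getMyFs().listLocatedStatus(f);
>   return new RemoteIterator<LocatedFileStatus>() {
>     private LocatedFileStatus next;
> 
>     @Override
>     public boolean hasNext() throws IOException {
>       while (next == null && raw.hasNext()) {
>         LocatedFileStatus status = raw.next();
>         if (!isChecksumFile(status.getPath())) {  // assumed helper
>           next = status;                          // keep non-.crc entries only
>         }
>       }
>       return next != null;
>     }
> 
>     @Override
>     public LocatedFileStatus next() throws IOException {
>       if (!hasNext()) {
>         throw new java.util.NoSuchElementException();
>       }
>       LocatedFileStatus result = next;
>       next = null;
>       return result;
>     }
>   };
> }
> {code}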



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15637) LocalFs#listLocatedStatus does not filter out hidden .crc files

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15637:

Fix Version/s: (was: 3.1.1)
   3.1.2

> LocalFs#listLocatedStatus does not filter out hidden .crc files
> ---
>
> Key: HADOOP-15637
> URL: https://issues.apache.org/jira/browse/HADOOP-15637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.2.0, 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15637.000.patch
>
>
> After HADOOP-7165, {{LocalFs#listLocatedStatus}} incorrectly returns the 
> hidden {{.crc}} files used to store checksum information. This is because 
> HADOOP-7165 implemented {{listLocatedStatus}} on {{FilterFs}}, so the default 
> implementation is no longer used, and {{FilterFs}} directly calls the raw FS 
> since {{listLocatedStatus}} is not defined in {{ChecksumFs}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-30 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16562305#comment-16562305
 ] 

Wangda Tan commented on HADOOP-15612:
-

Updated the fix version to 3.1.2 given this doesn't exist in branch-3.1.1.

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15612.001.patch, HADOOP-15612.002.patch, 
> HADOOP-15612.003.patch
>
>
> When hadoop-lzo is not on the classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause, given the default class name. The 
> real root cause is not attached to the exception thrown.
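> A sketch of the improvement (names are assumptions, not the actual patch): 
> attach the underlying failure as the cause instead of dropping it.
> {code:java}
> try {
>   codec = (CompressionCodec) Class.forName(codecClassName)
>       .getDeclaredConstructor().newInstance();
> } catch (ReflectiveOperationException e) {
>   // Chain the root cause so "class not found" vs. "bad native lib" etc.
>   // stays visible in the stack trace, not just the generic message.
>   throw new IOException("LZO codec class not specified. Did you forget "
>       + "to set property io.compression.codec.lzo.class?", e);
> }
> {code}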



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15612:

Fix Version/s: (was: 3.1.1)
   3.1.2

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15612.001.patch, HADOOP-15612.002.patch, 
> HADOOP-15612.003.patch
>
>
> When hadoop-lzo is not on the classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause, given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15609:

Fix Version/s: (was: 3.1.1)
   3.1.2

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
>
> A KMS call should be retried when javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}
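> A minimal sketch of the policy change (hypothetical names; 
> {{isNetworkException}} stands in for the existing retriable cases):
> {code:java}
> boolean shouldRetry(IOException e, int attempted, int maxRetries) {
>   if (attempted >= maxRetries) {
>     return false;                              // provider group exhausted
>   }
>   // Treat a lost TLS handshake like any other recoverable network failure.
>   return e instanceof javax.net.ssl.SSLHandshakeException
>       || isNetworkException(e);                // assumed existing helper
> }
> {code}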



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-30 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16562304#comment-16562304
 ] 

Wangda Tan commented on HADOOP-15609:
-

Updated the fix version to 3.1.2 given this doesn't exist in branch-3.1.1.

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
>
> A KMS call should be retried when javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15609:

Fix Version/s: (was: 3.2)
   3.2.0

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
>
> A KMS call should be retried when javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}
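> For illustration, a minimal sketch of the classification this asks for; the 
> helper name is hypothetical and this is not the committed patch:
> {code}
> // Hypothetical helper: treat an SSL handshake failure like other
> // transient network faults so the failover policy retries the call.
> private static boolean isRetriableNetworkFault(IOException e) {
>   return e instanceof javax.net.ssl.SSLHandshakeException
>       || e instanceof java.net.ConnectException
>       || e instanceof java.net.SocketTimeoutException;
> }
> {code}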



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE

2018-07-19 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549998#comment-16549998
 ] 

Wangda Tan commented on HADOOP-15593:
-

[~eyang], sure, I'd like to include this in 3.1.1 if it won't take too long :).

> UserGroupInformation TGT renewer throws NPE
> ---
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Blocker
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch
>
>
> Found the following NPE thrown in the UGI TGT renewer. The NPE was thrown 
> within an exception handler, so the original exception was hidden, though it 
> is likely caused by an expired TGT.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught 
> exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
> at 
> javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
> at java.lang.Thread.run(Thread.java:748){noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889].
> The relevant code was added in HADOOP-13590. File this jira to handle the 
> exception better.
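> A minimal sketch of the guard suggested above; the helper names ({{getTGT}}, 
> {{getRefreshTime}}, {{LOG}}) are illustrative assumptions:
> {code}
> // Guard against a destroyed/expired ticket whose end time is null
> // (see JDK-8154889) before computing the next renewal time.
> KerberosTicket tgt = getTGT();          // assumed lookup, may return null
> if (tgt == null || tgt.getEndTime() == null) {
>   LOG.warn("TGT missing or already destroyed; stopping renewer thread");
>   return;
> }
> long nextRefresh = getRefreshTime(tgt); // safe: end time is non-null here
> {code}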



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15493) DiskChecker should handle disk full situation

2018-07-19 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15493:

Target Version/s: 3.2.0  (was: 3.2.0, 3.1.1)

> DiskChecker should handle disk full situation
> -
>
> Key: HADOOP-15493
> URL: https://issues.apache.org/jira/browse/HADOOP-15493
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HADOOP-15493.01.patch, HADOOP-15493.02.patch
>
>
> DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is 
> writable.
> However, the check should not fail when file creation fails due to the disk 
> being full. This avoids marking full disks as _failed_.
> Reported by [~kihwal] and [~daryn] in HADOOP-15450. 
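> A sketch of the behaviour asked for above, assuming the probe write happens 
> in {{checkDirWithDiskIo}}; the error-message match is an illustrative 
> heuristic, not the committed patch:
> {code}
> try {
>   java.nio.file.Files.write(checkFile.toPath(), new byte[1]); // probe write
> } catch (IOException e) {
>   String msg = e.getMessage();
>   if (msg != null && msg.contains("No space left on device")) {
>     return; // a full disk is still a healthy disk; do not mark it failed
>   }
>   throw new DiskErrorException("Disk check failed on " + checkFile, e);
> }
> {code}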



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15493) DiskChecker should handle disk full situation

2018-06-11 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16508957#comment-16508957
 ] 

Wangda Tan commented on HADOOP-15493:
-

Bulk update on non-blocker issues which are targeted to 3.1.1:

If this issue is absolutely required for 3.1.1, please upgrade its priority to 
blocker. I'm working on the 3.1.1 release now and will move these JIRAs to 
3.1.2 during the week. Thanks.

> DiskChecker should handle disk full situation
> -
>
> Key: HADOOP-15493
> URL: https://issues.apache.org/jira/browse/HADOOP-15493
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HADOOP-15493.01.patch, HADOOP-15493.02.patch
>
>
> DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is 
> writable.
> However check should not fail when file creation fails due to disk being 
> full. This avoids marking full disks as _failed_.
> Reported by [~kihwal] and [~daryn] in HADOOP-15450. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-05-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471330#comment-16471330
 ] 

Wangda Tan commented on HADOOP-15392:
-

Since this is marked as a blocker for 3.1.1, which we plan to release soon: 
[~mackrorysd], [~fabbri], [~Krizek], could you update the status of this Jira?

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility, we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.
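> As an interim workaround sketch (the bucket URI is a placeholder), an 
> application can use an uncached instance and close it promptly so the 
> accumulated metrics are released:
> {code}
> Configuration conf = new Configuration();
> // newInstance() bypasses the FileSystem cache, so close() really closes it.
> try (FileSystem fs =
>     FileSystem.newInstance(URI.create("s3a://bucket/"), conf)) {
>   // ... run the snapshot export against fs ...
> } // close() releases the instance instead of letting metrics accumulate
> {code}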



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15411:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed to branch-3.0/branch-3.1/trunk, thanks [~suma.shivaprasad].

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Critical
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15411.1.patch
>
>
> NodeManager startup fails with the following stack trace:
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}
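> A minimal sketch of the fix the summary proposes: snapshot the prefixed keys 
> via {{getPropsWithPrefix}} instead of iterating the live {{Configuration}}; 
> the {{filterConfig}} map is assumed from the surrounding code:
> {code}
> // getPropsWithPrefix() copies the matching entries into a new map, so a
> // concurrent Configuration reload cannot trigger
> // ConcurrentModificationException mid-iteration.
> Map<String, String> props =
>     conf.getPropsWithPrefix("hadoop.http.authentication.");
> props.forEach(filterConfig::put);
> {code}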



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15411:

Priority: Critical  (was: Blocker)
Target Version/s: 3.2.0, 3.1.1, 3.0.3

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: HADOOP-15411.1.patch
>
>
> NodeManager startup fails with the following stack trace:
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16452879#comment-16452879
 ] 

Wangda Tan commented on HADOOP-15411:
-

+1, thanks [~suma.shivaprasad], will commit by today if no objections.

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Blocker
> Attachments: HADOOP-15411.1.patch
>
>
> NodeManager startup fails with the following stack trace:
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432908#comment-16432908
 ] 

Wangda Tan commented on HADOOP-15205:
-

Thanks [~shv], 

I quickly checked: none of the releases since 2.8.3 have source jars 
(including the released 3.0.1).

https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-nfs/

You can change the version to see per-release artifacts.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, since 2.7.5 that artifact is 
> not present at Maven Central. The last release which had source attachments 
> and javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15334) Upgrade Maven surefire plugin

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15334:

Fix Version/s: (was: 3.2.0)

> Upgrade Maven surefire plugin
> -
>
> Key: HADOOP-15334
> URL: https://issues.apache.org/jira/browse/HADOOP-15334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2
>
> Attachments: HADOOP-15334.01.patch
>
>
> Recent versions of the surefire plugin suppress summary test execution output 
> in quiet mode. This is now fixed in plugin version 2.21.0 (via SUREFIRE-1436).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15312:

Fix Version/s: (was: 3.2.0)

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14067:

Fix Version/s: (was: 3.2.0)

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch, HADOOP-14067.03.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread's classloader.
> However, for applications that use hadoop classes dynamically (e.g. JDBC 
> based tools such as SQuirreL SQL), the current thread might not be the one 
> that loaded the hadoop classes, including VersionInfo, so it would fail to 
> find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, as the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current 
> thread's classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.
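> A sketch of the proposed lookup; the resource name is assumed for 
> illustration:
> {code}
> // Resolve against the loader that defined VersionInfo itself; fall back
> // to the system loader when VersionInfo came from the bootstrap loader.
> ClassLoader cl = VersionInfo.class.getClassLoader();
> if (cl == null) {
>   cl = ClassLoader.getSystemClassLoader();
> }
> try (InputStream is =
>     cl.getResourceAsStream("common-version-info.properties")) {
>   Properties props = new Properties();
>   if (is != null) {
>     props.load(is);
>   }
> }
> {code}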



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-9747:
---
Fix Version/s: (was: 3.2.0)

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk-04.patch, 
> HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419386#comment-16419386
 ] 

Wangda Tan commented on HADOOP-14067:
-

Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0)

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch, HADOOP-14067.03.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread's classloader.
> However, for applications that use hadoop classes dynamically (e.g. JDBC 
> based tools such as SQuirreL SQL), the current thread might not be the one 
> that loaded the hadoop classes, including VersionInfo, so it would fail to 
> find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, as the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current 
> thread's classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2018-03-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419384#comment-16419384
 ] 

Wangda Tan commented on HADOOP-14742:
-

Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0)

> Document multi-URI replication Inode for ViewFS
> ---
>
> Key: HADOOP-14742
> URL: https://issues.apache.org/jira/browse/HADOOP-14742
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, viewfs
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-14742.001.patch, HADOOP-14742.002.patch
>
>
> HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
> semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14067:

Fix Version/s: (was: 3.1.1)
   3.1.0

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch, HADOOP-14067.03.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread's classloader.
> However, for applications that use hadoop classes dynamically (e.g. JDBC 
> based tools such as SQuirreL SQL), the current thread might not be the one 
> that loaded the hadoop classes, including VersionInfo, so it would fail to 
> find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, as the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current 
> thread's classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14742:

Fix Version/s: (was: 3.1.1)
   3.1.0

> Document multi-URI replication Inode for ViewFS
> ---
>
> Key: HADOOP-14742
> URL: https://issues.apache.org/jira/browse/HADOOP-14742
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, viewfs
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-14742.001.patch, HADOOP-14742.002.patch
>
>
> HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
> semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15312:

Fix Version/s: (was: 3.1.1)
   3.1.0

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15311:

Fix Version/s: (was: 3.1.1)
   3.1.0

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows these to be configured automatically based on the number of available 
> cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool of size at least {{acceptors + selectors + 1}} is started, so 
> in addition to allowing higher tuning values under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource-constrained environments such as a MiniDFSCluster.
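> An illustrative sketch; the configuration keys are hypothetical, while 
> Jetty's {{ServerConnector}} constructor already accepts explicit counts:
> {code}
> int acceptors = conf.getInt("hadoop.http.acceptor.count", -1); // key assumed
> int selectors = conf.getInt("hadoop.http.selector.count", -1); // key assumed
> // -1 keeps Jetty's core-based defaults; positive values override them.
> ServerConnector connector = new ServerConnector(
>     server, acceptors, selectors, new HttpConnectionFactory());
> {code}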



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419381#comment-16419381
 ] 

Wangda Tan commented on HADOOP-15312:
-

Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0)

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15334) Upgrade Maven surefire plugin

2018-03-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419379#comment-16419379
 ] 

Wangda Tan commented on HADOOP-15334:
-

Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0)

> Upgrade Maven surefire plugin
> -
>
> Key: HADOOP-15334
> URL: https://issues.apache.org/jira/browse/HADOOP-15334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15334.01.patch
>
>
> Recent versions of the surefire plugin suppress summary test execution output 
> in quiet mode. This is now fixed in plugin version 2.21.0 (via SUREFIRE-1436).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419383#comment-16419383
 ] 

Wangda Tan commented on HADOOP-15311:
-

Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0)

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows these to be configured automatically based on the number of available 
> cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool of size at least {{acceptors + selectors + 1}} is started, so 
> in addition to allowing higher tuning values under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource-constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15334) Upgrade Maven surefire plugin

2018-03-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15334:

Fix Version/s: (was: 3.1.1)
   3.1.0

> Upgrade Maven surefire plugin
> -
>
> Key: HADOOP-15334
> URL: https://issues.apache.org/jira/browse/HADOOP-15334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15334.01.patch
>
>
> Recent versions of the surefire plugin suppress summary test execution output 
> in quiet mode. This is now fixed in plugin version 2.21.0 (via SUREFIRE-1436).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15171) native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly handles some zlib errors

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15171:

Fix Version/s: (was: 3.0.1)
   (was: 3.1.0)

> native ZLIB decompressor produces 0 bytes on the 2nd call; also incorrectly 
> handles some zlib errors
> -
>
> Key: HADOOP-15171
> URL: https://issues.apache.org/jira/browse/HADOOP-15171
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sergey Shelukhin
>Assignee: Lokesh Jain
>Priority: Blocker
>
> While reading an ORC file via direct buffers, Hive gets a 0-sized buffer for 
> a particular compressed segment of the file. We narrowed it down to the 
> Hadoop native ZLIB codec; when the data is copied to a heap-based buffer and 
> the JDK Inflater is used, it produces correct output. The input is only 127 
> bytes, so I can paste it here.
> All the other (many) blocks of the file are decompressed without problems by 
> the same code.
> {noformat}
> 2018-01-13T02:47:40,815 TRACE [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Decompressing 
> 127 bytes to dest buffer pos 524288, limit 786432
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: The codec has 
> produced 0 bytes for 127 bytes at pos 0, data hash 1719565039: [e3 92 e1 62 
> 66 60 60 10 12 e5 98 e0 27 c4 c7 f1 e8 12 8f 40 c3 7b 5e 89 09 7f 6e 74 73 04 
> 30 70 c9 72 b1 30 14 4d 60 82 49 37 bd e7 15 58 d0 cd 2f 31 a1 a1 e3 35 4c fa 
> 15 a3 02 4c 7a 51 37 bf c0 81 e5 02 12 13 5a b6 9f e2 04 ea 96 e3 62 65 b8 c3 
> b4 01 ae fd d0 72 01 81 07 87 05 25 26 74 3c 5b c9 05 35 fd 0a b3 03 50 7b 83 
> 11 c8 f2 c3 82 02 0f 96 0b 49 34 7c fa ff 9f 2d 80 01 00
> 2018-01-13T02:47:40,816  WARN [IO-Elevator-Thread-0 
> (1515637158315_0079_1_00_00_0)] encoded.EncodedReaderImpl: Fell back to 
> JDK decompressor with memcopy; got 155 bytes
> {noformat}
> The Hadoop version is based on a 3.1 snapshot.
> The size of libhadoop.so is 824403 bytes, and libgplcompression is 78273 
> bytes, FWIW. Not sure how to extract versions from those. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14757) S3AFileSystem.innerRename() to size metadatastore lists better

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14757:

Fix Version/s: (was: 3.1.0)

> S3AFileSystem.innerRename() to size metadatastore lists better
> --
>
> Key: HADOOP-14757
> URL: https://issues.apache.org/jira/browse/HADOOP-14757
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>Priority: Minor
>
> In {{S3AFileSystem.innerRename()}}, various ArrayLists are created to track 
> paths to update; these are created with the default size. It could/should be 
> possible to allocate them better, avoiding expensive array growth and copy 
> operations while iterating through the list of entries.
> # for a single file copy, sizes == 1
> # for a recursive copy, the outcome of the first real LIST will either 
> provide the actual size, or, if the list == the max response, a very large 
> minimum size.
> For #2, we'd need a hint of the iterable's length rather than just iterating 
> through it... some interface like {{IterableLength.expectedMinimumSize()}} 
> could do that.
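> A sketch of the sizing idea; {{expectedMinimumSize()}} is the hypothetical 
> hint interface proposed above, and the list names are illustrative:
> {code}
> // Size the tracking lists from a hint instead of the default capacity,
> // avoiding repeated grow-and-copy while iterating the listing.
> int hint = srcIsFile ? 1 : listing.expectedMinimumSize();
> List<Path> pathsToDelete = new ArrayList<>(hint);
> List<Path> destPaths = new ArrayList<>(hint);
> {code}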



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-12687:

Target Version/s: 3.2.0  (was: 3.1.0)

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>Priority: Major
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolution of the hostname for such scenarios.
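> A hypothetical sketch of the suggested fallback; the wrapping method is 
> illustrative only:
> {code}
> // Try the literal hostname first instead of trusting whichever loopback
> // entry InetAddress.getByName(null) happened to pick from /etc/hosts.
> static InetAddress resolve(String hostname) throws UnknownHostException {
>   try {
>     return InetAddress.getByName(hostname); // direct resolution
>   } catch (UnknownHostException e) {
>     return InetAddress.getLocalHost();      // last-resort fallback (assumed)
>   }
> }
> {code}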



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-10584:

Target Version/s: 2.9.1, 3.2.0  (was: 3.1.0, 2.9.1)

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HADOOP-10584.prelim.patch, hadoop-10584-prelim.patch, 
> rm.log
>
>
> ActiveStandbyElector retries operations for a few times. If the ZK quorum 
> itself is down, it goes down and the daemons will have to be brought up 
> again. 
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue to attempt connecting to ZK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15139:

Target Version/s: 3.2.0  (was: 3.1.0)

> [Umbrella] Improvements and fixes for Hadoop shaded client work 
> 
>
> Key: HADOOP-15139
> URL: https://issues.apache.org/jira/browse/HADOOP-15139
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Critical
>
> In HADOOP-11656, we have made great progress in splitting out third-party 
> dependencies from shaded hadoop client jar (hadoop-client-api), put runtime 
> dependencies in hadoop-client-runtime, and have shaded version of 
> hadoop-client-minicluster for test. However, some work is still left before 
> this feature is fully complete:
> - We don't have a comprehensive documentation to guide downstream 
> projects/users to use shaded JARs instead of previous JARs
> - We should consider to wrap up hadoop tools (distcp, aws, azure) to have 
> shaded version
> - More issues could be identified when shaded jars are adopted in more test 
> and production environments, like HADOOP-15137.
> Let's have this umbrella JIRA to track all the efforts left to improve the 
> hadoop shaded client.
> CC [~busbey], [~bharatviswa] and [~vinodkv].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14347) Make KMS and HttpFS Jetty accept queue size configurable

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14347:

Target Version/s: 3.2.0  (was: 3.1.0)

> Make KMS and HttpFS Jetty accept queue size configurable
> 
>
> Key: HADOOP-14347
> URL: https://issues.apache.org/jira/browse/HADOOP-14347
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Major
> Attachments: HADOOP-14347.001.patch, HADOOP-14347.002.patch, 
> HADOOP-14347.003.patch
>
>
> HADOOP-14003 enabled the customization of Tomcat attribute {{protocol}}, 
> {{acceptCount}}, and {{acceptorThreadCount}} for KMS in branch-2. See 
> https://tomcat.apache.org/tomcat-6.0-doc/config/http.html.
> KMS switched from Tomcat to Jetty in trunk. Only {{acceptCount}} has a 
> counterpart in Jetty, {{acceptQueueSize}}. See 
> http://www.eclipse.org/jetty/documentation/9.3.x/configuring-connectors.html.
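> A minimal sketch, assuming a new configuration key for the backlog:
> {code}
> // Wire the Jetty accept queue size through from the service config.
> int backlog = conf.getInt("hadoop.http.socket.backlog.size", 500); // key assumed
> ServerConnector connector = new ServerConnector(server);
> connector.setAcceptQueueSize(backlog); // Jetty analogue of Tomcat acceptCount
> {code}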



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14954:

Target Version/s: 3.2.0  (was: 3.1.0)

> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch, HADOOP-14954.002.patch, 
> HADOOP-14954.002a.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}} to the beginning of this method.
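> A sketch of that reordering, abbreviated to the relevant lines:
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
>     ++refCount; // count every init() so shutdown()'s --refCount is symmetric
>     if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>       LOG.warn(this.prefix + " metrics system already initialized!");
>       return this;
>     }
>     this.prefix = checkNotNull(prefix, "prefix");
>     // ... rest of init unchanged ...
> {code}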



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14996) wasb: ReadFully occasionally fails when using page blobs

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14996:

Target Version/s: 2.10.0, 3.2.0  (was: 3.1.0, 2.10.0)

> wasb: ReadFully occasionally fails when using page blobs
> 
>
> Key: HADOOP-14996
> URL: https://issues.apache.org/jira/browse/HADOOP-14996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
>
> Looks like there is a functional bug or a concurrency bug in the 
> PageBlobInputStream implementation of ReadFully.
> 1) Using 1 mapper to copy results in success:
> hadoop distcp -m 1 
> wasb://hbt-lifet...@salsbx01sparkdata.blob.core.windows.net/hive_tables  
> wasb://hbt-lifetime-...@supporttestl2.blob.core.windows.net/hdi_backup
> 2) Turning on DEBUG logging by setting mapreduce.map.log.level=DEBUG in 
> Ambari, then running with more than 1 mapper, we saw a debug log like this:
> {code}
> 2017-10-27 06:18:53,545 DEBUG [main] 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem: Seek to position 136251. 
> Bytes skipped 136210
> 2017-10-27 06:18:53,549 DEBUG [main] 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore: Closing page blob 
> output stream.
> 2017-10-27 06:18:53,549 DEBUG [main] 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore: 
> java.util.concurrent.ThreadPoolExecutor@73dce0e6[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 0]
> 2017-10-27 06:18:53,549 DEBUG [main] 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedActionException 
> as:mssupport (auth:SIMPLE) cause:java.io.EOFException
> 2017-10-27 06:18:53,553 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:197)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.hadoop.io.SequenceFile$Reader.sync(SequenceFile.java:2693)
> at 
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:58)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14786:

Target Version/s: 3.2.0  (was: 3.1.0)

> HTTP default servlets do not require authentication when kerberos is enabled
> 
>
> Key: HADOOP-14786
> URL: https://issues.apache.org/jira/browse/HADOOP-14786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Major
>
> The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not 
> require authentication when Kerberos is enabled.
> {code:java|title=HttpServer2#addDefaultServlets}
>   // set up default servlets
>   addServlet("stacks", "/stacks", StackServlet.class);
>   addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
>   addServlet("jmx", "/jmx", JMXJsonServlet.class);
>   addServlet("conf", "/conf", ConfServlet.class);
> {code}
> {code:java|title=HttpServer2#addServlet}
> public void addServlet(String name, String pathSpec,
>Class<? extends HttpServlet> clazz) {
>   addInternalServlet(name, pathSpec, clazz, false);
>   addFilterPathMapping(pathSpec, webAppContext);
> {code}
> {code:java|title=HttpServer2#addInternalServlet}
> addInternalServlet(…, bool requireAuth)
> …
> if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
>   LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
> {code}
> {{requireAuth}} is {{false}} for the default servlets inside 
> {{addInternalServlet}}.
> The issue can be verified by running the following curl command against the 
> NameNode web address when Kerberos is enabled:
> {noformat}
> kdestroy
> curl --negotiate -u: -k -sS 'https://:9871/jmx'
> {noformat}
> Expect curl to fail, but it returns JMX anyway.
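> A minimal sketch of the fix direction, using the existing {{requireAuth}} 
> parameter shown above (exact wiring is assumed, not the committed patch):
> {code}
>   // Register the default servlets behind the SPNEGO filter when
>   // security is enabled by passing requireAuth=true.
>   addInternalServlet("stacks", "/stacks", StackServlet.class, true);
>   addInternalServlet("logLevel", "/logLevel", LogLevel.Servlet.class, true);
>   addInternalServlet("jmx", "/jmx", JMXJsonServlet.class, true);
>   addInternalServlet("conf", "/conf", ConfServlet.class, true);
> {code}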



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14679) Get ADLS access token provider type from Credential Provider

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14679:

Target Version/s: 3.2.0  (was: 3.1.0)

> Get ADLS access token provider type from Credential Provider
> 
>
> Key: HADOOP-14679
> URL: https://issues.apache.org/jira/browse/HADOOP-14679
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14679.001.patch, HADOOP-14679.002.patch
>
>
> Found it convenient to add {{fs.adl.oauth2.access.token.provider.type}} along 
> with ADLS credentials to the credential store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15069) support git-secrets commit hook to keep AWS secrets out of git

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15069:

Target Version/s: 2.8.3, 2.9.1, 3.2.0, 3.0.2  (was: 2.8.3, 3.1.0, 2.9.1, 
3.0.2)

> support git-secrets commit hook to keep AWS secrets out of git
> --
>
> Key: HADOOP-15069
> URL: https://issues.apache.org/jira/browse/HADOOP-15069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15069-001.patch, HADOOP-15069-002.patch
>
>
> The latest Uber breach looks like it involved AWS keys in git repos.
> Nobody wants that, which is why Amazon provides 
> [git-secrets|https://github.com/awslabs/git-secrets]: a script you can use to 
> scan a repo and its history, *and* add as an automated check.
> Anyone can set this up, but there are a few false positives in the scan, 
> mostly from longs and a few all-upper-case constants. These can all be added 
> to a .gitignore file.
> Also: mention git-secrets in the aws testing docs; say "use it"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14671) Upgrade to Apache Yetus 0.7.0

2018-03-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14671:

Target Version/s: 3.2.0  (was: 3.1.0)

> Upgrade to Apache Yetus 0.7.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.7.0 was released.  Let's upgrade the bundled reference to the 
> new version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14652:

Fix Version/s: 3.1.0

> Update metrics-core version to 3.2.4
> 
>
> Key: HADOOP-14652
> URL: https://issues.apache.org/jira/browse/HADOOP-14652
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, 
> HADOOP-14652.003.patch, HADOOP-14652.004.patch, HADOOP-14652.005.patch, 
> HADOOP-14652.006.patch
>
>
> The current artifact is:
> com.codahale.metrics:metrics-core:3.0.1
> That version could either be bumped to 3.0.2 (the latest of that line), or 
> the latest artifact could be used:
> io.dropwizard.metrics:metrics-core:3.2.4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15305:

Fix Version/s: 3.1.0

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).
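
An illustrative before/after of the replacement described; the file name and 
the UTF-8 choice are assumptions for the example:

{code:java|title=Illustrative replacement}
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

public class WriteExample {
  public static void main(String[] args) throws IOException {
    File f = new File("example.txt");
    // Deprecated overload: relies on the JVM default charset
    // FileUtils.writeStringToFile(f, "hello");
    // Preferred overload: the charset is explicit and deterministic
    FileUtils.writeStringToFile(f, "hello", StandardCharsets.UTF_8);
  }
}
{code}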



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15191) Add Private/Unstable BulkDelete operations to supporting object stores for DistCP

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15191:

Fix Version/s: 3.1.0

> Add Private/Unstable BulkDelete operations to supporting object stores for 
> DistCP
> -
>
> Key: HADOOP-15191
> URL: https://issues.apache.org/jira/browse/HADOOP-15191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15191-001.patch, HADOOP-15191-002.patch, 
> HADOOP-15191-003.patch, HADOOP-15191-004.patch
>
>
> Large scale DistCP with the -delete option doesn't finish in a viable time 
> because of the final CopyCommitter doing a 1 by 1 delete of all missing 
> files. This isn't randomized (the list is sorted), and it's throttled by AWS.
> If bulk deletion of files was exposed as an API, distCP would do 1/1000 of 
> the REST calls, so not get throttled.
> Proposed: add an initially private/unstable interface for stores, 
> {{BulkDelete}} which declares a page size and offers a 
> {{bulkDelete(List)}} operation for the bulk deletion.
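
A hypothetical sketch of what such an interface could look like; the names 
and signatures are assumptions, not the committed API:

{code:java|title=Hypothetical sketch}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.fs.Path;

@InterfaceAudience.Private
@InterfaceStability.Unstable
public interface BulkDelete {
  /** Maximum number of paths a single bulkDelete() call may contain. */
  int getBulkDeleteLimit();

  /** Delete the paths in as few store requests as the page size allows. */
  void bulkDelete(List<Path> paths) throws IOException;
}
{code}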



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15287:

Fix Version/s: 3.1.0

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549
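
A minimal standalone example of the underlying JDK 9 change (not Hadoop 
code), showing why every one-character underscore identifier has to be 
renamed:

{code:java|title=Illustrative example}
public class UnderscoreDemo {
  public static void main(String[] args) {
    // javac 9+: error: as of release 9, '_' is a keyword
    // Object _ = new Object();
    Object __ = new Object();  // two underscores remain a legal identifier
    System.out.println(__);
  }
}
{code}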



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13374) Add the L&N verification script

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13374:

Fix Version/s: 3.1.0

> Add the L&N verification script
> ---
>
> Key: HADOOP-13374
> URL: https://issues.apache.org/jira/browse/HADOOP-13374
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-13374.01.patch, HADOOP-13374.02.patch, 
> HADOOP-13374.03.patch, HADOOP-13374.04.patch
>
>
> This is the script that's used for L&N change verification during 
> HADOOP-12893. We should commit this as [~ozawa] 
> [suggested|https://issues.apache.org/jira/browse/HADOOP-13298?focusedCommentId=15374498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15374498].
> I was 
> [initially|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15283040&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15283040]
>  verifying with an ad-hoc shell command, and [~andrew.wang] contributed the 
> script later in [a comment|
> https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15303281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303281],
>  so most credit should go to him. :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15234) Throw meaningful message on null when initializing KMSWebApp

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15234:

Fix Version/s: 3.1.0

> Throw meaningful message on null when initializing KMSWebApp
> 
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.
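
A minimal sketch of the kind of guard the summary asks for; the class, 
method, and message wording below are illustrative:

{code:java|title=Hypothetical sketch}
import org.apache.hadoop.crypto.key.KeyProvider;

final class KmsStartupChecks {
  // Fail fast with an actionable message instead of letting a null
  // provider surface as an NPE inside KeyProviderExtension.
  static KeyProvider requireProvider(KeyProvider keyProvider) {
    if (keyProvider == null) {
      throw new IllegalStateException("No KeyProvider initialized; "
          + "check the KMS key provider URI configuration");
    }
    return keyProvider;
  }
}
{code}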



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15271:

Fix Version/s: 3.1.0

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK9 JavaDoc cannot treat non-ascii characters due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-6852) apparent bug in concatenated-bzip2 support (decoding)

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-6852:
---
Fix Version/s: 3.1.0

> apparent bug in concatenated-bzip2 support (decoding)
> -
>
> Key: HADOOP-6852
> URL: https://issues.apache.org/jira/browse/HADOOP-6852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.22.0
> Environment: Linux x86_64 running 32-bit Hadoop, JDK 1.6.0_15
>Reporter: Greg Roelofs
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-6852.01.patch, HADOOP-6852.02.patch, 
> HADOOP-6852.03.patch, HADOOP-6852.04.patch
>
>
> The following simplified code (manually picked out of testMoreBzip2() in 
> https://issues.apache.org/jira/secure/attachment/12448272/HADOOP-6835.v4.trunk-hadoop-mapreduce.patch)
>  triggers a "java.io.IOException: bad block header" in 
> org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock( 
> CBZip2InputStream.java:527):
> {noformat}
> JobConf jobConf = new JobConf(defaultConf);
> CompressionCodec bzip2 = new BZip2Codec();
> ReflectionUtils.setConf(bzip2, jobConf);
> localFs.delete(workDir, true);
> // copy multiple-member test file to HDFS
> String fn2 = "testCompressThenConcat.txt" + bzip2.getDefaultExtension();
> Path fnLocal2 = new 
> Path(System.getProperty("test.concat.data","/tmp"),fn2);
> Path fnHDFS2  = new Path(workDir, fn2);
> localFs.copyFromLocalFile(fnLocal2, fnHDFS2);
> FileInputFormat.setInputPaths(jobConf, workDir);
> final FileInputStream in2 = new FileInputStream(fnLocal2.toString());
> CompressionInputStream cin2 = bzip2.createInputStream(in2);
> LineReader in = new LineReader(cin2);
> Text out = new Text();
> int numBytes, totalBytes=0, lineNum=0;
> while ((numBytes = in.readLine(out)) > 0) {
>   ++lineNum;
>   totalBytes += numBytes;
> }
> in.close();
> {noformat}
> The specified file is also included in the H-6835 patch linked above, and 
> some additional debug output is included in the commented-out test loop 
> above.  (Only in the linked, "v4" version of the patch, however--I'm about to 
> remove the debug stuff for checkin.)
> It's possible I've done something completely boneheaded here, but the file, 
> at least, checks out in a subsequent set of subtests and with stock bzip2 
> itself.  Only the code above is problematic; it reads through the first 
> concatenated chunk (17 lines of text) just fine but chokes on the header of 
> the second one.  Altogether, the test file contains 84 lines of text and 4 
> concatenated bzip2 files.
> (It's possible this is a mapreduce issue rather than common, but note that 
> the identical gzip test works fine.  Possibly it's related to the 
> stream-vs-decompressor dichotomy, though; intentionally not supported?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15261:

Fix Version/s: 3.1.0

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Assignee: PandaMonkey
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: 347.patch, hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io:2.5. 
> At the same time, Hadoop directly depends on an older version, 
> commons-io:2.4. A closer look at the source code shows that these two 
> versions of commons-io differ in many features. The dependency conflict 
> brings a high risk of "NoClassDefFoundError" or "NoSuchMethodError" issues 
> at runtime. Please notice this problem. Maybe upgrading commons-io from 2.4 
> to 2.5 is a good choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15293) TestLogLevel fails on Java 9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15293:

Fix Version/s: 3.1.0

> TestLogLevel fails on Java 9
> 
>
> Key: HADOOP-15293
> URL: https://issues.apache.org/jira/browse/HADOOP-15293
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.log.TestLogLevel
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 
> s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel
> [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel)  
> Time elapsed: 1.179 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Unrecognized SSL message' but got unexpected exception: 
> javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>   at 
> java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15280) TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail in trunk

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15280:

Fix Version/s: 3.1.0

> TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail 
> in trunk
> -
>
> Key: HADOOP-15280
> URL: https://issues.apache.org/jira/browse/HADOOP-15280
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Ray Chiang
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15280.00.patch, HADOOP-15280.01.patch
>
>
> I'm seeing these messages on OS X and on Linux.
> {noformat}
> [ERROR] Failures:
> [ERROR] 
> TestKMS.testWebHDFSProxyUserKerb:2526->doWebHDFSProxyUserTest:2625->runServer:158->runServer:176
>  org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Error while authenticating with endpoint: 
> http://localhost:56112/kms/v1/keys?doAs=foo1
> [ERROR] 
> TestKMS.testWebHDFSProxyUserSimple:2531->doWebHDFSProxyUserTest:2625->runServer:158->runServer:176
>  org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Error while authenticating with endpoint: 
> http://localhost:56206/kms/v1/keys?doAs=foo1 
> {noformat}
> as well as a [recent PreCommit-HADOOP-Build 
> job|https://builds.apache.org/job/PreCommit-HADOOP-Build/14235/].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15252:

Fix Version/s: 3.1.0

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> HADOOP-15252.003.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages like the 
> following every few minutes:
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand-new 
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-12897:

Fix Version/s: 3.1.0

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch, HADOOP-12897.005.patch, 
> HADOOP-12897.006.patch, HADOOP-12897.007.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not available to the 
> {{hadoop-auth}} module.
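
A minimal sketch of a local equivalent, given that {{NetUtils}} is 
unavailable to {{hadoop-auth}}; the helper class and method names are 
assumptions:

{code:java|title=Hypothetical sketch}
import java.io.IOException;
import java.net.URL;

final class AuthExceptionUtil {
  // Re-create the exception with the endpoint in its message, keeping
  // the original as the cause so the stack trace is preserved.
  static IOException wrapWithUrl(URL url, IOException e) {
    return new IOException(
        "Error authenticating against " + url + ": " + e.getMessage(), e);
  }
}
{code}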



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15294:

Fix Version/s: 3.1.0

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15291) TestMiniKdc fails on Java 9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15291:

Fix Version/s: 3.1.0

> TestMiniKdc fails on Java 9
> ---
>
> Key: HADOOP-15291
> URL: https://issues.apache.org/jira/browse/HADOOP-15291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15291.1.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 
> s <<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time 
> elapsed: 1.301 s  <<< ERROR!
> javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
>   at 
> java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
>   at 
> jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
>   at 
> java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
>   at 
> java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
>   at 
> java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
>   at 
> org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15208) DistCp to offer -xtrack option to save src/dest filesets as alternative to delete()

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15208:

Fix Version/s: 3.1.0

> DistCp to offer -xtrack  option to save src/dest filesets as 
> alternative to delete()
> --
>
> Key: HADOOP-15208
> URL: https://issues.apache.org/jira/browse/HADOOP-15208
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15208-001.patch, HADOOP-15208-002.patch, 
> HADOOP-15208-002.patch, HADOOP-15208-003.patch
>
>
> There are opportunities to improve distcp delete performance and scalability 
> with object stores, but you need to test with production datasets to 
> determine if the optimizations work, don't run out of memory, etc.
> By adding the option to save the sequence files of the source and dest 
> listings, people (myself included) can experiment with different strategies 
> before committing one which doesn't scale.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15274) Move hadoop-openstack to slf4j

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15274:

Fix Version/s: 3.1.0

> Move hadoop-openstack to slf4j
> --
>
> Key: HADOOP-15274
> URL: https://issues.apache.org/jira/browse/HADOOP-15274
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/swift
>Reporter: Steve Loughran
>Assignee: fang zhenyi
>Priority: Minor
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15274.001.patch, HADOOP-15274.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15282) HADOOP-15235 broke TestHttpFSServerWebServer

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15282:

Fix Version/s: 3.1.0

> HADOOP-15235 broke TestHttpFSServerWebServer
> 
>
> Key: HADOOP-15282
> URL: https://issues.apache.org/jira/browse/HADOOP-15282
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15282.001.patch
>
>
> As [~xiaochen] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-15235?focusedCommentId=16375379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16375379]
>  on HADOOP-15235, it broke {{TestHttpFSServerWebServer}}:
> {noformat}
> 2018-02-23 23:13:29,791 WARN  ServletHandler - /webhdfs/v1/
> java.lang.IllegalArgumentException: Empty key
>   at javax.crypto.spec.SecretKeySpec.(SecretKeySpec.java:96)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> java.lang.AssertionError: 
> Expected :500
> Actual   :200
>  
> {noformat}
> This only affects trunk because {{TestHttpFSServerWebServer}} doesn't exist 
> in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15235) Authentication Tokens should use HMAC instead of MAC

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15235:

Fix Version/s: 3.1.0

> Authentication Tokens should use HMAC instead of MAC
> 
>
> Key: HADOOP-15235
> URL: https://issues.apache.org/jira/browse/HADOOP-15235
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0
>
> Attachments: HADOOP-15235.001.patch, HADOOP-15235.002.patch
>
>
> We currently use {{MessageDigest}} to compute a "SHA" MAC for signing 
> Authentication Tokens.  Firstly, what "SHA" maps to is dependent on the JVM 
> and Cryptography Provider.  While they _should_ do something reasonable, it's 
> probably a safer idea to pick a specific algorithm.  It looks like the Oracle 
> JVM picks SHA-1; though something like SHA-256 would be better.
> In any case, it would also be better to use an HMAC algorithm instead.
> Changing from SHA-1 to SHA-256 or MAC to HMAC won't generate equivalent 
> signatures, so this would normally be an incompatible change because the 
> server wouldn't accept previous tokens it issued with the older algorithm.  
> However, Authentication Tokens are used as a cheaper shortcut for Kerberos, 
> so it's expected for users to also have Kerberos credentials; in this case, 
> the Authentication Token will be rejected, but it will silently retry using 
> Kerberos, and get an updated token.  So this should all be transparent to the 
> user.
> And finally, the code where we verify a signature uses a non-constant-time 
> comparison, which could be subject to timing attacks.  I believe it would be 
> quite difficult in this case to do so, but we're probably better off using a 
> constant-time comparison.
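
A minimal sketch of the two changes described, a pinned HMAC algorithm and a 
constant-time comparison; the class and method names are illustrative:

{code:java|title=Hypothetical sketch}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

final class TokenSigner {
  static byte[] sign(byte[] secret, String token) throws Exception {
    // Pin the algorithm instead of relying on what "SHA" resolves to
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    return mac.doFinal(token.getBytes(StandardCharsets.UTF_8));
  }

  static boolean verify(byte[] secret, String token, byte[] signature)
      throws Exception {
    // MessageDigest.isEqual is constant-time, unlike Arrays.equals
    return MessageDigest.isEqual(sign(secret, token), signature);
  }
}
{code}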



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15288:

Fix Version/s: 3.1.0

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the 
> SLF4J APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential in URL

2018-03-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15158:

Fix Version/s: (was: 3.0.1)
   (was: 2.9.1)
   (was: 3.1.0)

> AliyunOSS: Supports role based credential in URL
> 
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch, HADOOP-15158.004.patch, HADOOP-15158.005.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So the code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get the user info (role) from the URI.
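
A hypothetical sketch of the proposal: a provider constructor that receives 
the filesystem URI so the role can be derived from it. The class name and 
lookup logic are assumptions, not the committed patch:

{code:java|title=Hypothetical sketch}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class RoleAwareCredentialsProvider {
  private final String role;

  public RoleAwareCredentialsProvider(URI uri, Configuration conf) {
    // e.g. oss://role@bucket/path carries the role in the user-info part
    this.role = (uri == null) ? null : uri.getUserInfo();
    // the key/secret/token for this role would be resolved from conf here
  }
}
{code}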



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel

2018-03-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390703#comment-16390703
 ] 

Wangda Tan commented on HADOOP-15262:
-

[~wujinhu], removed the fix version; it should be set by the committer when 
the patch gets committed.

> AliyunOSS: rename() to move files in a directory in parallel
> 
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15262.001.patch, HADOOP-15262.002.patch, 
> HADOOP-15262.003.patch, HADOOP-15262.004.patch, HADOOP-15262.005.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files, so we can improve this by renaming files 
> in parallel.
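
A minimal sketch of the parallel approach using a thread pool; the task 
interface and thread count are assumptions:

{code:java|title=Hypothetical sketch}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

final class ParallelRename {
  interface RenameTask {
    void rename(String srcKey) throws Exception;
  }

  // Submit one rename per child object and wait for all of them,
  // instead of renaming serially.
  static void renameAll(List<String> srcKeys, RenameTask task, int threads)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (String key : srcKeys) {
        futures.add(pool.submit(() -> { task.rename(key); return null; }));
      }
      for (Future<?> f : futures) {
        f.get(); // surfaces the first failure, if any
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}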



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel

2018-03-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15262:

Fix Version/s: (was: 3.0.1)
   (was: 2.9.1)
   (was: 3.1.0)

> AliyunOSS: rename() to move files in a directory in parallel
> 
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15262.001.patch, HADOOP-15262.002.patch, 
> HADOOP-15262.003.patch, HADOOP-15262.004.patch, HADOOP-15262.005.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files, so we can improve this by renaming files 
> in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383163#comment-16383163
 ] 

Wangda Tan commented on HADOOP-13761:
-

Thanks [~fabbri] for the help. We have two blockers left and this is one of 
them; if everything goes well, we can get RC0 out by the end of this week or 
early next week. Just want to make sure this is not the last one :) 

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761.001.patch, 
> HADOOP-13761.002.patch, HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
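
A minimal sketch of a bounded retry loop with the properties described (a 
maximum number of retries, so callers get an error eventually instead of 
waiting indefinitely); the policy values and backoff are assumptions:

{code:java|title=Hypothetical sketch}
import java.io.IOException;

final class RetryLoop {
  interface Op<T> {
    T run() throws IOException;
  }

  static <T> T withRetries(Op<T> op, int maxRetries, long intervalMs)
      throws IOException, InterruptedException {
    int attempt = 0;
    while (true) {
      try {
        return op.run();
      } catch (IOException e) {
        if (++attempt > maxRetries) {
          throw e; // give up rather than retry forever
        }
        Thread.sleep(intervalMs * attempt); // linear backoff
      }
    }
  }
}
{code}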



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16382580#comment-16382580
 ] 

Wangda Tan commented on HADOOP-13761:
-

[~fabbri]/[~ste...@apache.org],

Given that the other blockers are close to being finished, how much more time 
is needed for this JIRA? Asking because I want to cut RC0 of 3.1.0 as early 
as possible.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration element

2018-02-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379387#comment-16379387
 ] 

Wangda Tan commented on HADOOP-15007:
-

[~anu], apologies for my late response; I was traveling over the last couple 
of days and am recovering from jet lag. 

I just found that the patch removed a class marked {{@Evolving}}, which means 
we *must* include this patch in 3.1.0 as well (see 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html).
 I just reopened this JIRA and set the target version to 3.1.0.

I'd like to include this improvement in 3.1.0 since it improves the docs for 
the newly added feature; please backport to 3.1.0 if you think the change is 
safe.

> Stabilize and document Configuration  element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
>  value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15007) Stabilize and document Configuration element

2018-02-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-15007:
-

> Stabilize and document Configuration  element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
>  value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15007) Stabilize and document Configuration element

2018-02-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15007:

Target Version/s: 3.1.0  (was: 3.2.0)

> Stabilize and document Configuration  element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
>  value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15264:

Target Version/s: 3.1.0

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15264-001.patch
>
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379304#comment-16379304
 ] 

Wangda Tan commented on HADOOP-15264:
-

Thanks [~ste...@apache.org], the patch LGTM. Do you want to make any more 
changes or do further verification within this JIRA's scope?

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15264-001.patch
>
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2018-02-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14295:

Target Version/s: 3.2.0  (was: 3.1.0)

> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Attachments: HADOOP-14295.002.patch, HADOOP-14295.003.patch, 
> HADOOP-14295.004.patch, hadoop-14295.001.patch
>
>
> When we turn on Hadoop UI Kerberos and try to access the Datanode /logs 
> endpoint, the proxy (Knox) would get an authorization failure, and its host 
> would show as 127.0.0.1 even though Knox wasn't local to the Datanode. 
> Error message:
> {quote}
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> {quote}
> We were able to figure out that the Datanode has Jetty listening on localhost 
> and that Netty is used to serve requests to the DataNode; this was a measure 
> to improve performance because of Netty's async NIO design.
> I propose to add a check for the x-forwarded-for header, since proxies 
> usually inject that header, before we fall back to getRemoteAddr.
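
A minimal sketch of the proposed check; the helper name and the first-entry 
handling of the comma-separated header are assumptions:

{code:java|title=Hypothetical sketch}
import javax.servlet.http.HttpServletRequest;

final class RemoteAddrUtil {
  // Prefer the X-Forwarded-For header that proxies such as Knox inject,
  // falling back to getRemoteAddr() when no proxy is involved.
  static String effectiveClientAddr(HttpServletRequest req) {
    String xff = req.getHeader("X-Forwarded-For");
    if (xff != null && !xff.isEmpty()) {
      return xff.split(",")[0].trim(); // first entry is the client
    }
    return req.getRemoteAddr();
  }
}
{code}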



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-02-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379141#comment-16379141
 ] 

Wangda Tan commented on HADOOP-15137:
-

Thanks [~rohithsharma].

[~bharatviswa], have you validated the patch? Does the minicluster work 
properly after the change? Given that the patch fixes critical issues, I'd 
like to include this in the 3.1.0 release. Is there any risk in doing that? 

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project,
> but I encountered the following exception when starting the Hadoop
> minicluster. I checked hadoop-client-minicluster, and it indeed does not
> contain this class. Is this something that was missed when packaging the
> published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}
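
For reference, a minimal downstream snippet along these lines reproduces the
failure when run against hadoop-client-minicluster (the class name and
constructor arguments here are illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class MiniClusterRepro {
  public static void main(String[] args) {
    // Arguments: testName, numNodeManagers, numLocalDirs, numLogDirs.
    MiniYARNCluster cluster = new MiniYARNCluster("repro", 1, 1, 1);
    // init() creates the ResourceManager, whose class loading pulls in
    // DistributedSchedulingAMProtocol and fails with NoClassDefFoundError
    // when that class is absent from the shaded jar.
    cluster.init(new Configuration());
    cluster.start();
    cluster.stop();
  }
}
{code}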



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377331#comment-16377331
 ] 

Wangda Tan commented on HADOOP-15264:
-

[~ste...@apache.org], per the latest comment on
https://github.com/aws/aws-sdk-java/issues/1488, Andrew Shore mentioned he
will add the mapping. So is there anything we need to do on our side? Should
we still mark this as a blocker for 3.1.0?

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We should either exclude these or roll back HADOOP-15040.
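
If the exclusion route is taken, a hedged sketch of what it could look like
in the hadoop-aws pom.xml (illustrative only; Maven 3.2.1+ supports the
wildcard exclusion form):

{code}
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>1.11.271</version>
  <exclusions>
    <!-- Drop the transitive netty artifacts declared by the bundle. -->
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}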



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


