[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483532#comment-16483532
 ] 

genericqa commented on HADOOP-15482:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m  6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 35m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 17s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15482 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924478/HADOOP-15482.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux d4b6f29337dc 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5e88126 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14671/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14671/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch, HADOOP-15482.002.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5

[jira] [Commented] (HADOOP-15487) ConcurrentModificationException resulting in Kerberos authentication error.

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483427#comment-16483427
 ] 

Xiao Chen commented on HADOOP-15487:


Thanks [~jojochuang] for creating the jira and tagging me.

This appears to be the exact symptom I saw in HADOOP-15401. While this confirms 
the issue exists in another version as well, it is unfortunately only the 
symptom: some other code is removing from the set via its iterator without 
synchronizing, and we don't know _where_ that faulty removal happens.

In other words, this is the same symptom as in HADOOP-15401, where we are the 
victim, but the (likely same) culprit remains unclear...
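The failure mode discussed here can be reproduced in isolation. The sketch below (illustrative, not Hadoop code) shows how a `Collections.synchronizedSet` still throws ConcurrentModificationException when one party iterates without holding the wrapper's monitor while another mutates the set, which is exactly the contract `Subject`'s credential sets depend on:

```java
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

public class CmeDemo {
    /** Iterate without the set's monitor while a mutation lands mid-walk. */
    static boolean unsafeIterationThrows() {
        Set<String> creds = Collections.synchronizedSet(new LinkedHashSet<>());
        creds.add("tgt");
        creds.add("keytab");
        Iterator<String> it = creds.iterator(); // iteration NOT synchronized
        creds.remove("tgt");                    // structural change mid-iteration
        try {
            it.next();                          // fail-fast check fires here
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    /** The documented fix: hold the wrapper's monitor for the whole walk. */
    static int safeIterationCount() {
        Set<String> creds = Collections.synchronizedSet(new LinkedHashSet<>());
        creds.add("tgt");
        creds.add("keytab");
        int n = 0;
        synchronized (creds) {
            for (String ignored : creds) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("CME observed: " + unsafeIterationThrows());
        System.out.println("safe iteration saw " + safeIterationCount() + " entries");
    }
}
```

The victim in the stack traces is the iterating side; the culprit is whichever code performs the unsynchronized mutation, which is why the trace alone cannot identify it.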

> ConcurrentModificationException resulting in Kerberos authentication error.
> ---
>
> Key: HADOOP-15487
> URL: https://issues.apache.org/jira/browse/HADOOP-15487
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.13.3. Kerberized, Hadoop-HA, jdk1.8.0_152
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We found the following exception message in a NameNode log. It seems the 
> ConcurrentModificationException caused a Kerberos authentication error.
> It appears to be a JDK bug, similar to HADOOP-13433 (Race in 
> UGI.reloginFromKeytab), but this version of Hadoop (CDH 5.13.3) already 
> includes the HADOOP-13433 patch. (The stacktrace also differs.) This cluster 
> runs on JDK 1.8.0_152.
> {noformat}
> 2018-05-19 04:00:00,182 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs/no...@example.com (auth:KERBEROS) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2018-05-19 04:00:00,183 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 10.16.20.122 threw exception 
> [java.util.ConcurrentModificationException]
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
> at java.util.LinkedList$ListItr.next(LinkedList.java:888)
> at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
> at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
> at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372)
> at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
> at 
> sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:127)
> at 
> sun.security.jgss.krb5.SubjectComber.findMany(SubjectComber.java:69)
> at 
> sun.security.jgss.krb5.ServiceCreds.getInstance(ServiceCreds.java:96)
> at sun.security.jgss.krb5.Krb5Util.getServiceCreds(Krb5Util.java:203)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:74)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:72)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:71)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
> at 
> sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
> at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
> at 
> sun.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:62)
> at 
> sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:154)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Server.<init>(GssKrb5Server.java:108)
> at 
> com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer(FactoryImpl.java:85)
> at 
> org.apache.hadoop.security.SaslRpcServer$FastSaslServerFactory.createSaslServer(SaslRpcServer.java:398)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:164)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:161)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SaslRpcServer.create(SaslRpcServer.java:160)
> at 
> org.apache.hadoop.ipc.Server$Connection.createSaslServer(Server.java:1742)
> at 
> org.apache.hadoop.ipc.Server$Connection.processSaslMessage(Server.java:1522)
> at 
> org.apache.hadoop.ipc.Server$Connection.saslProcess(Server.java:1433)
> at 
> 

[jira] [Commented] (HADOOP-15401) ConcurrentModificationException on Subject.getPrivateCredentials in UGI constructor

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483500#comment-16483500
 ] 

Xiao Chen commented on HADOOP-15401:


Thanks for the comment [~yzhangal].

For the specific stacktrace, it won't appear anymore after HADOOP-9747, which 
is why I closed the jira.
The issue of 'someone seems to be removing from the synchronizedSet without 
synchronizing' is still there, and it seems we can now use HADOOP-15487 to 
track it. :)

> ConcurrentModificationException on Subject.getPrivateCredentials in UGI 
> constructor
> ---
>
> Key: HADOOP-15401
> URL: https://issues.apache.org/jira/browse/HADOOP-15401
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Xiao Chen
>Priority: Major
>
> Seen a recent exception from KMS client provider as follows:
> {noformat}
> java.io.IOException: java.util.ConcurrentModificationException
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:488)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:287)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:283)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:283)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.util.ConcurrentModificationException
> at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
> at java.util.LinkedList$ListItr.next(LinkedList.java:888)
> at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
> at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
> at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372)
> at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
> at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(KerberosUtil.java:267)
> at 
> org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
> at 
> org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:701)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:742)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:141)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:472)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:471)
> ... 12 more
> {noformat}
> It looks like we have run into a race modifying the JDK Subject class's 
> privCredentials.
> Found [https://bugs.openjdk.java.net/browse/JDK-4892913] but that jira was 
> created before Hadoop
> [~daryn], any thoughts on this?
>  (We have not seen this in 

[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HADOOP-15482:
-
Attachment: HADOOP-15482.002.patch

> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch, HADOOP-15482.002.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5
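The attached patches themselves are only linked above. As a sketch of what such an upgrade typically looks like in Hadoop's parent POM (the `jackson2.version` property name is an assumption for illustration, not taken from the actual patch):

```xml
<!-- hadoop-project/pom.xml: sketch only; the property name below is an
     assumption for illustration, not taken from the actual patch -->
<properties>
  <jackson2.version>2.9.5</jackson2.version>
</properties>

<!-- dependencyManagement entry pinning the upgraded artifact -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>${jackson2.version}</version>
</dependency>
```

Centralizing the version in the parent POM lets every downstream module pick up the upgrade from a single change.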



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15401) ConcurrentModificationException on Subject.getPrivateCredentials in UGI constructor

2018-05-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483453#comment-16483453
 ] 

Yongjun Zhang commented on HADOOP-15401:


Hi [~xiaochen],

Based on the discussion you had with [~daryn], it seems more reasonable to 
leave this jira Open instead of Fixed.

If we cannot reproduce it, we can mark it "Cannot Reproduce". There is still 
the culprit to identify.

Thanks.


> ConcurrentModificationException on Subject.getPrivateCredentials in UGI 
> constructor
> ---
>
> Key: HADOOP-15401
> URL: https://issues.apache.org/jira/browse/HADOOP-15401
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Xiao Chen
>Priority: Major
>
> Seen a recent exception from KMS client provider as follows:
> {noformat}
> java.io.IOException: java.util.ConcurrentModificationException
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:488)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:287)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:283)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:283)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.util.ConcurrentModificationException
> at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
> at java.util.LinkedList$ListItr.next(LinkedList.java:888)
> at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
> at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
> at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372)
> at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
> at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(KerberosUtil.java:267)
> at 
> org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
> at 
> org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:701)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:742)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:141)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:472)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:471)
> ... 12 more
> {noformat}
> It looks like we have run into a race modifying the JDK Subject class's 
> privCredentials.
> Found [https://bugs.openjdk.java.net/browse/JDK-4892913] but that jira was 
> created before Hadoop
> [~daryn], any thoughts on this?
>  (We have not seen this in versions pre-3.0 yet, but it seems HADOOP-9747 
> would 

[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483469#comment-16483469
 ] 

Yongjun Zhang commented on HADOOP-15450:


Hi [~arpitagarwal],

Thanks for working on this issue.

I'm preparing for 3.0.3 release. Couple of questions:

1. This jira seems an important one, would you please confirm it's a blocker 
for 3.0.3?

2. If it's a blocker, do we need HDFS-13538 too?
{quote}
Split this out into separate changes. For now this fixes Hadoop Common to avoid 
doing disk IO unless explicitly requested to address the immediate concern.
Will add the disk full check via HDFS-13538.
{quote}

Thanks.


> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738
> There are non-hdfs users of DiskChecker, who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to 
> introduction of fsync() in the code.
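The split described above (cheap metadata check by default, disk I/O only on request) can be sketched as follows. This is an illustration of the approach, not the actual HADOOP-15450/HDFS-13538 code; the method names are hypothetical:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskCheckSketch {
    /** Cheap metadata-only check: no file creation, no fsync.
        Safe for proactive callers to run frequently. */
    static void checkAccessible(File dir) throws IOException {
        if (!dir.isDirectory() || !dir.canRead() || !dir.canWrite()
                || !dir.canExecute()) {
            throw new IOException("Directory is not accessible: " + dir);
        }
    }

    /** Expensive probe: write a file and force it to disk. Opt-in only,
        e.g. after a suspected failure, so proactive callers no longer
        trigger an fsync on every check. */
    static void checkWithDiskIo(File dir) throws IOException {
        checkAccessible(dir);
        File probe = new File(dir, ".disk-check-probe");
        try (FileOutputStream out = new FileOutputStream(probe)) {
            out.write(1);
            out.getFD().sync(); // the fsync that caused the storm when
                                // every caller performed it unconditionally
        } finally {
            probe.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        checkAccessible(tmp);   // fast path for routine health checks
        checkWithDiskIo(tmp);   // only when disk trouble is suspected
        System.out.println("both checks passed for " + tmp);
    }
}
```

A separate disk-full check (the HDFS-13538 half) would additionally compare `dir.getUsableSpace()` against a configured minimum before accepting writes.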






[jira] [Comment Edited] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483469#comment-16483469
 ] 

Yongjun Zhang edited comment on HADOOP-15450 at 5/22/18 4:25 AM:
-

Hi [~arpitagarwal] and [~kihwal],

Thanks for working on this issue.

I'm preparing for 3.0.3 release. Couple of questions:

1. This jira seems an important one, would you please confirm it's a blocker 
for 3.0.3?

2. If it's a blocker, do we need HDFS-13538 too?
{quote}
Split this out into separate changes. For now this fixes Hadoop Common to avoid 
doing disk IO unless explicitly requested to address the immediate concern.
Will add the disk full check via HDFS-13538.
{quote}

Thanks.



was (Author: yzhangal):
Hi [~arpitagarwal],

Thanks for working on this issue.

I'm preparing for 3.0.3 release. Couple of questions:

1. This jira seems an important one, would you please confirm it's a blocker 
for 3.0.3?

2. If it's a blocker, do we need HDFS-13538 too?
{quote}
Split this out into separate changes. For now this fixes Hadoop Common to avoid 
doing disk IO unless explicitly requested to address the immediate concern.
Will add the disk full check via HDFS-13538.
{quote}

Thanks.


> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738
> There are non-hdfs users of DiskChecker, who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to 
> introduction of fsync() in the code.






[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-21 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483403#comment-16483403
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

HADOOP-10783.6.patch: rebased

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, HADOOP-10783.5.patch, HADOOP-10783.6.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
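The `IS_OS_UNIX` difference between commons-lang 2.6 and 3.x comes down to which `os.name` values the library recognizes; only the newer line includes the BSD family. The sketch below replicates that prefix check for illustration (the prefix lists are indicative, not the library's exact source):

```java
import java.util.Arrays;
import java.util.List;

public class OsCheckDemo {
    // Prefixes commons-lang 2.6's SystemUtils effectively treated as UNIX
    // (illustrative list; consult the library source for the exact set).
    static final List<String> LANG2_UNIX_PREFIXES = Arrays.asList(
            "AIX", "HP-UX", "Irix", "Linux", "Mac OS X", "Solaris", "SunOS");
    // commons-lang 3.x additionally recognizes the BSD family.
    static final List<String> LANG3_UNIX_PREFIXES = Arrays.asList(
            "AIX", "HP-UX", "Irix", "Linux", "Mac OS X", "Solaris", "SunOS",
            "FreeBSD", "OpenBSD", "NetBSD");

    static boolean isUnix(String osName, List<String> prefixes) {
        return prefixes.stream().anyMatch(osName::startsWith);
    }

    public static void main(String[] args) {
        String os = "FreeBSD";
        System.out.println("lang 2.6 sees UNIX: " + isUnix(os, LANG2_UNIX_PREFIXES));
        System.out.println("lang 3.x sees UNIX: " + isUnix(os, LANG3_UNIX_PREFIXES));
    }
}
```

On FreeBSD the 2.6-style list yields `false`, which is why `SharedFileDescriptorFactory.create` throws "The OS is not UNIX" in the log above.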






[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-21 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-10783:
--
Attachment: HADOOP-10783.6.patch

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, HADOOP-10783.5.patch, HADOOP-10783.6.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)






[jira] [Commented] (HADOOP-15487) ConcurrentModificationException resulting in Kerberos authentication error.

2018-05-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483240#comment-16483240
 ] 

Wei-Chiu Chuang commented on HADOOP-15487:
--

In fact, it seems HADOOP-13433 is actually the only Hadoop code that modifies 
Subject's PrivateCredentials...

> ConcurrentModificationException resulting in Kerberos authentication error.
> ---
>
> Key: HADOOP-15487
> URL: https://issues.apache.org/jira/browse/HADOOP-15487
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.13.3. Kerberized, Hadoop-HA, jdk1.8.0_152
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We found the following exception message in a NameNode log. It seems the 
> ConcurrentModificationException caused a Kerberos authentication error.
> It appears to be a JDK bug, similar to HADOOP-13433 (Race in 
> UGI.reloginFromKeytab), but this version of Hadoop (CDH 5.13.3) already 
> includes the HADOOP-13433 patch. (The stacktrace also differs.) This cluster 
> runs on JDK 1.8.0_152.
> {noformat}
> 2018-05-19 04:00:00,182 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs/no...@example.com (auth:KERBEROS) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2018-05-19 04:00:00,183 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 10.16.20.122 threw exception 
> [java.util.ConcurrentModificationException]
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
> at java.util.LinkedList$ListItr.next(LinkedList.java:888)
> at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
> at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
> at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372)
> at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
> at 
> sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:127)
> at 
> sun.security.jgss.krb5.SubjectComber.findMany(SubjectComber.java:69)
> at 
> sun.security.jgss.krb5.ServiceCreds.getInstance(ServiceCreds.java:96)
> at sun.security.jgss.krb5.Krb5Util.getServiceCreds(Krb5Util.java:203)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:74)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:72)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:71)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
> at 
> sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
> at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
> at 
> sun.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:62)
> at 
> sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:154)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Server.<init>(GssKrb5Server.java:108)
> at 
> com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer(FactoryImpl.java:85)
> at 
> org.apache.hadoop.security.SaslRpcServer$FastSaslServerFactory.createSaslServer(SaslRpcServer.java:398)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:164)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:161)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SaslRpcServer.create(SaslRpcServer.java:160)
> at 
> org.apache.hadoop.ipc.Server$Connection.createSaslServer(Server.java:1742)
> at 
> org.apache.hadoop.ipc.Server$Connection.processSaslMessage(Server.java:1522)
> at 
> org.apache.hadoop.ipc.Server$Connection.saslProcess(Server.java:1433)
> at 
> org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1396)
> at 
> org.apache.hadoop.ipc.Server$Connection.processRpcOutOfBandRequest(Server.java:2080)
> at 
> org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1920)
> at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1682)

[jira] [Comment Edited] (HADOOP-15487) ConcurrentModificationException resulting in Kerberos authentication error.

2018-05-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483232#comment-16483232
 ] 

Wei-Chiu Chuang edited comment on HADOOP-15487 at 5/22/18 12:07 AM:


BTW I think this is the same as HADOOP-15401. [~xiaochen] [~daryn]

bq. Searched impala code but don't find any faulty usage.
This exception occurs inside the NameNode, so the fault must be in Hadoop code.


was (Author: jojochuang):
BTW I think this is the same as HADOOP-15401. [~xiaochen] [~daryn]

bq. Searched impala code but don't find any faulty usage.
Now here's an example.

> ConcurrentModificationException resulting in Kerberos authentication error.
> ---
>
> Key: HADOOP-15487
> URL: https://issues.apache.org/jira/browse/HADOOP-15487
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.13.3. Kerberized, Hadoop-HA, jdk1.8.0_152
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We found the following exception message in a NameNode log. It seems the 
> ConcurrentModificationException caused a Kerberos authentication error.
> It appears to be a JDK bug, similar to HADOOP-13433 (Race in 
> UGI.reloginFromKeytab), but this version of Hadoop (CDH 5.13.3) has already 
> patched HADOOP-13433. (The stack trace also differs.) This cluster runs on JDK 
> 1.8.0_152.
> {noformat}
> 2018-05-19 04:00:00,182 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs/no...@example.com (auth:KERBEROS) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2018-05-19 04:00:00,183 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 10.16.20.122 threw exception 
> [java.util.ConcurrentModificationException]
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
> at java.util.LinkedList$ListItr.next(LinkedList.java:888)
> at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
> at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
> at javax.security.auth.Subject$ClassSet.(Subject.java:1372)
> at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
> at 
> sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:127)
> at 
> sun.security.jgss.krb5.SubjectComber.findMany(SubjectComber.java:69)
> at 
> sun.security.jgss.krb5.ServiceCreds.getInstance(ServiceCreds.java:96)
> at sun.security.jgss.krb5.Krb5Util.getServiceCreds(Krb5Util.java:203)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:74)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:72)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:71)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
> at 
> sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
> at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
> at 
> sun.security.jgss.GSSCredentialImpl.(GSSCredentialImpl.java:62)
> at 
> sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:154)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Server.(GssKrb5Server.java:108)
> at 
> com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer(FactoryImpl.java:85)
> at 
> org.apache.hadoop.security.SaslRpcServer$FastSaslServerFactory.createSaslServer(SaslRpcServer.java:398)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:164)
> at 
> org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:161)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SaslRpcServer.create(SaslRpcServer.java:160)
> at 
> org.apache.hadoop.ipc.Server$Connection.createSaslServer(Server.java:1742)
> at 
> org.apache.hadoop.ipc.Server$Connection.processSaslMessage(Server.java:1522)
> at 
> org.apache.hadoop.ipc.Server$Connection.saslProcess(Server.java:1433)
> at 
> 

[jira] [Commented] (HADOOP-15487) ConcurrentModificationException resulting in Kerberos authentication error.

2018-05-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483232#comment-16483232
 ] 

Wei-Chiu Chuang commented on HADOOP-15487:
--

BTW I think this is the same as HADOOP-15401. [~xiaochen] [~daryn]

bq. Searched impala code but don't find any faulty usage.
Now here's an example.


[jira] [Created] (HADOOP-15487) ConcurrentModificationException resulting in Kerberos authentication error.

2018-05-21 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15487:


 Summary: ConcurrentModificationException resulting in Kerberos 
authentication error.
 Key: HADOOP-15487
 URL: https://issues.apache.org/jira/browse/HADOOP-15487
 Project: Hadoop Common
  Issue Type: Bug
 Environment: CDH 5.13.3. Kerberized, Hadoop-HA, jdk1.8.0_152
Reporter: Wei-Chiu Chuang


We found the following exception message in a NameNode log. It seems the 
ConcurrentModificationException caused a Kerberos authentication error.

It appears to be a JDK bug, similar to HADOOP-13433 (Race in 
UGI.reloginFromKeytab), but this version of Hadoop (CDH 5.13.3) has already 
patched HADOOP-13433. (The stack trace also differs.) This cluster runs on JDK 
1.8.0_152.

{noformat}
2018-05-19 04:00:00,182 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs/no...@example.com (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2018-05-19 04:00:00,183 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 10.16.20.122 threw exception 
[java.util.ConcurrentModificationException]
java.util.ConcurrentModificationException
at 
java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
at java.util.LinkedList$ListItr.next(LinkedList.java:888)
at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
at javax.security.auth.Subject$ClassSet.(Subject.java:1372)
at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
at sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:127)
at sun.security.jgss.krb5.SubjectComber.findMany(SubjectComber.java:69)
at sun.security.jgss.krb5.ServiceCreds.getInstance(ServiceCreds.java:96)
at sun.security.jgss.krb5.Krb5Util.getServiceCreds(Krb5Util.java:203)
at 
sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:74)
at 
sun.security.jgss.krb5.Krb5AcceptCredential$1.run(Krb5AcceptCredential.java:72)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:71)
at 
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
at 
sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
at sun.security.jgss.GSSCredentialImpl.(GSSCredentialImpl.java:62)
at 
sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:154)
at 
com.sun.security.sasl.gsskerb.GssKrb5Server.(GssKrb5Server.java:108)
at 
com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer(FactoryImpl.java:85)
at 
org.apache.hadoop.security.SaslRpcServer$FastSaslServerFactory.createSaslServer(SaslRpcServer.java:398)
at 
org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:164)
at 
org.apache.hadoop.security.SaslRpcServer$1.run(SaslRpcServer.java:161)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at 
org.apache.hadoop.security.SaslRpcServer.create(SaslRpcServer.java:160)
at 
org.apache.hadoop.ipc.Server$Connection.createSaslServer(Server.java:1742)
at 
org.apache.hadoop.ipc.Server$Connection.processSaslMessage(Server.java:1522)
at org.apache.hadoop.ipc.Server$Connection.saslProcess(Server.java:1433)
at 
org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1396)
at 
org.apache.hadoop.ipc.Server$Connection.processRpcOutOfBandRequest(Server.java:2080)
at 
org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1920)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1682)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:896)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:752)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:723)
{noformat}

We saw a few GSSExceptions in the NN log, but only one threw the 
ConcurrentModificationException. This NN had a failover, which was caused by the 
ZKFC hitting a GSSException as well; we suspect it is a related issue.
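The fail-fast behavior in the trace above can be reproduced with a minimal, single-threaded sketch. This is only an illustration of why {{LinkedList$ListItr.checkForComodification}} throws; in the actual report the structural modification comes from a concurrent relogin thread mutating the Subject's credential list while the SASL accept path iterates it.

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;

public class ComodificationDemo {
    // Returns true when the fail-fast iterator detects a mid-iteration mutation.
    static boolean triggersCme() {
        List<String> creds = new LinkedList<>();
        creds.add("krbtgt/EXAMPLE.COM");
        try {
            for (String c : creds) {
                // A structural modification while an iterator is open bumps
                // modCount; the iterator's next() then throws, just as
                // Subject's credential set iteration does in the stack trace.
                creds.add("renewed-" + c);
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(triggersCme());
    }
}
```

In the multi-threaded case the exception is intermittent rather than deterministic, which matches a single occurrence among many GSSExceptions.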





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HADOOP-13066) UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe

2018-05-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483219#comment-16483219
 ] 

Wei-Chiu Chuang commented on HADOOP-13066:
--

Is it a dup of HADOOP-13433?

> UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe
> --
>
> Key: HADOOP-13066
> URL: https://issues.apache.org/jira/browse/HADOOP-13066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Sergey Shelukhin
>Priority: Major
>
> When calling loginFromKerberos, a static variable is set up with the result. 
> If someone logs in as a different user from a different thread, the call to 
> getLoginUser will not return the correct UGI.
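A minimal sketch of the hazard described above. The field and method names below are simplified stand-ins mirroring UGI's API shape, not the real implementation:

```java
public class LoginCacheSketch {
    // UGI caches the result of the most recent login in a static field,
    // so two threads logging in as different principals race on it.
    private static volatile String loginUser;

    static void loginUserFromKeytab(String principal) {
        loginUser = principal; // last writer wins, regardless of caller
    }

    static String getLoginUser() {
        return loginUser;
    }

    public static void main(String[] args) {
        loginUserFromKeytab("alice@EXAMPLE.COM");
        loginUserFromKeytab("bob@EXAMPLE.COM"); // e.g. from another thread
        // alice's thread now observes bob's identity
        System.out.println(getLoginUser());
    }
}
```

The usual workaround is to hold an explicit UGI reference per principal and run work under `ugi.doAs(...)` instead of relying on the shared login-user cache.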




-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Attachment: HADOOP-15486.000.patch

> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch
>
>
> Whenever a datanode is restarted, the registration call received by the 
> NameNode after the restart lands in {{NetworkTopology#add}} via 
> {{DatanodeManager#registerDatanode}} and requires the write lock on 
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of 
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients 
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on 
> {{NetworkTopology#netLock}} holds the write lock on {{FSNamesystem#fsLock}}, 
> causing all other RPC calls that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} 
> lock fair so that the registration thread does not starve.
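The fix can be sketched with `java.util.concurrent.locks.ReentrantReadWriteLock`'s fairness flag. The class below is a self-contained stand-in; the real patch wires the boolean through Hadoop's `Configuration`, and the property name is not shown here:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TopologyLockSketch {
    private final ReentrantReadWriteLock netLock;

    // fair = true grants the write lock to the longest-waiting thread (the
    // registration call) instead of letting a stream of later read requests
    // barge ahead of it indefinitely.
    TopologyLockSketch(boolean fair) {
        this.netLock = new ReentrantReadWriteLock(fair);
    }

    void add(String node) {
        netLock.writeLock().lock();
        try {
            // mutate the topology tree under the write lock
        } finally {
            netLock.writeLock().unlock();
        }
    }

    boolean isFair() {
        return netLock.isFair();
    }
}
```

The trade-off of a fair lock is lower overall throughput, which is why gating it behind a config option (rather than making fairness unconditional) makes sense.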






[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483030#comment-16483030
 ] 

Steve Loughran commented on HADOOP-15482:
-

Jitendra, the term "upgrade jackson" is one to strike fear into anyone who has 
ever had to deal with a Jackson upgrade.

* It's only going to go in as a synchronized update of the entire jackson 2 
package. You can't update one JAR and expect things to work, any more than you 
can safely update the hadoop-hdfs-client JAR while leaving the rest of hadoop-* 
out of sync.
* It's got a real risk of breaking things downstream, including Hive and Spark.

So:
# Why?
# What tests have you done of Hive and Spark with the version bumped up?


> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Description: 
Whenever a datanode is restarted, the registration call received by the 
NameNode after the restart lands in {{NetworkTopology#add}} via 
{{DatanodeManager#registerDatanode}} and requires the write lock on 
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of 
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients 
that were writing to the restarted datanode.
The registration call waiting for the write lock on 
{{NetworkTopology#netLock}} holds the write lock on {{FSNamesystem#fsLock}}, 
causing all other RPC calls that require {{FSNamesystem#fsLock}} to wait.
We can introduce a config property to make the {{NetworkTopology#netLock}} lock 
fair so that the registration thread does not starve.

  was:
Whenever a datanode is restarted, the registration call after the restart 
received by NameNode lands in {{NetworkTopology#add}} via 
{{DatanodeManager#registerDatanode}} requires write lock on 
{{NetworkTopology#netLock}}. This registration thread is getting starved by 
flood of {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by 
clients those who were writing to the restarted datanode.
The registration call which is waiting for write lock on 
{{NetworkTopology#netLock}} is holding write lock on {{FSNamesystem#fsLock}}, 
causing all the other RPC calls which require the lock on 
{{FSNamesystem#fsLock}} wait.
We can make {{NetworkTopology#netLock}} lock fair so that the registration 
thread will not starve.








[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15486:
---
Summary: Add a config option to make NetworkTopology#netLock fair  (was: 
Make NetworkTopology#netLock fair)







[jira] [Commented] (HADOOP-14425) Add more s3guard metrics

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482938#comment-16482938
 ] 

Steve Loughran commented on HADOOP-14425:
-

Some ops also return the count of IOPS consumed. We should publish them as a 
counter + maybe a moving average. Counter => lets you get the cost of a 
specific query; moving average => lets you understand your costs over time.
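A plain-Java sketch of the counter-plus-moving-average idea. The exponential smoothing factor is an arbitrary choice for illustration; real code would publish through Hadoop's metrics2 mutables rather than raw fields:

```java
public class IopsMetricSketch {
    private long totalIops;          // counter: exact cumulative cost so far
    private double movingAvg;        // EMA: recent cost trend per operation
    private static final double ALPHA = 0.2; // smoothing factor (assumed)

    // Record the IOPS consumed by one operation.
    void record(long iopsConsumed) {
        totalIops += iopsConsumed;
        // Exponential moving average: new samples weighted by ALPHA.
        movingAvg = ALPHA * iopsConsumed + (1 - ALPHA) * movingAvg;
    }

    long total() {
        return totalIops;
    }

    double average() {
        return movingAvg;
    }
}
```

Reading the counter before and after a query gives that query's exact cost; the moving average smooths out bursts for trend monitoring.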

> Add more s3guard metrics
> 
>
> Key: HADOOP-14425
> URL: https://issues.apache.org/jira/browse/HADOOP-14425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Ai Deng
>Priority: Major
>
> The metrics suggested to add:
> Status:
> S3GUARD_METADATASTORE_ENABLED
> S3GUARD_METADATASTORE_IS_AUTHORITATIVE
> Operations:
> S3GUARD_METADATASTORE_INITIALIZATION
> S3GUARD_METADATASTORE_DELETE_PATH
> S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
> S3GUARD_METADATASTORE_DELETE_SUBTREE_PATCH
> S3GUARD_METADATASTORE_GET_PATH
> S3GUARD_METADATASTORE_GET_PATH_LATENCY
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
> S3GUARD_METADATASTORE_MOVE_PATH
> S3GUARD_METADATASTORE_PUT_PATH
> S3GUARD_METADATASTORE_PUT_PATH_LATENCY
> S3GUARD_METADATASTORE_CLOSE
> S3GUARD_METADATASTORE_DESTORY
> From S3Guard:
> S3GUARD_METADATASTORE_MERGE_DIRECTORY
> For the failures:
> S3GUARD_METADATASTORE_DELETE_FAILURE
> S3GUARD_METADATASTORE_GET_FAILURE
> S3GUARD_METADATASTORE_PUT_FAILURE
> Etc:
> S3GUARD_METADATASTORE_PUT_RETRY_TIMES






[jira] [Created] (HADOOP-15486) Make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15486:


 Summary: Make NetworkTopology#netLock fair
 Key: HADOOP-15486
 URL: https://issues.apache.org/jira/browse/HADOOP-15486
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever a datanode is restarted, the registration call received by the 
NameNode after the restart lands in {{NetworkTopology#add}} via 
{{DatanodeManager#registerDatanode}} and requires the write lock on 
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of 
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients 
that were writing to the restarted datanode.
The registration call waiting for the write lock on 
{{NetworkTopology#netLock}} holds the write lock on {{FSNamesystem#fsLock}}, 
causing all other RPC calls that require {{FSNamesystem#fsLock}} to wait.
We can make the {{NetworkTopology#netLock}} lock fair so that the registration 
thread does not starve.






[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482893#comment-16482893
 ] 

Jitendra Nath Pandey commented on HADOOP-15482:
---

{quote}Can we modify the jackson2.version property instead of overwriting its 
usage?
{quote}
Agreed, that makes sense.







[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482861#comment-16482861
 ] 

Sean Mackrory commented on HADOOP-15482:


[~jnp] Can we modify the jackson2.version property instead of overwriting its 
usage?
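Concretely, that would mean changing the shared version property in the parent POM rather than pinning a version on the individual dependency; a sketch of the fragment (the property name `jackson2.version` is the one named above, the surrounding file location is assumed):

```xml
<properties>
  <!-- Bump the shared property so every jackson-2 artifact moves together,
       instead of overriding the version on one jackson-databind dependency. -->
  <jackson2.version>2.9.5</jackson2.version>
</properties>
```

Keeping all jackson-2 artifacts on one property avoids the mixed-version classpath problem Steve describes.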







[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482858#comment-16482858
 ] 

genericqa commented on HADOOP-15457:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 44 new + 91 unchanged - 3 fixed = 135 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924370/HADOOP-15457.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 85d47ed286cd 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f48fec8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14668/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14668/testReport/ |
| Max. process+thread count | 1700 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14668/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482851#comment-16482851
 ] 

Jitendra Nath Pandey commented on HADOOP-15482:
---

[~mackrorysd], [~ste...@apache.org] , thoughts? 

> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482844#comment-16482844
 ] 

Jitendra Nath Pandey commented on HADOOP-15482:
---

HADOOP-15299 updated the dependency to 2.9.4. Since 3.2 has not been released yet, it 
is better to upgrade to 2.9.5 as well; there is no additional impact.
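For reviewers following along, such an upgrade normally amounts to bumping the version managed in hadoop-project's pom.xml. A sketch of the change (the property name is illustrative; confirm it against the actual patch):

```xml
<!-- hadoop-project/pom.xml: bump the managed jackson-databind version.
     Property name is an assumption for illustration, not copied from the patch. -->
<properties>
  <jackson2.version>2.9.5</jackson2.version>
</properties>
```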

> Upgrade jackson-databind to version 2.9.5
> -
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Commented] (HADOOP-15426) S3guard throttle event on delete => 400 error code => exception

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482686#comment-16482686
 ] 

Steve Loughran commented on HADOOP-15426:
-

Looking at this a bit more. The AWS docs say "We automatically handle this". My 
stack traces say "no they don't":

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html
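For context: the BatchWriteItem docs actually put the retry burden on the caller. Any UnprocessedItems in the response must be re-submitted with exponential backoff; the SDK's "automatic handling" does not cover the throttled 400 above. A minimal sketch of that caller-side loop, with all names illustrative rather than S3Guard's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class BatchRetrySketch {
    /** Exponential backoff delay: base * 2^attempt, capped. */
    static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long d = baseMillis << Math.min(attempt, 20);
        return Math.min(d, capMillis);
    }

    /**
     * Re-submit until the batch call reports nothing unprocessed, or attempts
     * are exhausted. The writer returns the items it could NOT process,
     * mirroring BatchWriteItem's UnprocessedItems field.
     */
    static <T> boolean writeWithRetry(List<T> items,
                                      Function<List<T>, List<T>> writer,
                                      int maxAttempts) throws InterruptedException {
        List<T> pending = new ArrayList<>(items);
        for (int attempt = 0; attempt < maxAttempts && !pending.isEmpty(); attempt++) {
            if (attempt > 0) {
                // back off before re-submitting the leftovers
                Thread.sleep(delayMillis(attempt, 10, 1000));
            }
            pending = new ArrayList<>(writer.apply(pending));
        }
        return pending.isEmpty();
    }
}
```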

> S3guard throttle event on delete => 400 error code => exception
> ---
>
> Key: HADOOP-15426
> URL: https://issues.apache.org/jira/browse/HADOOP-15426
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> managed to create on a parallel test run
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on 
> s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; Request ID: 
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of 
> configured provisioned throughput for the table was exceeded. Consider 
> increasing your provisioning level with the UpdateTable API. (Service: 
> AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; Request ID: 
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
>   at 
> {code}
> We should be able to handle this. 400 "bad things happened" error though, not 
> the 503 from S3.






[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2018-05-21 Thread Kanwaljeet Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanwaljeet Sachdev updated HADOOP-15457:

Attachment: HADOOP-15457.003.patch

> Add Security-Related HTTP Response Header in WEBUIs.
> 
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> HADOOP-15457.003.patch, YARN-8198.001.patch, YARN-8198.002.patch, 
> YARN-8198.003.patch, YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, YARN web-ui lacks certain security related http response 
> headers. We are planning to add few default ones and also add support for 
> headers to be able to get added via xml config. Planning to make the below 
> two as default.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> below lines
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add into the response header 
> when Jetty prepares the response.
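The lifting step described above could look roughly like the following. The `hadoop.http.header.` prefix and the underscore-to-hyphen mapping are inferred from the example property (underscores keep the key a legal configuration name); the real patch may differ:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderConfigSketch {
    // Matches configuration keys that declare an HTTP response header.
    private static final Pattern HEADER_KEY =
        Pattern.compile("^hadoop\\.http\\.header\\.(.+)$");

    /** Lift hadoop.http.header.* properties into header name/value pairs,
     *  mapping '_' to '-' to recover the real header name. */
    static Map<String, String> liftHeaders(Map<String, String> conf) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            Matcher m = HEADER_KEY.matcher(e.getKey());
            if (m.matches()) {
                headers.put(m.group(1).replace('_', '-'), e.getValue());
            }
        }
        return headers;
    }
}
```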






[jira] [Commented] (HADOOP-15485) reduce/tune read failure fault injection on inconsistent client

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482658#comment-16482658
 ] 

Steve Loughran commented on HADOOP-15485:
-

Alternatively: once a stream read has succeeded, it can never fail again.

Note that it's the default "failure = 1.0f" value which is the trouble here.

> reduce/tune read failure fault injection on inconsistent client
> ---
>
> Key: HADOOP-15485
> URL: https://issues.apache.org/jira/browse/HADOOP-15485
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you crank up the s3guard directory inconsistency rate to stress test the 
> directory listings, then the read failure rate can go up high enough that 
> read IO fails.
> Maybe that read injection should only happen for the first few seconds of a 
> stream being created, to better model delayed consistency, or at least limit 
> the number of times it can surface in a stream. (This would imply some kind 
> of stream-specific binding.)
> Otherwise: provide a way to explicitly set it, including disabling it.






[jira] [Created] (HADOOP-15485) reduce/tune read failure fault injection on inconsistent client

2018-05-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15485:
---

 Summary: reduce/tune read failure fault injection on inconsistent 
client
 Key: HADOOP-15485
 URL: https://issues.apache.org/jira/browse/HADOOP-15485
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.1.0
Reporter: Steve Loughran


If you crank up the s3guard directory inconsistency rate to stress test the 
directory listings, then the read failure rate can go up high enough that 
read IO fails.

Maybe that read injection should only happen for the first few seconds of a 
stream being created, to better model delayed consistency, or at least limit 
the number of times it can surface in a stream. (This would imply some kind 
of stream-specific binding.)

Otherwise: provide a way to explicitly set it, including disabling it.
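One way to realize the stream-specific binding suggested here: give each stream its own failure budget, so injected read faults can surface at most N times per stream and then stop for good. A sketch under assumed names (this is not the inconsistent client's actual API):

```java
// Stream-scoped read fault injection: each stream instance carries its own
// budget of injected failures instead of a global per-read probability.
// The class name and the budget parameter are illustrative assumptions.
public class ReadFaultInjector {
    private final int maxFailuresPerStream;
    private int injected;

    ReadFaultInjector(int maxFailuresPerStream) {
        this.maxFailuresPerStream = maxFailuresPerStream;
    }

    /** Decide whether this read should fail; stops once the budget is spent. */
    boolean shouldFail() {
        if (injected < maxFailuresPerStream) {
            injected++;
            return true;
        }
        return false; // budget exhausted: reads on this stream never fail again
    }
}
```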






[jira] [Commented] (HADOOP-14734) add option to tag DDB table(s) created

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482549#comment-16482549
 ] 

genericqa commented on HADOOP-14734:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-14734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911617/HADOOP-14734-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76ebb5976cc4 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba84284 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14667/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14667/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add option to tag DDB table(s) created
> --
>
> Key: HADOOP-14734
> URL: 

[jira] [Updated] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implementation

2018-05-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15400:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-15226

> Improve S3Guard documentation on Authoritative Mode implementation
> --
>
> Key: HADOOP-15400
> URL: https://issues.apache.org/jira/browse/HADOOP-15400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>
> Part of the design of S3Guard is support for skipping the call to S3 
> listObjects and serving directory listings out of the MetadataStore under 
> certain circumstances.  This feature is called "authoritative" mode.  I've 
> talked to many people about this feature and it seems to be universally 
> confusing.
> I suggest we improve / add a section to the s3guard.md site docs elaborating 
> on what Authoritative Mode is.
> It is *not* treating the MetadataStore (e.g. dynamodb) as the source of truth 
> in general.
> It *is* the ability to short-circuit S3 list objects and serve listings from 
> the MetadataStore in some circumstances: 
> For S3A to skip S3's list objects on some *path*, and serve it directly from 
> the MetadataStore, the following things must all be true:
>  # The MetadataStore implementation persists the bit 
> {{DirListingMetadata.isAuthorititative}} set when calling 
> {{MetadataStore#put(DirListingMetadata)}}
>  # The S3A client is configured to allow metadatastore to be authoritative 
> source of a directory listing (fs.s3a.metadatastore.authoritative=true).
>  # The MetadataStore has a full listing for *path* stored in it.  This only 
> happens if the FS client (s3a) explicitly has stored a full directory listing 
> with {{DirListingMetadata.isAuthorititative=true}} before the said listing 
> request happens.
> Note that #1 only currently happens in LocalMetadataStore. Adding support to 
> DynamoDBMetadataStore is covered in HADOOP-14154.
> Also, the multiple uses of the word "authoritative" are confusing. Two 
> meanings are used:
>  1. In the FS client configuration fs.s3a.metadatastore.authoritative
>  - Behavior of S3A code (not MetadataStore)
>  - "S3A is allowed to skip S3.list() when it has full listing from 
> MetadataStore"
> 2. MetadataStore
>  When storing a dir listing, can set a bit isAuthoritative
>  1 : "full contents of directory"
>  0 : "may not be full listing"
> Note that a MetadataStore *MAY* persist this bit. (not *MUST*).
> We should probably rename the {{DirListingMetadata.isAuthorititative}} to 
> {{.fullListing}} or at least put a comment where it is used to clarify its 
> meaning.
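The three conditions above boil down to a single short-circuit test: serve the listing from the MetadataStore only when every one of them holds, otherwise fall through to S3 listObjects. A sketch of that decision (names are illustrative, not the real S3A code):

```java
// Short-circuit decision for "authoritative" directory listings: a cached
// listing is served only when all three conditions from the description hold.
public class AuthoritativeListingSketch {
    static boolean canServeFromMetadataStore(boolean configAuthoritative,
                                             boolean listingPresent,
                                             boolean listingIsFull) {
        return configAuthoritative // fs.s3a.metadatastore.authoritative=true
            && listingPresent      // MetadataStore has an entry for the path
            && listingIsFull;      // entry stored with isAuthoritative=true
    }
}
```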






[jira] [Commented] (HADOOP-15480) AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482502#comment-16482502
 ] 

Steve Loughran commented on HADOOP-15480:
-

LGTM. As usual, which endpoint have you tested with?

> AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo
> ---
>
> Key: HADOOP-15480
> URL: https://issues.apache.org/jira/browse/HADOOP-15480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15480.001.patch
>
>
> When running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB, the 
> testDiffCommand test fails with the following:
> {noformat}
> testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
> Time elapsed: 8.059 s  <<< FAILURE!
> java.lang.AssertionError: 
> Mismatched metadata store outputs: MS D   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
>  expected:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4]> 
> but was:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:382)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>  

[jira] [Commented] (HADOOP-14734) add option to tag DDB table(s) created

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482497#comment-16482497
 ] 

Steve Loughran commented on HADOOP-14734:
-

[~abrahamfine]: I've not forgotten about this, just deep in other things. When 
I sit down to do a big "where are we with s3guard" review, this will get my 
attention. Honest!

> add option to tag DDB table(s) created
> --
>
> Key: HADOOP-14734
> URL: https://issues.apache.org/jira/browse/HADOOP-14734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>Priority: Minor
> Attachments: HADOOP-14734-001.patch, HADOOP-14734-002.patch, 
> HADOOP-14734-003.patch
>
>
> Many organisations have a "no untagged" resource policy; s3guard runs into 
> this when a table is created untagged. If there's a strict "delete untagged 
> resources" policy, the tables will go without warning.
> Proposed: we add an option which can be used to declare the tags for a table 
> when created, use it in creation. No need to worry about updating/viewing 
> tags, as the AWS console can do that






[jira] [Updated] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15478:

   Resolution: Fixed
Fix Version/s: 3.1.1
   2.10.0
   Status: Resolved  (was: Patch Available)

+1
Applied to branch-3.1, then cherrypicked to branch-2 and retested against 
Azure Ireland. The new test was happy.

I did see a failure of {{ITestAzureFileSystemInstrumentation}}: its assertion 
about bytes written in the last second failed, even when I tried a standalone 
run of it:
{code}
[ERROR]   
ITestAzureFileSystemInstrumentation.testMetricsOnFileCreateRead:162->Assert.assertTrue:41->Assert.fail:88
 The bytes written in the last second 0 is pretty far from the expected range 
of around 1000 bytes plus a little overhead.
{code}

I think my network is just playing up today, with bandwidth/latency below what 
the tests expect. If it's recurrent, we might have to think about making the 
assertion checks tuneable.

> WASB: hflush() and hsync() regression
> -
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.1.1
>
> Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().






[jira] [Commented] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482385#comment-16482385
 ] 

Hudson commented on HADOOP-15478:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14241 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14241/])
HADOOP-15478. WASB: hflush() and hsync() regression. Contributed by (stevel: 
rev ba842847c94d31d3f737226d954c566b5d88656b)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobOutputStream.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestOutputStreamSemantics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SyncableDataOutputStream.java


> WASB: hflush() and hsync() regression
> -
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().






[jira] [Commented] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482384#comment-16482384
 ] 

genericqa commented on HADOOP-15482:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15482 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924322/HADOOP-15482.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 22e207203b61 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a23ff8d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14665/testReport/ |
| Max. process+thread count | 330 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14665/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade jackson-databind to version 2.9.5
> -----------------------------------------
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5
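The upgrade above is a one-line dependency version bump in the Hadoop parent POM. A minimal sketch of the kind of change the attached patch makes (the property name jackson2.version and the exact POM layout are assumptions based on common Hadoop conventions, not taken from the patch itself):

```xml
<!-- Hypothetical fragment of hadoop-project/pom.xml; property name is an
     assumption, not confirmed against the attached HADOOP-15482.001.patch. -->
<properties>
  <jackson2.version>2.9.5</jackson2.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>${jackson2.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Centralizing the version in dependencyManagement is why the Yetus run above touches only the hadoop-project module.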




[jira] [Commented] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482371#comment-16482371
 ] 

Steve Loughran commented on HADOOP-15478:
-----------------------------------------

patch 002 is patch as applied: added LF at end of test file, reordered imports 
slightly.

> WASB: hflush() and hsync() regression
> -------------------------------------
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().
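The regression described above amounts to gating flush behavior on blob type: for the default block-blob case, hflush()/hsync() should stay no-ops so frequent callers like Spark never consume the 50,000-block budget. A minimal, self-contained sketch of that guard (class, field, and method names here are illustrative assumptions, not the actual hadoop-azure source):

```java
import java.io.ByteArrayOutputStream;

// Hypothetical model of a WASB output stream; not the real hadoop-azure class.
class BlockBlobOutputStream {
    // Azure block blobs allow at most 50,000 committed blocks per blob.
    static final int MAX_BLOCKS = 50_000;

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final boolean isPageBlob;   // page blobs support a real flush
    int blocksCommitted = 0;            // exposed for the demo below

    BlockBlobOutputStream(boolean isPageBlob) {
        this.isPageBlob = isPageBlob;
    }

    void write(int b) {
        buffer.write(b);
    }

    // hflush()/hsync() analogue: commit a block only for page blobs. For the
    // default block-blob case this returns immediately (the pre-HADOOP-14520
    // behavior), so repeated calls cannot exhaust the block limit.
    void hflush() {
        if (!isPageBlob) {
            return;                     // no-op for block blobs
        }
        if (blocksCommitted >= MAX_BLOCKS) {
            throw new IllegalStateException(
                "block limit of " + MAX_BLOCKS + " exceeded");
        }
        blocksCommitted++;              // stand-in for uploading the buffer
        buffer.reset();
    }
}
```

With this guard, a Spark-style workload that calls hflush() after every small write commits zero blocks on a block blob, instead of one block per call.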






[jira] [Commented] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482373#comment-16482373
 ] 

genericqa commented on HADOOP-15478:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15478 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15478 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924329/HADOOP-15478-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14666/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WASB: hflush() and hsync() regression
> -------------------------------------
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().






[jira] [Updated] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15478:

Attachment: HADOOP-15478-002.patch

> WASB: hflush() and hsync() regression
> -------------------------------------
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().






[jira] [Commented] (HADOOP-15478) WASB: hflush() and hsync() regression

2018-05-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482364#comment-16482364
 ] 

Steve Loughran commented on HADOOP-15478:
-----------------------------------------

+1 for trunk & branch-3.1; running the new test on branch-2 too to verify it's 
OK to go in there

> WASB: hflush() and hsync() regression
> -------------------------------------
>
> Key: HADOOP-15478
> URL: https://issues.apache.org/jira/browse/HADOOP-15478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15478.001.patch
>
>
> HADOOP-14520 introduced a regression in hflush() and hsync().  Previously, 
> for the default case where users upload data as block blobs, these were 
> no-ops.  Unfortunately, HADOOP-14520 accidentally implemented hflush() and 
> hsync() by default, so any data buffered in the stream is immediately 
> uploaded to storage.  This new behavior is undesirable, because block blobs 
> have a limit of 50,000 blocks.  Spark users are now seeing failures due to 
> exceeding the block limit, since Spark frequently invokes hflush().






[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HADOOP-15482:
---------------------------------
Status: Patch Available  (was: Open)

> Upgrade jackson-databind to version 2.9.5
> -----------------------------------------
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HADOOP-15482:
---------------------------------
Attachment: HADOOP-15482.001.patch

> Upgrade jackson-databind to version 2.9.5
> -----------------------------------------
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15482.001.patch
>
>
> This Jira aims to upgrade jackson-databind to version 2.9.5






[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-15482:
------------------------------------------
Summary: Upgrade jackson-databind to version 2.9.5  (was: Upgrade 
jackson-databind to version 2.8.11.1)

> Upgrade jackson-databind to version 2.9.5
> -----------------------------------------
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>
> This Jira aims to upgrade jackson-databind to version 2.8.11.1.






[jira] [Updated] (HADOOP-15482) Upgrade jackson-databind to version 2.9.5

2018-05-21 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-15482:
------------------------------------------
Description: This Jira aims to upgrade jackson-databind to version 2.9.5  
(was: This Jira aims to upgrade jackson-databind to version 2.8.11.1.)

> Upgrade jackson-databind to version 2.9.5
> -----------------------------------------
>
> Key: HADOOP-15482
> URL: https://issues.apache.org/jira/browse/HADOOP-15482
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>
> This Jira aims to upgrade jackson-databind to version 2.9.5


