[jira] [Comment Edited] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE
[ https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553838#comment-16553838 ]

Xiao Chen edited comment on HADOOP-15593 at 7/24/18 5:57 AM:
-------------------------------------------------------------

Thanks [~gabor.bota] and [~eyang]. Working around the NPE sounds good to me (but sad). :)

I'm also looking at this particular code block:
{code}
try {
  Date endTime = tgt.getEndTime();
  if (tgt != null && endTime != null && !tgt.isDestroyed()) {
    tgtEndTime = endTime.getTime();
  }
} catch (NullPointerException npe) {
{code}
- Do we really need the tgt==null check at all? What's the scenario in which tgt can be null here? (If the check is needed, it should happen before the {{getEndTime}} call, but it doesn't look possible to me that tgt can be null.)
- Suggest making the NPE try-catch strictly around the line we're trying to work around: {{tgt.getEndTime();}}. Then also add a pointer to the JDK issue JDK-8147772 in the code comment, to save future readers the time of searching this jira. The comment should also explain that the NPE is only possible prior to the JDK fix.
- We also need a unit test for this. This can be done by using a mocked tgt.

> UserGroupInformation TGT renewer throws NPE
> -------------------------------------------
>
> Key: HADOOP-15593
> URL: https://issues.apache.org/jira/browse/HADOOP-15593
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 3.0.0
> Reporter: Wei-Chiu Chuang
> Assignee: Gabor Bota
> Priority: Blocker
> Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, HADOOP-15593.003.patch
>
> Found the following NPE thrown in the UGI tgt renewer. The NPE was thrown within an exception handler, so the original exception was hidden, though it's likely caused by an expired tgt.
> {noformat}
> 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[TGT Renewer for f...@example.com,5,main]
> java.lang.NullPointerException
>         at javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482)
>         at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889]. The relevant code was added in HADOOP-13590. Filing this jira to handle the exception better.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
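The narrowed try-catch suggested in the comment above can be sketched as follows. This is an illustrative sketch, not the actual Hadoop patch: {{TicketLike}} is a hypothetical stand-in for {{javax.security.auth.kerberos.KerberosTicket}} (constructing a real ticket needs full Kerberos state), and {{tgtEndTime}} is a hypothetical helper name.

```java
import java.util.Date;

public class TgtEndTimeSketch {
    /** Hypothetical stand-in for javax.security.auth.kerberos.KerberosTicket. */
    interface TicketLike {
        Date getEndTime();      // may throw NPE on a destroyed ticket before the JDK-8147772 fix
        boolean isDestroyed();
    }

    /**
     * Returns the ticket end time in millis, or fallback when it cannot be read.
     * The try-catch is kept strictly around getEndTime(), per the review suggestion.
     */
    static long tgtEndTime(TicketLike tgt, long fallback) {
        Date endTime;
        try {
            // NPE is only possible prior to the JDK fix (JDK-8147772),
            // when a destroyed ticket nulls its internal endTime field.
            endTime = tgt.getEndTime();
        } catch (NullPointerException npe) {
            return fallback;
        }
        if (endTime != null && !tgt.isDestroyed()) {
            return endTime.getTime();
        }
        return fallback;
    }

    public static void main(String[] args) {
        // A "mocked tgt" in the spirit of the suggested unit test:
        TicketLike destroyed = new TicketLike() {
            public Date getEndTime() { throw new NullPointerException(); }
            public boolean isDestroyed() { return true; }
        };
        TicketLike live = new TicketLike() {
            public Date getEndTime() { return new Date(42000L); }
            public boolean isDestroyed() { return false; }
        };
        System.out.println(tgtEndTime(destroyed, -1L)); // -1
        System.out.println(tgtEndTime(live, -1L));      // 42000
    }
}
```

The main method doubles as the shape a mocked-tgt unit test could take: one ticket that throws on getEndTime(), one that behaves normally.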
[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs
[ https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553822#comment-16553822 ]

Xiao Chen commented on HADOOP-15609:
------------------------------------

Thanks Kitti, patch 4 LGTM. Will wait for another day in case [~ste...@apache.org] or other watchers have further comments.

> Retry KMS calls when SSLHandshakeException occurs
> -------------------------------------------------
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, kms
> Affects Versions: 3.1.0
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's connection is lost, an SSLHandshakeException is thrown, and the operation is not retried:
> {code}
> W0711 18:19:50.213472 1508 LoadBalancingKMSClientProvider.java:132] KMS provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
>         at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
>         at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>         at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>         at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>         at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>         at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>         at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
>         at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
>         at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
>         at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>         at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>         at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>         at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>         at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>         at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>         at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
>         at sun.security.ssl.InputRecord.read(InputRecord.java:505)
>         at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>         ... 22 more
> W0711 18:19:50.239328 1508 LoadBalancingKMSClientProvider.java:149] Aborting since the Request has failed with all KMS providers(depending on hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) in the group OR the exception is not recoverable
> {code}
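The retry behaviour the issue asks for can be illustrated with a minimal generic loop. This is a sketch only: the real change wires SSLHandshakeException handling into Hadoop's FailoverOnNetworkExceptionRetry policy, while {{KmsRetrySketch}} and {{callWithRetry}} here are hypothetical names.

```java
import javax.net.ssl.SSLHandshakeException;
import java.util.concurrent.Callable;

public class KmsRetrySketch {
    /**
     * Retries op up to maxRetries extra times when an SSLHandshakeException
     * (a transient handshake/connection failure) is thrown, instead of
     * aborting on the first occurrence.
     */
    static <T> T callWithRetry(Callable<T> op, int maxRetries) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (SSLHandshakeException e) {
                if (attempt >= maxRetries) {
                    throw e; // retries exhausted: surface the failure
                }
                // otherwise fall through and retry the call
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails once with a handshake error, then succeeds on the retry.
        String result = callWithRetry(() -> {
            if (calls[0]++ == 0) {
                throw new SSLHandshakeException("Remote host closed connection during handshake");
            }
            return "decrypted-key";
        }, 1);
        System.out.println(result); // decrypted-key
    }
}
```

A production policy would typically also sleep between attempts and distinguish recoverable from fatal SSL failures; this loop only shows the retry-on-handshake-exception decision itself.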
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553814#comment-16553814 ]

genericqa commented on HADOOP-12953:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 52s | Maven dependency ordering for branch |
| +1 | mvninstall | 27m 1s | trunk passed |
| +1 | compile | 29m 17s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 10m 53s | trunk passed |
| +1 | shadedclient | 22m 39s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 56s | trunk passed |
| +1 | javadoc | 2m 16s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| -1 | mvninstall | 1m 3s | hadoop-hdfs in the patch failed. |
| -1 | mvninstall | 7m 54s | hadoop-hdfs-native-client in the patch failed. |
| +1 | compile | 35m 14s | the patch passed |
| +1 | cc | 35m 14s | the patch passed |
| +1 | javac | 35m 14s | the patch passed |
| +1 | checkstyle | 0m 35s | the patch passed |
| +1 | mvnsite | 13m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 52s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 4m 36s | the patch passed |
| +1 | javadoc | 2m 25s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 17s | hadoop-common in the patch passed. |
| -1 | unit | 115m 47s | hadoop-hdfs in the patch failed. |
| -1 | unit | 19m 19s | hadoop-hdfs-native-client in the patch failed. |
| +1 | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 315m 22s | |

|| Reason || Tests ||
| Failed CTEST tests |
[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553697#comment-16553697 ]

Yiqun Lin commented on HADOOP-15611:
------------------------------------

Almost looks great to me now. [~jianliang.wu], would you mind attaching some sample log output from your local test, so we can see whether the log is what we want? +1 once addressed.

> Improve log in FairCallQueue
> ----------------------------
>
> Key: HADOOP-15611
> URL: https://issues.apache.org/jira/browse/HADOOP-15611
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Ryan Wu
> Priority: Minor
> Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch, HADOOP-15611.003.patch
>
> In using FairCallQueue, we found that some key logging is missing. Only a few logs are printed, which makes this feature hard to learn and debug.
> At least the following places could print more logs:
> * DecayRpcScheduler#decayCurrentCounts
> * WeightedRoundRobinMultiplexer#moveToNextQueue
[jira] [Commented] (HADOOP-15544) ABFS: validate packing, transient classpath, hadoop fs CLI
[ https://issues.apache.org/jira/browse/HADOOP-15544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553664#comment-16553664 ]

Steve Loughran commented on HADOOP-15544:
-----------------------------------------

Also worth exploring is my cloudstore/storediag module, which I keep around to debug this stuff: [https://github.com/steveloughran/cloudstore/releases]
{code}
bin/hadoop -jar cloudstore-2.8.jar abfs://storeuri/path
{code}
That attempts to load the abfs filesystem, looks for (and logs) the classes/dependencies it knows of, then does some basic IO. It's intended for remote debugging of support calls, so it can be expanded to do things like hostname and proxy lookup, once the details are known.

> ABFS: validate packing, transient classpath, hadoop fs CLI
> ----------------------------------------------------------
>
> Key: HADOOP-15544
> URL: https://issues.apache.org/jira/browse/HADOOP-15544
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: dependencies.txt
>
> Validate the packaging and dependencies of ABFS:
> * hadoop-cloud-storage artifact to export everything needed
> * {{hadoop fs -ls abfs://path}} to work in ASF distributions
> * check the transient classpath (e.g. Spark)
> Spark master's hadoop-cloud module depends on hadoop-cloud-storage if you build with the hadoop-3.1 profile, so it should automatically get in there. Just need to check that it picks it up too.
[jira] [Commented] (HADOOP-15544) ABFS: validate packing, transient classpath, hadoop fs CLI
[ https://issues.apache.org/jira/browse/HADOOP-15544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553662#comment-16553662 ]

Steve Loughran commented on HADOOP-15544:
-----------------------------------------

No need for a cluster. Run:
{code}
mvn package -Pdist -DskipTests -Dmaven.javadoc.skip=true -DskipShade
{code}
This puts things under hadoop-dist/target/hadoop-3.2.0-SNAPSHOT
* cd there
* copy your log4j and core-site.xml settings into etc/hadoop under there
* in your ~/.hadooprc file, add whichever Hadoop modules you want to always load; here's one of mine
{code}
> cat ~/.hadooprc
hadoop_add_to_classpath_tools hadoop-aws hadoop-azure hadoop-azuredatalake
{code}
Then go {{bin/hadoop fs -ls abfs://something}} to see what happens.
[jira] [Commented] (HADOOP-15627) S3A ITests failing if bucket explicitly set to s3guard+DDB
[ https://issues.apache.org/jira/browse/HADOOP-15627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553636#comment-16553636 ]

Steve Loughran commented on HADOOP-15627:
-----------------------------------------

I'm also seeing this
{code}
n: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 48.842 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency
[ERROR] testInconsistentS3ClientDeletes(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency)  Time elapsed: 48.72 s  <<< FAILURE!
java.lang.AssertionError: InconsistentAmazonS3Client added back objects incorrectly in a non-recursive listing expected:<3> but was:<2>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testInconsistentS3ClientDeletes(ITestS3GuardListConsistency.java:528)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
FWIW, I'm running these tests against S3 Ireland + dynamo, but this time *from the US*. There will be different latencies. I worry that this is triggering the problems, as now we are seeing more real-world inconsistency or delays.

> S3A ITests failing if bucket explicitly set to s3guard+DDB
> ----------------------------------------------------------
>
> Key: HADOOP-15627
> URL: https://issues.apache.org/jira/browse/HADOOP-15627
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> Repeatable failure in {{ITestS3GuardWriteBack.testListStatusWriteBack}}
> Possible causes could include:
> * test not setting up the three fs instances
> * (disabled) caching not isolating properly
> * something more serious
[jira] [Commented] (HADOOP-15627) S3A ITests failing if bucket explicitly set to s3guard+DDB
[ https://issues.apache.org/jira/browse/HADOOP-15627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553631#comment-16553631 ]

Steve Loughran commented on HADOOP-15627:
-----------------------------------------

Similarly
{code}
[ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 309.672 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal
[ERROR] testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  Time elapsed: 29.885 s  <<< FAILURE!
java.lang.AssertionError: Mismatched metadata store outputs:
S3 F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-0
MS F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-0
S3 F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-2
MS F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-2
S3 F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-3
MS F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-3
S3 F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-1
MS F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-1
S3 F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-4
MS F 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-4
MS D 0 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only
MS F 100 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-3
MS F 100 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-2
MS F 100 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-4
MS F 100 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-0
MS F 100 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-1
expected:<[s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-3,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-2,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-4,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-0,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-1]>
but was:<[s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-3,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-0,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-2,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-4,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-2,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-0,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-3,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/ms_only/file-1,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-1,
 s3a://hwdev-steve-ireland-new/fork-0005/test/test-diff/s3_only/file-4]>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:449)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
[jira] [Updated] (HADOOP-15627) S3A ITests failing if bucket explicitly set to s3guard+DDB
[ https://issues.apache.org/jira/browse/HADOOP-15627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-15627:
------------------------------------
    Summary: S3A ITests failing if bucket explicitly set to s3guard+DDB  (was: Failure in ITestS3GuardWriteBack.testListStatusWriteBack)
[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec
[ https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553600#comment-16553600 ]

Steve Loughran commented on HADOOP-15612:
-----------------------------------------

LGTM +1

> Improve exception when tfile fails to load LzoCodec
> ---------------------------------------------------
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Priority: Major
> Attachments: HADOOP-15612.001.patch, HADOOP-15612.002.patch, HADOOP-15612.003.patch
>
> When hadoop-lzo is not on the classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause, given the default class name. The real root cause is not attached to the exception thrown.
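The fix direction described in the issue (attach the root cause instead of swallowing it) amounts to plain exception chaining. A minimal sketch follows; {{TfileLzoErrorSketch}} and {{codecLoadFailure}} are hypothetical names, not the actual tfile code.

```java
import java.io.IOException;

public class TfileLzoErrorSketch {
    /**
     * Builds the "codec not specified" IOException with the original
     * failure attached as the cause, so callers see the real root cause
     * (e.g. the ClassNotFoundException) rather than only the hint message.
     */
    static IOException codecLoadFailure(Throwable rootCause) {
        return new IOException(
            "LZO codec class not specified. Did you forget to set property "
            + "io.compression.codec.lzo.class?", rootCause);
    }

    public static void main(String[] args) {
        // Simulated root cause: hadoop-lzo missing from the classpath.
        Throwable root = new ClassNotFoundException("com.hadoop.compression.lzo.LzoCodec");
        IOException e = codecLoadFailure(root);
        System.out.println(e.getCause().getMessage());
    }
}
```

With the cause chained, the full stack trace now ends in "Caused by: java.lang.ClassNotFoundException: ...", which is exactly the information the original exception was hiding.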
[jira] [Commented] (HADOOP-15627) Failure in ITestS3GuardWriteBack.testListStatusWriteBack
[ https://issues.apache.org/jira/browse/HADOOP-15627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553587#comment-16553587 ]

Steve Loughran commented on HADOOP-15627:
-----------------------------------------

The cause here is that if the test FS has s3guard enabled always, then the {{Assume.assumeTrue(getFileSystem().hasMetadataStore());}} check at the start of the test holds, but the FS creation code is only consistent if you set -Ds3guard on the test run. Otherwise, the metadata store setup in maybeEnabledS3Guard is skipped, and your test filesystems are all copies of the FS as created/configured by default, which does have s3guard enabled, with whatever writeback/auth options come with it.

Fix: change how the new FS instances are configured and created.
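One way for tests to avoid silently inheriting the bucket's s3guard+DDB settings is to pin the metadata store per bucket in the test configuration. A minimal sketch, assuming the standard S3A per-bucket override pattern and the NullMetadataStore class from the S3Guard documentation; the bucket name is a placeholder:

```xml
<!-- Hypothetical per-bucket override for a test bucket: pin the metadata
     store explicitly so FS instances created by tests do not silently
     inherit the bucket's s3guard+DDB configuration. -->
<property>
  <name>fs.s3a.bucket.example-bucket.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
</property>
```

Tests that need s3guard on can then enable it explicitly in the Configuration they pass when creating their own FS instances, instead of relying on whatever the default FS was configured with.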
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553562#comment-16553562 ]

genericqa commented on HADOOP-12953:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 58s | trunk passed |
| +1 | compile | 28m 53s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 10m 5s | trunk passed |
| +1 | shadedclient | 21m 49s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 30s | trunk passed |
| +1 | javadoc | 2m 17s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 19s | Maven dependency ordering for patch |
| +1 | mvninstall | 9m 15s | the patch passed |
| +1 | compile | 27m 3s | the patch passed |
| +1 | cc | 27m 3s | the patch passed |
| +1 | javac | 27m 3s | the patch passed |
| +1 | checkstyle | 0m 24s | the patch passed |
| +1 | mvnsite | 9m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 3s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 4m 16s | the patch passed |
| +1 | javadoc | 2m 18s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 23s | hadoop-common in the patch passed. |
| +1 | unit | 75m 24s | hadoop-hdfs in the patch passed. |
| -1 | unit | 18m 39s | hadoop-hdfs-native-client in the patch failed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 252m 40s | |

|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
| | test_libhdfs_threaded_hdfspp_test_shim_static |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce
[jira] [Commented] (HADOOP-15627) Failure in ITestS3GuardWriteBack.testListStatusWriteBack
[ https://issues.apache.org/jira/browse/HADOOP-15627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553560#comment-16553560 ] Steve Loughran commented on HADOOP-15627: - {code} rors: 0, Skipped: 0, Time elapsed: 40.619 s - in org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster [INFO] Running org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 15.955 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardWriteBack [ERROR] testListStatusWriteBack(org.apache.hadoop.fs.s3a.ITestS3GuardWriteBack) Time elapsed: 15.817 s <<< FAILURE! java.lang.AssertionError: No results from listChildren s3a://hwdev-steve-ireland-new/fork-0002/test/ListStatusWriteBack at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertNotNull(Assert.java:621) at org.apache.hadoop.fs.s3a.ITestS3GuardWriteBack.testListStatusWriteBack(ITestS3GuardWriteBack.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > Failure in 
ITestS3GuardWriteBack.testListStatusWriteBack > > > Key: HADOOP-15627 > URL: https://issues.apache.org/jira/browse/HADOOP-15627 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Repeatable failure in {{ITestS3GuardWriteBack.testListStatusWriteBack}} > Possible causes could include > * test not setting up the three fs instances > * (disabled) caching not isolating properly > * something more serious
[jira] [Created] (HADOOP-15627) Failure in ITestS3GuardWriteBack.testListStatusWriteBack
Steve Loughran created HADOOP-15627: --- Summary: Failure in ITestS3GuardWriteBack.testListStatusWriteBack Key: HADOOP-15627 URL: https://issues.apache.org/jira/browse/HADOOP-15627 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.2.0 Reporter: Steve Loughran Assignee: Steve Loughran Repeatable failure in {{ITestS3GuardWriteBack.testListStatusWriteBack}} Possible causes could include * test not setting up the three fs instances * (disabled) caching not isolating properly * something more serious
[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext
[ https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553532#comment-16553532 ] Steve Loughran commented on HADOOP-14396: - ...answer to that is probably: source file doesn't exist and FileContext.append() downgrades to create if the file doesn't exist yet > Add builder interface to FileContext > > > Key: HADOOP-14396 > URL: https://issues.apache.org/jira/browse/HADOOP-14396 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch, > HADOOP-14396.02.patch, HADOOP-14396.02.patch > > > Add builder interface for {{FileContext#create}} and {{FileContext#append}}.
[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext
[ https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553527#comment-16553527 ] Steve Loughran commented on HADOOP-14396: - [~eddyxu] Filed HADOOP-15626; the new test is failing as S3A doesn't support append. We can just skip this test there, but now I'm curious: why don't any of the other FC append tests fail? > Add builder interface to FileContext > > > Key: HADOOP-14396 > URL: https://issues.apache.org/jira/browse/HADOOP-14396 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch, > HADOOP-14396.02.patch, HADOOP-14396.02.patch > > > Add builder interface for {{FileContext#create}} and {{FileContext#append}}.
[jira] [Commented] (HADOOP-15626) FileContextMainOperationsBaseTest.testBuilderCreateAppendExistingFile fails on filesystems without append.
[ https://issues.apache.org/jira/browse/HADOOP-15626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553521#comment-16553521 ] Steve Loughran commented on HADOOP-15626: - {code} [ERROR] Tests run: 67, Failures: 0, Errors: 1, Skipped: 3, Time elapsed: 973.571 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations [ERROR] testBuilderCreateAppendExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations) Time elapsed: 10.883 s <<< ERROR! java.lang.UnsupportedOperationException: Append is not supported by S3AFileSystem at org.apache.hadoop.fs.s3a.S3AFileSystem.append(S3AFileSystem.java:834) at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1276) at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100) at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605) at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:696) at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:692) at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) at org.apache.hadoop.fs.FileContext.create(FileContext.java:698) at org.apache.hadoop.fs.FileContext$FCDataOutputStreamBuilder.build(FileContext.java:739) at org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testBuilderCreateAppendExistingFile(FileContextMainOperationsBaseTest.java:840) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) {code} > FileContextMainOperationsBaseTest.testBuilderCreateAppendExistingFile fails > on filesystems without append. > -- > > Key: HADOOP-15626 > URL: https://issues.apache.org/jira/browse/HADOOP-15626 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Major > > After HADOOP-14396. 
one of the new tests fails on S3A because append() isn't > supported there
[jira] [Created] (HADOOP-15626) FileContextMainOperationsBaseTest.testBuilderCreateAppendExistingFile fails on filesystems without append.
Steve Loughran created HADOOP-15626: --- Summary: FileContextMainOperationsBaseTest.testBuilderCreateAppendExistingFile fails on filesystems without append. Key: HADOOP-15626 URL: https://issues.apache.org/jira/browse/HADOOP-15626 Project: Hadoop Common Issue Type: Bug Components: fs/s3, test Affects Versions: 3.2.0 Reporter: Steve Loughran After HADOOP-14396. one of the new tests fails on S3A because append() isn't supported there
[jira] [Assigned] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-9629: --- Assignee: Chris Nauroth (was: Wei-Chiu Chuang) > Support Windows Azure Storage - Blob as a file system in Hadoop > --- > > Key: HADOOP-9629 > URL: https://issues.apache.org/jira/browse/HADOOP-9629 > Project: Hadoop Common > Issue Type: New Feature > Components: tools >Reporter: Mostafa Elhemali >Assignee: Chris Nauroth >Priority: Major > Fix For: 2.7.0 > > Attachments: HADOOP-9629 - Azure Filesystem - Information for > developers.docx, HADOOP-9629 - Azure Filesystem - Information for > developers.pdf, HADOOP-9629.2.patch, HADOOP-9629.3.patch, HADOOP-9629.patch, > HADOOP-9629.trunk.1.patch, HADOOP-9629.trunk.2.patch, > HADOOP-9629.trunk.3.patch, HADOOP-9629.trunk.4.patch, > HADOOP-9629.trunk.5.patch > > > h2. Description > This JIRA incorporates adding a new file system implementation for accessing > Windows Azure Storage - Blob from within Hadoop, such as using blobs as input > to MR jobs or configuring MR jobs to put their output directly into blob > storage. > h2. High level design > At a high level, the code here extends the FileSystem class to provide an > implementation for accessing blob storage; the scheme wasb is used for > accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI > scheme: {code}wasb[s]://@/path/to/file{code} to address > individual blobs. We use the standard Azure Java SDK > (com.microsoft.windowsazure) to do most of the work. In order to map a > hierarchical file system over the flat name-value pair nature of blob > storage, we create a specially tagged blob named path/to/dir whenever we > create a directory called path/to/dir, then files under that are stored as > normal blobs path/to/dir/file. We have many metrics implemented for it using > the Metrics2 interface. 
Tests are implemented mostly using a mock > implementation for the Azure SDK functionality, with an option to test > against a real blob storage if configured (instructions provided inside in > README.txt). > h2. Credits and history > This has been ongoing work for a while, and the early version of this work > can be seen in HADOOP-8079. This JIRA is a significant revision of that and > we'll post the patch here for Hadoop trunk first, then post a patch for > branch-1 as well for backporting the functionality if accepted. Credit for > this work goes to the early team: [~minwei], [~davidlao], [~lengningliu] and > [~stojanovic] as well as multiple people who have taken over this work since > then (hope I don't forget anyone): [~dexterb], Johannes Klein, [~ivanmi], > Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and > [~chuanliu]. > h2. Test > Besides unit tests, we have used WASB as the default file system in our > service product. (HDFS is also used but not as default file system.) Various > different customer and test workloads have been run against clusters with > such configurations for quite some time. The current version reflects to the > version of the code tested and used in our production environment. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
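The design described above maps a hierarchical namespace onto a flat blob store by writing a specially tagged marker blob for each directory. A minimal sketch of that idea, using an in-memory map as a stand-in for the blob service (all names here are illustrative, not the actual WASB implementation):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative model of layering directories over a flat blob namespace:
 * mkdir writes a tagged marker blob at "path/to/dir"; files under it are
 * plain blobs named "path/to/dir/file". The tag value is hypothetical.
 */
public class FlatBlobNamespace {
  static final String DIR_MARKER = "hdi_isfolder=true"; // hypothetical directory tag

  // blob name -> metadata/content; stand-in for the real blob store
  final Map<String, String> blobs = new HashMap<>();

  void mkdir(String path) { blobs.put(path, DIR_MARKER); }

  void createFile(String path, String data) { blobs.put(path, data); }

  /** A path is a directory iff a marker blob with the directory tag exists. */
  boolean isDirectory(String path) { return DIR_MARKER.equals(blobs.get(path)); }

  boolean exists(String path) { return blobs.containsKey(path); }

  public static void main(String[] args) {
    FlatBlobNamespace fs = new FlatBlobNamespace();
    fs.mkdir("path/to/dir");
    fs.createFile("path/to/dir/file", "hello");
    System.out.println(fs.isDirectory("path/to/dir"));      // prints: true
    System.out.println(fs.isDirectory("path/to/dir/file")); // prints: false
  }
}
```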
[jira] [Assigned] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-9629: --- Assignee: Wei-Chiu Chuang (was: Chris Nauroth) > Support Windows Azure Storage - Blob as a file system in Hadoop > --- > > Key: HADOOP-9629 > URL: https://issues.apache.org/jira/browse/HADOOP-9629 > Project: Hadoop Common > Issue Type: New Feature > Components: tools >Reporter: Mostafa Elhemali >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.7.0 > > Attachments: HADOOP-9629 - Azure Filesystem - Information for > developers.docx, HADOOP-9629 - Azure Filesystem - Information for > developers.pdf, HADOOP-9629.2.patch, HADOOP-9629.3.patch, HADOOP-9629.patch, > HADOOP-9629.trunk.1.patch, HADOOP-9629.trunk.2.patch, > HADOOP-9629.trunk.3.patch, HADOOP-9629.trunk.4.patch, > HADOOP-9629.trunk.5.patch > > > h2. Description > This JIRA incorporates adding a new file system implementation for accessing > Windows Azure Storage - Blob from within Hadoop, such as using blobs as input > to MR jobs or configuring MR jobs to put their output directly into blob > storage. > h2. High level design > At a high level, the code here extends the FileSystem class to provide an > implementation for accessing blob storage; the scheme wasb is used for > accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI > scheme: {code}wasb[s]://@/path/to/file{code} to address > individual blobs. We use the standard Azure Java SDK > (com.microsoft.windowsazure) to do most of the work. In order to map a > hierarchical file system over the flat name-value pair nature of blob > storage, we create a specially tagged blob named path/to/dir whenever we > create a directory called path/to/dir, then files under that are stored as > normal blobs path/to/dir/file. We have many metrics implemented for it using > the Metrics2 interface. 
Tests are implemented mostly using a mock > implementation for the Azure SDK functionality, with an option to test > against a real blob storage if configured (instructions provided inside in > README.txt). > h2. Credits and history > This has been ongoing work for a while, and the early version of this work > can be seen in HADOOP-8079. This JIRA is a significant revision of that and > we'll post the patch here for Hadoop trunk first, then post a patch for > branch-1 as well for backporting the functionality if accepted. Credit for > this work goes to the early team: [~minwei], [~davidlao], [~lengningliu] and > [~stojanovic] as well as multiple people who have taken over this work since > then (hope I don't forget anyone): [~dexterb], Johannes Klein, [~ivanmi], > Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and > [~chuanliu]. > h2. Test > Besides unit tests, we have used WASB as the default file system in our > service product. (HDFS is also used but not as default file system.) Various > different customer and test workloads have been run against clusters with > such configurations for quite some time. The current version reflects to the > version of the code tested and used in our production environment. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14937) initial part uploads seem to block unnecessarily in S3ABlockOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553421#comment-16553421 ] Steven Rand commented on HADOOP-14937: -- Sorry, I've dropped the ball on this one, and haven't investigated further. I don't imagine I'll be able to get to it anytime soon either – feel free to reassign or close. > initial part uploads seem to block unnecessarily in S3ABlockOutputStream > > > Key: HADOOP-14937 > URL: https://issues.apache.org/jira/browse/HADOOP-14937 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steven Rand >Assignee: Steven Rand >Priority: Major > Attachments: yjp_threads.png > > > From looking at a YourKit snapshot of an FsShell process running a {{hadoop > fs -put file:///... s3a://...}}, it seems that the first part in the > multipart upload doesn't begin to upload until n of the > {{s3a-transfer-shared-pool}} threads are able to start uploading, where n is > the value of {{fs.s3a.fast.upload.active.blocks}}. > To hopefully clarify a bit, the series of events that I expected to see with > {{fs.s3a.fast.upload.active.blocks}} set to 4 is: > 1. An amount of data equal to {{fs.s3a.multipart.size}} is buffered into > off-heap memory (I have {{fs.s3a.fast.upload.buffer = bytebuffer}}). > 2. As soon as that happens, a thread begins to upload that part. Meanwhile, > the main thread continues to buffer data into off-heap memory. > 3. Once another part has been buffered into off-heap memory, a separate > thread uploads that part, and so on. > Whereas what I think the YK snapshot shows happening is: > 1. An amount of data equal to {{fs.s3a.multipart.size}} * 4 is buffered into > off-heap memory. > 2. Four threads start to upload one part each at the same time. > I've attached a picture of the "Threads" tab to show what I mean. 
Basically > the times at which the first four {{s3a-transfer-shared-pool}} threads start > to upload are roughly the same, whereas I would've expected them to be more > staggered. > I'm actually not sure whether this is the expected behavior or not, so feel > free to close if this doesn't come as a surprise to anyone. > For some context, I've been trying to get a sense for roughly which values of > {{fs.s3a.multipart.size}} perform the best at different file sizes. One thing > that I found confusing is that a part size of 5 MB seems to outperform a part > size of 64 MB up until files that are upwards of about 500 MB in size. This > seems odd, since each {{uploadPart}} call is its own HTTP request, and I > would've expected the overhead of those to become costly at small part sizes. > My suspicion is that with 4 concurrent part uploads and 64 MB blocks, we have > to wait until 256 MB are buffered before we can start uploading, while with 5 > MB blocks we can start uploading as soon as we buffer 20 MB, and that's what > gives the smaller parts the advantage for smaller files. > I'm happy to submit a patch if this is in fact a problem, but wanted to check > to make sure I'm not just misunderstanding something. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
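The expected behaviour described in the issue — each block starting its upload as soon as it is buffered, with {{fs.s3a.fast.upload.active.blocks}} only capping concurrency — can be sketched with a semaphore gating block submission. This is a hypothetical illustration of the pattern, not the actual S3ABlockOutputStream code; all class and method names are invented:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: a writer buffers fixed-size blocks; a semaphore caps how many
 * blocks may be queued or uploading at once. The writer only stalls when
 * the cap is reached, so the first part never waits for later blocks.
 */
public class BoundedBlockUploader {
  private final Semaphore activeBlocks;
  private final ExecutorService pool;
  final List<Integer> uploaded = Collections.synchronizedList(new ArrayList<>());

  BoundedBlockUploader(int maxActiveBlocks, int threads) {
    this.activeBlocks = new Semaphore(maxActiveBlocks);
    this.pool = Executors.newFixedThreadPool(threads);
  }

  /** Called as soon as one block is full: the part upload starts immediately. */
  void submitBlock(int blockId) throws InterruptedException {
    activeBlocks.acquire(); // blocks the writer only when the cap is hit
    pool.execute(() -> {
      try {
        uploaded.add(blockId); // stand-in for the real part upload
      } finally {
        activeBlocks.release();
      }
    });
  }

  void close() throws InterruptedException {
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    BoundedBlockUploader u = new BoundedBlockUploader(4, 4);
    for (int i = 0; i < 8; i++) {
      u.submitBlock(i); // each block uploads as soon as it is buffered
    }
    u.close();
    System.out.println("uploaded parts: " + u.uploaded.size()); // prints: uploaded parts: 8
  }
}
```

If the reported behaviour is real — four parts buffered before any upload begins — the equivalent here would be acquiring all four permits up front rather than one per completed block.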
[jira] [Commented] (HADOOP-15544) ABFS: validate packing, transient classpath, hadoop fs CLI
[ https://issues.apache.org/jira/browse/HADOOP-15544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553419#comment-16553419 ] Da Zhou commented on HADOOP-15544: -- Hi [~ste...@apache.org], could you share with me the steps to test the hadoop CLI from IntelliJ? Or do I have to deploy a cluster for the hadoop fs CLI test? > ABFS: validate packing, transient classpath, hadoop fs CLI > -- > > Key: HADOOP-15544 > URL: https://issues.apache.org/jira/browse/HADOOP-15544 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: dependencies.txt > > > Validate the packaging and dependencies of ABFS > * hadoop-cloud-storage artifact to export everything needed > * {{hadoop fs -ls abfs://path}} to work in ASF distributions > * check transient CP (e.g. Spark) > Spark master's hadoop-cloud module depends on hadoop-cloud-storage if you > build with the hadoop-3.1 profile, so it should automatically get in there. > Just need to check that it picks it up too
[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests
[ https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553395#comment-16553395 ] Da Zhou commented on HADOOP-15546: -- Hi [~mackrorysd], although both WASB and ABFS have the property "fs.azure.test.account.name", their values are different. For WASB, its value is: ACCOUNT_NAME.*blob*.core.windows.net For ABFS, its value is: ACCOUNT_NAME.*dfs*.core.windows.net So when merging the WASB and ABFS configuration files and making them share the same properties (e.g. fs.azure.test.account.name), their configuration parsers also need to be updated to make sure they use the correct full account name. > ABFS: tune imports & javadocs; stabilise tests > -- > > Key: HADOOP-15546 > URL: https://issues.apache.org/jira/browse/HADOOP-15546 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: HADOOP-15407 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15546-001.patch, > HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, > HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, > HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, > HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, > HADOOP-15546-HADOOP-15407-008.patch, azure-auth-keys.xml > > > Followup on HADOOP-15540 with some initial review tuning > h2. Tuning > * ordering of imports > * rely on azure-auth-keys.xml to store credentials (change imports, > docs,.gitignore) > * log4j -> info > * add a "." to the first sentence of all the javadocs I noticed. > * remove @Public annotations except for some constants (which includes some > commitment to maintain them). > * move the AbstractFS declarations out of the src/test/resources XML file > into core-default.xml for all to use > * other IDE-suggested tweaks > h2.
Testing > Review the tests, move to ContractTestUtil assertions, make more consistent > to contract test setup, and general work to make the tests work well over > slower links, document, etc.
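The account-name concern discussed in this thread can be sketched as a small helper: the same test property carries a blob-qualified name for WASB and a dfs-qualified name for ABFS, so shared test configuration must qualify a bare account name per connector. All names and the qualification rule here are illustrative assumptions, not the actual test-setup code:

```java
/**
 * Hypothetical sketch: derive the fully qualified Azure storage account
 * name for the right endpoint. WASB accounts resolve under
 * blob.core.windows.net, ABFS accounts under dfs.core.windows.net.
 */
public class AzureTestAccountName {
  static String fullAccountName(String configuredName, boolean abfs) {
    if (configuredName.contains(".")) {
      return configuredName; // already fully qualified in the config file
    }
    // bare account name: qualify with the endpoint this connector expects
    return configuredName + (abfs ? ".dfs" : ".blob") + ".core.windows.net";
  }

  public static void main(String[] args) {
    System.out.println(fullAccountName("myaccount", false)); // prints: myaccount.blob.core.windows.net
    System.out.println(fullAccountName("myaccount", true));  // prints: myaccount.dfs.core.windows.net
  }
}
```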
[jira] [Commented] (HADOOP-15566) Remove HTrace support
[ https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553366#comment-16553366 ] Elek, Marton commented on HADOOP-15566: --- [~bensigelman] Thank you very much for your answer. It was very informative, with reasonable arguments. Especially the last paragraph: {quote} Also, building a general-purpose adapter to convert OpenTracing instrumentation into OpenCensus API calls would be straightforward (due to the relative "thickness" and numbers of implementation assumptions made in each project). Going the other way would be challenging or impossible, depending on reliance on OpenCensus wire formats. {quote} > Remove HTrace support > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Priority: Major > Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, > ss-trace-s3a.png > > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community.
[jira] [Created] (HADOOP-15625) S3A input stream to use etags to detect changed source files
Steve Loughran created HADOOP-15625: --- Summary: S3A input stream to use etags to detect changed source files Key: HADOOP-15625 URL: https://issues.apache.org/jira/browse/HADOOP-15625 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.2.0 Reporter: Brahma Reddy Battula S3A input stream doesn't handle changing source files any better than the other cloud store connectors. Specifically: it doesn't notice the file has changed, caches the length from startup, and whenever a seek triggers a new GET, you may get one of: old data, new data, and even perhaps go from new data to old data due to eventual consistency. We can't do anything to stop this, but we could detect changes by # caching the etag of the first HEAD/GET (we don't get that HEAD on open with S3Guard, BTW) # on future GET requests, verify the etag of the response # raise an IOE if the remote file changed during the read. It's a more dramatic failure, but it stops changes silently corrupting things.
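The three numbered steps in the issue can be sketched as a small helper: remember the etag from the first HEAD/GET, compare it against the etag of each subsequent GET response, and raise an IOException on mismatch. A hedged illustration of the idea only; class and method names are invented, and the real S3A change would hook into the input stream's request path:

```java
import java.io.IOException;
import java.util.Objects;

/**
 * Sketch of etag-based change detection for a cloud-store input stream:
 * cache the etag seen at open, verify it on every later GET, and fail
 * loudly (rather than silently mixing old and new data) on a mismatch.
 */
public class EtagChangeDetector {
  private String knownEtag; // etag captured on the first HEAD/GET

  /** Record the etag seen when the stream was opened. */
  void onOpen(String etag) {
    this.knownEtag = etag;
  }

  /** Verify the etag returned by a later GET, e.g. one triggered by a seek. */
  void onGetResponse(String etag) throws IOException {
    if (knownEtag == null) {
      knownEtag = etag; // no HEAD at open (e.g. with S3Guard): latch first GET's etag
      return;
    }
    if (!Objects.equals(knownEtag, etag)) {
      throw new IOException("Remote file changed during read: etag was "
          + knownEtag + ", now " + etag);
    }
  }

  public static void main(String[] args) throws IOException {
    EtagChangeDetector d = new EtagChangeDetector();
    d.onOpen("etag-1");
    d.onGetResponse("etag-1"); // same object: read continues
    try {
      d.onGetResponse("etag-2"); // object replaced mid-read
    } catch (IOException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```

An alternative with the same effect would be sending the cached etag as an If-Match precondition on the ranged GET, so the store itself rejects the read if the object changed.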
[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests
[ https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553285#comment-16553285 ] Sean Mackrory commented on HADOOP-15546: At least some of the WASB issues are coming from the overloading of the fs.azure.test.account.name property. If I set that to .blob.core.windows.net, the WASB unit tests pass. But then more of the ABFS integration tests start failing earlier. > ABFS: tune imports & javadocs; stabilise tests > -- > > Key: HADOOP-15546 > URL: https://issues.apache.org/jira/browse/HADOOP-15546 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: HADOOP-15407 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15546-001.patch, > HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, > HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, > HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, > HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, > HADOOP-15546-HADOOP-15407-008.patch, azure-auth-keys.xml > > > Followup on HADOOP-15540 with some initial review tuning > h2. Tuning > * ordering of imports > * rely on azure-auth-keys.xml to store credentials (change imports, > docs,.gitignore) > * log4j -> info > * add a "." to the first sentence of all the javadocs I noticed. > * remove @Public annotations except for some constants (which includes some > commitment to maintain them). > * move the AbstractFS declarations out of the src/test/resources XML file > into core-default.xml for all to use > * other IDE-suggested tweaks > h2. Testing > Review the tests, move to ContractTestUtil assertions, make more consistent > to contract test setup, and general work to make the tests work well over > slower links, document, etc. 
[jira] [Commented] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553272#comment-16553272 ] genericqa commented on HADOOP-14212: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 16s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 48s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}296m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HADOOP-14212 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12932707/HADOOP-14212.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d75a07909d5a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HADOOP-15586) Fix wrong log statement in AbstractService
[ https://issues.apache.org/jira/browse/HADOOP-15586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553260#comment-16553260 ] Hudson commented on HADOOP-15586: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14618 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14618/]) HADOOP-15586. Fix wrong log statement in AbstractService. (Szilard (haibochen: rev 17e26163ec1b71cd13a6a82150aca94283f10ed1) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/AbstractService.java > Fix wrong log statement in AbstractService > -- > > Key: HADOOP-15586 > URL: https://issues.apache.org/jira/browse/HADOOP-15586 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 2.9.0, 3.1.0 >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > Attachments: HADOOP-15586-001.patch, HADOOP-15586-002.patch, > HADOOP-15586-003.patch > > > There are some wrong logging statements in AbstractService, here is one > example: > {code:java} > LOG.debug("noteFailure {}" + exception); > {code}
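For context on why the quoted statement is wrong: the `+` concatenates the exception onto the format string itself, so the `{}` placeholder is never substituted and the string is built even when debug logging is disabled. The stand-in below mimics SLF4J's placeholder substitution to show the difference (format() here is a hypothetical helper, not the real org.slf4j API).

```java
// Demonstrates the bug in LOG.debug("noteFailure {}" + exception):
// string concatenation happens before the logger ever sees the arguments.
public class LogFormatDemo {
    // Minimal stand-in for SLF4J-style "{}" substitution.
    static String format(String fmt, Object arg) {
        int i = fmt.indexOf("{}");
        return i < 0 ? fmt : fmt.substring(0, i) + arg + fmt.substring(i + 2);
    }

    public static void main(String[] args) {
        Exception ex = new IllegalStateException("boom");

        // Buggy pattern: the placeholder survives unreplaced, and the
        // exception is stringified eagerly on every call.
        String buggy = "noteFailure {}" + ex;

        // Corrected pattern: LOG.debug("noteFailure {}", exception) lets the
        // logger substitute the placeholder lazily.
        String fixed = format("noteFailure {}", ex);

        System.out.println(buggy);
        System.out.println(fixed);
    }
}
```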
[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests
[ https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553256#comment-16553256 ] Sean Mackrory commented on HADOOP-15546: Upon looking at this with fresh eyes, I see the above failures were all in the original WASB package (org.apache.hadoop.fs.azure). There are other failures within the new org.apache.hadoop.fs.azurebfs package, but they are distinct. I posted the stack traces here: https://gist.github.com/mackrorysd/dc0abebfea74f394e184e392215b47fe. Looks like the contract tests are mostly running and passing. > ABFS: tune imports & javadocs; stabilise tests > -- > > Key: HADOOP-15546 > URL: https://issues.apache.org/jira/browse/HADOOP-15546 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: HADOOP-15407 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15546-001.patch, > HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, > HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, > HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, > HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, > HADOOP-15546-HADOOP-15407-008.patch, azure-auth-keys.xml > > > Followup on HADOOP-15540 with some initial review tuning > h2. Tuning > * ordering of imports > * rely on azure-auth-keys.xml to store credentials (change imports, > docs,.gitignore) > * log4j -> info > * add a "." to the first sentence of all the javadocs I noticed. > * remove @Public annotations except for some constants (which includes some > commitment to maintain them). > * move the AbstractFS declarations out of the src/test/resources XML file > into core-default.xml for all to use > * other IDE-suggested tweaks > h2. 
Testing > Review the tests, move to ContractTestUtil assertions, make more consistent > to contract test setup, and general work to make the tests work well over > slower links, document, etc.
[jira] [Commented] (HADOOP-15395) DefaultImpersonationProvider fails to parse proxy user config if username has . in it
[ https://issues.apache.org/jira/browse/HADOOP-15395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553253#comment-16553253 ] genericqa commented on HADOOP-15395: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 37s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HADOOP-15395 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921804/HADOOP-15395.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f66a5b41f25f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bbe2f62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14932/testReport/ | | Max. process+thread count | 1346 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14932/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > DefaultImpersonationProvider fails to parse proxy user config if username has > . in it >
[jira] [Created] (HADOOP-15624) Release Hadoop 2.7.8
Steve Loughran created HADOOP-15624: --- Summary: Release Hadoop 2.7.8 Key: HADOOP-15624 URL: https://issues.apache.org/jira/browse/HADOOP-15624 Project: Hadoop Common Issue Type: Task Components: build Affects Versions: 2.7.7 Reporter: Steve Loughran Planning ahead for the 2.7.8 release
[jira] [Resolved] (HADOOP-15509) Release Hadoop 2.7.7
[ https://issues.apache.org/jira/browse/HADOOP-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-15509. - Resolution: Fixed Fix Version/s: 2.7.7 > Release Hadoop 2.7.7 > > > Key: HADOOP-15509 > URL: https://issues.apache.org/jira/browse/HADOOP-15509 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 2.7.6 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 2.7.7 > > > Time to get a new Hadoop 2.7.x out the door.
[jira] [Updated] (HADOOP-15586) Fix wrong log statement in AbstractService
[ https://issues.apache.org/jira/browse/HADOOP-15586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-15586: Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~snemeth] for the fix and [~bsteinbach] for the additional review. I have committed the patch to trunk. > Fix wrong log statement in AbstractService > -- > > Key: HADOOP-15586 > URL: https://issues.apache.org/jira/browse/HADOOP-15586 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 2.9.0, 3.1.0 >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > Attachments: HADOOP-15586-001.patch, HADOOP-15586-002.patch, > HADOOP-15586-003.patch > > > There are some wrong logging statements in AbstractService, here is one > example: > {code:java} > LOG.debug("noteFailure {}" + exception); > {code}
[jira] [Updated] (HADOOP-15586) Fix wrong log statement in AbstractService
[ https://issues.apache.org/jira/browse/HADOOP-15586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-15586: Summary: Fix wrong log statement in AbstractService (was: Fix wrong log statements in AbstractService) > Fix wrong log statement in AbstractService > -- > > Key: HADOOP-15586 > URL: https://issues.apache.org/jira/browse/HADOOP-15586 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 2.9.0, 3.1.0 >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > Attachments: HADOOP-15586-001.patch, HADOOP-15586-002.patch, > HADOOP-15586-003.patch > > > There are some wrong logging statements in AbstractService, here is one > example: > {code:java} > LOG.debug("noteFailure {}" + exception); > {code}
[jira] [Updated] (HADOOP-15618) Fix debug log for property IPC_CLIENT_BIND_WILDCARD_ADDR_KEY
[ https://issues.apache.org/jira/browse/HADOOP-15618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15618: Resolution: Duplicate Status: Resolved (was: Patch Available) > Fix debug log for property IPC_CLIENT_BIND_WILDCARD_ADDR_KEY > > > Key: HADOOP-15618 > URL: https://issues.apache.org/jira/browse/HADOOP-15618 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-15618.00.patch > > > Fix log message in Client class. > Currently:{code}LOG.debug("{} set to true. Will bind client sockets to > wildcard " > + "address.", > CommonConfigurationKeys.IPC_CLIENT_BIND_WILDCARD_ADDR_KEY);{code} > to > {code}LOG.debug("{} set to {}", > CommonConfigurationKeys.IPC_CLIENT_BIND_WILDCARD_ADDR_KEY, > bindToWildCardAddress);{code}
[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE
[ https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553145#comment-16553145 ] Eric Yang commented on HADOOP-15593: Two possible approaches to fix the findbug error: {code} Date endTime = tgt.getEndTime(); if (tgt != null && endTime != null && !tgt.isDestroyed()) { tgtEndTime = endTime.getTime(); } else { tgtEndTime = now; } {code} or {code} try { Date endTime = tgt.getEndTime(); tgtEndTime = endTime.getTime(); } catch (NullPointerException npe) { LOG.warn("NPE thrown while getting KerberosTicket endTime. The " + "endTime will be set to Time.now()"); tgtEndTime = now; } {code} Both will work equally well. > UserGroupInformation TGT renewer throws NPE > --- > > Key: HADOOP-15593 > URL: https://issues.apache.org/jira/browse/HADOOP-15593 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Blocker > Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, > HADOOP-15593.003.patch > > > Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within > an exception handler so the original exception was hidden, though it's likely > caused by expired tgt. > {noformat} > 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught > exception in thread Thread[TGT Renewer for f...@example.com,5,main] > java.lang.NullPointerException > at > javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482) > at > org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894) > at java.lang.Thread.run(Thread.java:748){noformat} > Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889]. > The relevant code was added in HADOOP-13590. File this jira to handle the > exception better. 
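The second approach quoted above (catch the NPE around getEndTime() and fall back to now) could be narrowed to wrap only the one call that can throw, with a comment pointing at the JDK bug. The sketch below is illustrative only, not the committed patch; FakeTicket is a stand-in for javax.security.auth.kerberos.KerberosTicket so the example is self-contained.

```java
import java.util.Date;

public class TgtEndTimeDemo {
    // Stand-in for KerberosTicket; a null endTime simulates a ticket that
    // was destroyed concurrently.
    static class FakeTicket {
        private final Date endTime;
        FakeTicket(Date endTime) { this.endTime = endTime; }
        Date getEndTime() {
            if (endTime == null) {
                // Pre-fix JDK behaviour: getEndTime() NPEs on a destroyed
                // ticket. See https://bugs.openjdk.java.net/browse/JDK-8147772
                throw new NullPointerException("ticket destroyed");
            }
            return endTime;
        }
        boolean isDestroyed() { return endTime == null; }
    }

    // The try-catch wraps only the call that can throw; everything else
    // stays outside it.
    static long tgtEndTime(FakeTicket tgt, long now) {
        Date endTime;
        try {
            endTime = tgt.getEndTime();
        } catch (NullPointerException npe) {
            // Only possible on JDKs without the JDK-8147772 fix.
            return now;
        }
        if (endTime != null && !tgt.isDestroyed()) {
            return endTime.getTime();
        }
        return now;
    }
}
```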
[jira] [Commented] (HADOOP-15566) Remove HTrace support
[ https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553144#comment-16553144 ] stack commented on HADOOP-15566: For me, the hard part is not which tracing lib to use -- if a tracing lib discussion, lets do it out on dev? We should also invite others to the discussion -- but rather discussion around resourcing: * Ensuring traces tell a good narrative across the different code paths and over processes, and that trace paths remain intact across code churn; they are brittle and easily broken/disconnected as dev goes on. * Instrumenting/coverage -- inserting trace points is time consuming whose value is only realized down-the-road by operator/dev trying to figure a slowdown (so the https://github.com/opentracing-contrib/java-tracerresolver looks interesting). * Tooling to enable tracing and visualize needs to be easy-to-deploy and use else all will go to rot (Some orgs trace every transaction with a simple switch for dumping to visualizer that is up and always available..) * Ensuring traces are friction-free else they'll be removed or not taken-on in the first place. * Evangelizing and pushing trace across hadoop components; the more components instrumented, the more we all will benefit. Thanks. > Remove HTrace support > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Priority: Major > Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, > ss-trace-s3a.png > > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. 
If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community.
[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests
[ https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553129#comment-16553129 ] Da Zhou commented on HADOOP-15546: -- I'm looking into this, will update it once I find the cause. > ABFS: tune imports & javadocs; stabilise tests > -- > > Key: HADOOP-15546 > URL: https://issues.apache.org/jira/browse/HADOOP-15546 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: HADOOP-15407 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15546-001.patch, > HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, > HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, > HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, > HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, > HADOOP-15546-HADOOP-15407-008.patch, azure-auth-keys.xml > > > Followup on HADOOP-15540 with some initial review tuning > h2. Tuning > * ordering of imports > * rely on azure-auth-keys.xml to store credentials (change imports, > docs,.gitignore) > * log4j -> info > * add a "." to the first sentence of all the javadocs I noticed. > * remove @Public annotations except for some constants (which includes some > commitment to maintain them). > * move the AbstractFS declarations out of the src/test/resources XML file > into core-default.xml for all to use > * other IDE-suggested tweaks > h2. Testing > Review the tests, move to ContractTestUtil assertions, make more consistent > to contract test setup, and general work to make the tests work well over > slower links, document, etc.
[jira] [Updated] (HADOOP-15395) DefaultImpersonationProvider fails to parse proxy user config if username has . in it
[ https://issues.apache.org/jira/browse/HADOOP-15395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HADOOP-15395: --- Status: Patch Available (was: Open) > DefaultImpersonationProvider fails to parse proxy user config if username has > . in it > - > > Key: HADOOP-15395 > URL: https://issues.apache.org/jira/browse/HADOOP-15395 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15395.00.patch, HADOOP-15395.01.patch, > HADOOP-15395.02.patch, HADOOP-15395.03.patch > > > DefaultImpersonationProvider fails to parse proxy user config if username has > . in it. >
[jira] [Updated] (HADOOP-15395) DefaultImpersonationProvider fails to parse proxy user config if username has . in it
[ https://issues.apache.org/jira/browse/HADOOP-15395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HADOOP-15395: --- Status: Open (was: Patch Available) > DefaultImpersonationProvider fails to parse proxy user config if username has > . in it > - > > Key: HADOOP-15395 > URL: https://issues.apache.org/jira/browse/HADOOP-15395 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15395.00.patch, HADOOP-15395.01.patch, > HADOOP-15395.02.patch, HADOOP-15395.03.patch > > > DefaultImpersonationProvider fails to parse proxy user config if username has > . in it. >
[jira] [Commented] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552996#comment-16552996 ] Ajay Kumar commented on HADOOP-14212: - [~adam.antal] thanks for working on this. * TestDataNodeMXBean: L114-117 removal of finally block is not required. * Shall we test case when security is enabled as well? (In current test cases we are getting default value i.e false) Checkstyle NITs: ResourceManagerMXBean,SecondaryNameNodeInfoMXBean, : public modifier is redundant in function signatures. DataNodeMxBean: L155 public modifier is redundant. > Expose SecurityEnabled boolean field in JMX for other services besides > NameNode > --- > > Key: HADOOP-14212 > URL: https://issues.apache.org/jira/browse/HADOOP-14212 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ray Burgemeestre >Assignee: Adam Antal >Priority: Minor > Labels: newbie, security > Attachments: HADOOP-14212.001.patch, HADOOP-14212.002.patch, > HADOOP-14212.003.patch, HADOOP-14212.004.patch, HADOOP-14212.005.patch, > HADOOP-14212.005.patch, HADOOP-14212.005.patch > > > The following commit > https://github.com/apache/hadoop/commit/dc17bda4b677e30c02c2a9a053895a43e41f7a12 > introduced a "SecurityEnabled" field in the JMX output for the NameNode. I > believe it would be nice to add this same change to the JMX output of other > services: Secondary Namenode, ResourceManager, NodeManagers, DataNodes, etc. > So that it can be queried whether Security is enabled in all JMX resources. > The reason I am suggesting this feature / improvement is that I think it > would provide a clean way to check whether your cluster is completely > Kerberized or not. I don't think there is an easy/clean way to do this now, > other than checking the logs, checking ports etc.? 
> The file where the change was made is > hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java > has the following function now: > {code:java} > @Override // NameNodeStatusMXBean > public boolean isSecurityEnabled() { > return UserGroupInformation.isSecurityEnabled(); > } > {code} > I would be happy to develop a patch if it seems useful by others as well? > This is a snippet from the JMX output from the NameNode in case security is > not enabled: > {code} > { > "name" : "Hadoop:service=NameNode,name=NameNodeStatus", > "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode", > "NNRole" : "NameNode", > "HostAndPort" : "node001.cm.cluster:8020", > "SecurityEnabled" : false, > "LastHATransitionTime" : 0, > "State" : "standby" > } > {code}
[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE
[ https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552947#comment-16552947 ] genericqa commented on HADOOP-15593: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 35s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}125m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | Nullcheck of UserGroupInformation$AutoRenewalForUserCredsRunnable.tgt at line 928 of value previously dereferenced in org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run() At UserGroupInformation.java:928 of value previously dereferenced in org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run() At UserGroupInformation.java:[line 927] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HADOOP-15593 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12932693/HADOOP-15593.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7d61fc953217 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bbe2f62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | findbugs |
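The FindBugs complaint above is the classic nullcheck-of-previously-dereferenced-value pattern: the reference is dereferenced first, then null-checked. As a hedged, minimal illustration (the {{Ticket}} class below is an invented stand-in, not the real {{KerberosTicket}}):

```java
import java.util.Date;

// Illustrative reproduction of the "nullcheck of value previously dereferenced"
// pattern FindBugs flags above. Ticket is a hypothetical stand-in class.
public class NullCheckOrderExample {
    public static class Ticket {
        private final Date endTime;
        public Ticket(Date endTime) { this.endTime = endTime; }
        public Date getEndTime() { return endTime; }
    }

    // Buggy shape: tgt.getEndTime() runs before the tgt != null check,
    // so the later check is inconsistent with the earlier dereference.
    public static long buggy(Ticket tgt) {
        Date endTime = tgt.getEndTime();          // dereference happens here
        if (tgt != null && endTime != null) {     // null check is too late
            return endTime.getTime();
        }
        return -1;
    }

    // Fixed shape: check the reference before dereferencing it (or drop the
    // check entirely if tgt provably cannot be null at this point).
    public static long fixed(Ticket tgt) {
        if (tgt == null) {
            return -1;
        }
        Date endTime = tgt.getEndTime();
        return endTime != null ? endTime.getTime() : -1;
    }
}
```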
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Patch Available (was: Open) > Expose SecurityEnabled boolean field in JMX for other services besides > NameNode > --- > > Key: HADOOP-14212 > URL: https://issues.apache.org/jira/browse/HADOOP-14212 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ray Burgemeestre >Assignee: Adam Antal >Priority: Minor > Labels: newbie, security > Attachments: HADOOP-14212.001.patch, HADOOP-14212.002.patch, > HADOOP-14212.003.patch, HADOOP-14212.004.patch, HADOOP-14212.005.patch, > HADOOP-14212.005.patch, HADOOP-14212.005.patch > > > The following commit > https://github.com/apache/hadoop/commit/dc17bda4b677e30c02c2a9a053895a43e41f7a12 > introduced a "SecurityEnabled" field in the JMX output for the NameNode. I > believe it would be nice to add this same change to the JMX output of other > services: Secondary Namenode, ResourceManager, NodeManagers, DataNodes, etc. > So that it can be queried whether Security is enabled in all JMX resources. > The reason I am suggesting this feature / improvement is that I think it > would provide a clean way to check whether your cluster is completely > Kerberized or not. I don't think there is an easy/clean way to do this now, > other than checking the logs, checking ports etc.? > The file where the change was made is > hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java > has the following function now: > {code:java} > @Override // NameNodeStatusMXBean > public boolean isSecurityEnabled() { > return UserGroupInformation.isSecurityEnabled(); > } > {code} > I would be happy to develop a patch if it seems useful by others as well? 
> This is a snippet from the JMX output from the NameNode in case security is > not enabled: > {code} > { > "name" : "Hadoop:service=NameNode,name=NameNodeStatus", > "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode", > "NNRole" : "NameNode", > "HostAndPort" : "node001.cm.cluster:8020", > "SecurityEnabled" : false, > "LastHATransitionTime" : 0, > "State" : "standby" > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
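The proposed change is a plain boolean MXBean attribute. As a hedged sketch of the mechanism (the class names and {{ObjectName}} below are invented for illustration, not Hadoop's actual interfaces), exposing and reading a {{SecurityEnabled}} attribute over JMX looks roughly like this:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical minimal MXBean mirroring the "SecurityEnabled" attribute
// discussed above; names are illustrative, not the real Hadoop classes.
public class SecurityStatusExample {
    // Interfaces whose names end in "MXBean" are treated as MXBean
    // management interfaces by the platform MBean server.
    public interface SecurityStatusMXBean {
        boolean isSecurityEnabled();   // surfaces as attribute "SecurityEnabled"
    }

    public static class SecurityStatus implements SecurityStatusMXBean {
        private final boolean enabled;
        public SecurityStatus(boolean enabled) { this.enabled = enabled; }
        @Override public boolean isSecurityEnabled() { return enabled; }
    }

    // Register the bean, read the attribute back, then clean up.
    public static boolean registerAndQuery(boolean enabled) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name =
                new ObjectName("Example:service=Demo,name=SecurityStatus");
            server.registerMBean(new SecurityStatus(enabled), name);
            boolean value = (Boolean) server.getAttribute(name, "SecurityEnabled");
            server.unregisterMBean(name);
            return value;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A service that already registers an MXBean would only add the one getter, which is what makes the patch small per component.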
[jira] [Commented] (HADOOP-15566) Remove HTrace support
[ https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552907#comment-16552907 ] Ben Sigelman commented on HADOOP-15566: --- [~elek] the projects have similar goals but take different approaches. OpenTracing's surface area is intentionally "as narrow as possible" which means that it brings in almost no dependencies (OpenCensus is more of a fully-featured "agent" model, which necessarily gives it a larger footprint). OpenTracing also makes no assumptions about the serialization formats (or header names, etc) between peered processes in the distributed system/application, or the serialization format of the tracing system itself. This means that OpenTracing instrumentation can be used/reused for a wider variety of things: straightforward distributed trace collectors/indexers/viewers like Zipkin, Jaeger, etc, but also distributed debuggers, security applications, and so forth. Also, building a general-purpose adapter to convert OpenTracing instrumentation into OpenCensus API calls would be straightforward (due to the relative "thickness" and numbers of implementation assumptions made in each project). Going the other way would be challenging or impossible, depending on reliance on OpenCensus wire formats. > Remove HTrace support > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Priority: Major > Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, > ss-trace-s3a.png > > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. 
If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community.
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Open (was: Patch Available)
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Attachment: HADOOP-14212.005.patch
[jira] [Commented] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552846#comment-16552846 ] genericqa commented on HADOOP-14212: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HADOOP-14212 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14212 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12932700/HADOOP-14212.005.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14930/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552833#comment-16552833 ] Adam Antal commented on HADOOP-14212: - Reuploaded the same patch to trigger jenkins.
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Attachment: HADOOP-14212.005.patch
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-15566) Remove HTrace support
[ https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552808#comment-16552808 ] Elek, Marton commented on HADOOP-15566: --- As far as I know the problem could be solved with both OpenTracing and OpenCensus. Is there any reason to prefer OpenTracing? What would be the advantages/disadvantages of using OC/OT?
[jira] [Updated] (HADOOP-15622) UserGroupInformation TGT renewer refreshTime should be based on getNextTgtRenewalTime
[ https://issues.apache.org/jira/browse/HADOOP-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15622: Description: The calculation of nextRefresh in UserGroupInformation#spawnAutoRenewalThreadForUserCreds is currently based on: {code:java} nextRefresh = Math.max(getRefreshTime(tgt), now + kerberosMinSecondsBeforeRelogin); {code} Most of the time nextRefresh = getRefreshTime(tgt). If renewal happens exactly at refreshTime while parallel operations are still using the expired ticket, there is a time gap during which some operations cannot proceed until the next tgt is obtained. Ideally, we want to keep the service uninterrupted, so getNextTgtRenewalTime is supposed to calculate a time a few minutes before the Kerberos tgt expires and use that as the nextRefresh time. It looks like we are not using the getNextTgtRenewalTime method to calculate nextRefresh, and instead use the ticket expiration time as the baseline for nextRefresh. Kudos to [~eyang] for reporting the issue originally in HADOOP-15593. was: The calculation of nextRefresh in UserGroupInformation#spawnAutoRenewalThreadForUserCreds is currently based on: {code:java} nextRefresh = Math.max(getRefreshTime(tgt), now + kerberosMinSecondsBeforeRelogin); {code} Most of the time nextRefresh = getRefreshTime(tgt). If it is renewing exactly on refreshTime, and there are parallel operations using expired ticket. There is a time gap that some operations might not perform until the next tgt is obtained. Ideally, we want to keep service uninterrupted, therefore getNextTgtRenewalTime supposed to calculate the time a few minutes before Kerberos tgt expired to determine the nextRefresh time. It looks like we are not using getNextTgtRenewalTime method to calculate nextRefresh instead opt-in to use ticket expiration time as base line for nextRefresh.
> UserGroupInformation TGT renewer refreshTime should be based on > getNextTgtRenewalTime > - > > Key: HADOOP-15622 > URL: https://issues.apache.org/jira/browse/HADOOP-15622 > Project: Hadoop Common > Issue Type: Bug >Reporter: Gabor Bota >Priority: Major
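The renew-a-few-minutes-early idea from the description can be sketched as follows; the constants and method names here are assumptions for illustration, not the actual Hadoop code:

```java
// Illustrative sketch of computing nextRefresh a fixed window before the
// ticket end time, as suggested above, while still honoring a minimum
// relogin interval. Constant names are invented analogues, not Hadoop's.
public class TgtRefreshExample {
    static final long MIN_SECONDS_BEFORE_RELOGIN = 60;   // kerberosMinSecondsBeforeRelogin analogue
    static final long RENEW_WINDOW_SECONDS = 5 * 60;     // renew this long before expiry

    /**
     * Refresh a fixed window before the ticket end time, but never sooner
     * than the minimum relogin interval from now. All times in seconds.
     */
    public static long nextRefresh(long nowSeconds, long tgtEndSeconds) {
        long beforeExpiry = tgtEndSeconds - RENEW_WINDOW_SECONDS;
        return Math.max(beforeExpiry, nowSeconds + MIN_SECONDS_BEFORE_RELOGIN);
    }
}
```

With the current code the baseline is effectively the expiration time itself; shifting the baseline earlier, as sketched, is what leaves headroom for in-flight operations during renewal.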
[jira] [Commented] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE
[ https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552760#comment-16552760 ] Gabor Bota commented on HADOOP-15593: - [~xiaochen], [~eyang] thanks for your comments. Based on your advice, I've added the check for getEndTime and catching the NPE to the implementation. I've also added a comment for future reference explaining why there's a try-catch there. > UserGroupInformation TGT renewer throws NPE > --- > > Key: HADOOP-15593 > URL: https://issues.apache.org/jira/browse/HADOOP-15593 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Blocker > Attachments: HADOOP-15593.001.patch, HADOOP-15593.002.patch, > HADOOP-15593.003.patch > > > Found the following NPE thrown in UGI tgt renewer. The NPE was thrown within > an exception handler so the original exception was hidden, though it's likely > caused by expired tgt. > {noformat} > 18/07/02 10:30:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught > exception in thread Thread[TGT Renewer for f...@example.com,5,main] > java.lang.NullPointerException > at > javax.security.auth.kerberos.KerberosTicket.getEndTime(KerberosTicket.java:482) > at > org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:894) > at java.lang.Thread.run(Thread.java:748){noformat} > Suspect it's related to [https://bugs.openjdk.java.net/browse/JDK-8154889]. > The relevant code was added in HADOOP-13590. File this jira to handle the > exception better.
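The narrow try-catch shape suggested in the review can be sketched like this; the {{Supplier}} stands in for {{tgt::getEndTime}} and the method is an illustration of the workaround's structure, not the actual patch:

```java
import java.util.Date;
import java.util.function.Supplier;

// Sketch of the workaround discussed above: catch the NPE strictly around the
// getEndTime() call (only possible prior to the JDK fix, when the ticket is
// destroyed concurrently), rather than wrapping the whole block.
public class EndTimeWorkaroundExample {
    /**
     * Returns the ticket end time in milliseconds, or -1 when getEndTime()
     * throws the JDK NPE or yields null (e.g. a destroyed ticket).
     */
    public static long safeEndTimeMillis(Supplier<Date> getEndTime) {
        Date endTime;
        try {
            // The NPE can only originate here, inside getEndTime() itself;
            // keeping the catch this tight avoids masking other bugs.
            endTime = getEndTime.get();
        } catch (NullPointerException npe) {
            return -1;
        }
        return endTime != null ? endTime.getTime() : -1;
    }
}
```

In a unit test, the supplier can be replaced by a mock that throws NullPointerException, which matches the mocked-tgt testing approach proposed in the comments.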
[jira] [Updated] (HADOOP-15593) UserGroupInformation TGT renewer throws NPE
[ https://issues.apache.org/jira/browse/HADOOP-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15593: Attachment: HADOOP-15593.003.patch
[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552678#comment-16552678 ] genericqa commented on HADOOP-15611: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HADOOP-15611 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12932675/HADOOP-15611.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7ded1e76c2d7 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bbe2f62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14928/testReport/ | | Max. process+thread count | 1433 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14928/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Improve log in
[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552656#comment-16552656 ] genericqa commented on HADOOP-15611: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}125m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HADOOP-15611 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12932671/HADOOP-15611.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 13cce03856b8 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bbe2f62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14927/testReport/ | | Max. process+thread count | 1353 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14927/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Improve log in
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Patch Available (was: Open) > Expose SecurityEnabled boolean field in JMX for other services besides > NameNode > --- > > Key: HADOOP-14212 > URL: https://issues.apache.org/jira/browse/HADOOP-14212 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ray Burgemeestre >Assignee: Adam Antal >Priority: Minor > Labels: newbie, security > Attachments: HADOOP-14212.001.patch, HADOOP-14212.002.patch, > HADOOP-14212.003.patch, HADOOP-14212.004.patch, HADOOP-14212.005.patch > > > The following commit > https://github.com/apache/hadoop/commit/dc17bda4b677e30c02c2a9a053895a43e41f7a12 > introduced a "SecurityEnabled" field in the JMX output for the NameNode. I > believe it would be nice to add this same change to the JMX output of other > services: Secondary Namenode, ResourceManager, NodeManagers, DataNodes, etc. > So that it can be queried whether Security is enabled in all JMX resources. > The reason I am suggesting this feature / improvement is that I think it > would provide a clean way to check whether your cluster is completely > Kerberized or not. I don't think there is an easy/clean way to do this now, > other than checking the logs, checking ports etc.? > The file where the change was made is > hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java > has the following function now: > {code:java} > @Override // NameNodeStatusMXBean > public boolean isSecurityEnabled() { > return UserGroupInformation.isSecurityEnabled(); > } > {code} > I would be happy to develop a patch if it seems useful by others as well? 
> This is a snippet from the JMX output from the NameNode in case security is > not enabled: > {code} > { > "name" : "Hadoop:service=NameNode,name=NameNodeStatus", > "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode", > "NNRole" : "NameNode", > "HostAndPort" : "node001.cm.cluster:8020", > "SecurityEnabled" : false, > "LastHATransitionTime" : 0, > "State" : "standby" > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode
[ https://issues.apache.org/jira/browse/HADOOP-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-14212: Status: Open (was: Patch Available) > Expose SecurityEnabled boolean field in JMX for other services besides > NameNode > --- > > Key: HADOOP-14212 > URL: https://issues.apache.org/jira/browse/HADOOP-14212 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ray Burgemeestre >Assignee: Adam Antal >Priority: Minor > Labels: newbie, security > Attachments: HADOOP-14212.001.patch, HADOOP-14212.002.patch, > HADOOP-14212.003.patch, HADOOP-14212.004.patch, HADOOP-14212.005.patch > > > The following commit > https://github.com/apache/hadoop/commit/dc17bda4b677e30c02c2a9a053895a43e41f7a12 > introduced a "SecurityEnabled" field in the JMX output for the NameNode. I > believe it would be nice to add this same change to the JMX output of other > services: Secondary Namenode, ResourceManager, NodeManagers, DataNodes, etc. > So that it can be queried whether Security is enabled in all JMX resources. > The reason I am suggesting this feature / improvement is that I think it > would provide a clean way to check whether your cluster is completely > Kerberized or not. I don't think there is an easy/clean way to do this now, > other than checking the logs, checking ports etc.? > The file where the change was made is > hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java > has the following function now: > {code:java} > @Override // NameNodeStatusMXBean > public boolean isSecurityEnabled() { > return UserGroupInformation.isSecurityEnabled(); > } > {code} > I would be happy to develop a patch if it seems useful by others as well? 
> This is a snippet from the JMX output from the NameNode in case security is > not enabled: > {code} > { > "name" : "Hadoop:service=NameNode,name=NameNodeStatus", > "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode", > "NNRole" : "NameNode", > "HostAndPort" : "node001.cm.cluster:8020", > "SecurityEnabled" : false, > "LastHATransitionTime" : 0, > "State" : "standby" > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
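The change the reporter proposes is mostly mechanical: each daemon's status MXBean gains a boolean attribute that delegates to {{UserGroupInformation.isSecurityEnabled()}}. A minimal sketch of the pattern, with hypothetical interface and class names and a constructor flag standing in for the static UGI call:

```java
// Hypothetical mirror of the NameNodeStatusMXBean pattern quoted above.
// In a real daemon, isSecurityEnabled() would delegate to
// UserGroupInformation.isSecurityEnabled() instead of a stored flag.
interface DaemonStatusMXBean {
    boolean isSecurityEnabled();
}

class DaemonStatus implements DaemonStatusMXBean {
    private final boolean securityEnabled;

    DaemonStatus(boolean securityEnabled) {
        this.securityEnabled = securityEnabled;
    }

    @Override // DaemonStatusMXBean
    public boolean isSecurityEnabled() {
        return securityEnabled;
    }
}
```

Once such a bean is registered with the platform MBeanServer, the getter surfaces as a "SecurityEnabled" attribute in JMX output, matching the NameNode snippet in the issue description.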
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: HADOOP-15611.003.patch > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch, > HADOOP-15611.003.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: (was: HADOOP-15611.003.patch) > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552503#comment-16552503 ] Ryan Wu commented on HADOOP-15611: -- The HADOOP-15611.003.patch fixes some code style issues found in HADOOP-15611.002.patch. > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch, > HADOOP-15611.003.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: HADOOP-15611.003.patch > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch, > HADOOP-15611.003.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552497#comment-16552497 ] Ryan Wu commented on HADOOP-15611: -- Thanks [~linyiqun] for giving me some advice. I have modified the patch to make it more standard. > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
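As one concrete illustration of the kind of logging the issue asks for in DecayRpcScheduler#decayCurrentCounts, a decay sweep could emit a single summary line. Everything below is a hypothetical sketch, not Hadoop code: the real scheduler has different internals, the decay factor and caller map are invented, and java.util.logging stands in for Hadoop's SLF4J only to keep the example dependency-free.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

// Hypothetical sketch of a decay sweep with the per-sweep summary log
// line the JIRA requests added at the end.
class CallCountDecayer {
    private static final Logger LOG = Logger.getLogger("CallCountDecayer");

    static Map<String, Long> decay(Map<String, Long> counts, double factor) {
        Map<String, Long> decayed = new HashMap<>();
        long total = 0;
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            long v = (long) (e.getValue() * factor);
            decayed.put(e.getKey(), v);
            total += v;
        }
        // The extra visibility the issue asks for: one line per decay sweep.
        LOG.fine("decayed counts for " + counts.size()
            + " callers, total call count now " + total);
        return decayed;
    }
}
```

Keeping the log at debug level and one line per sweep avoids flooding the log while still making the scheduler's behavior traceable.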
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: HADOOP-15611.002.patch > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: (was: HADOOP-15611.002.patch) > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Wu updated HADOOP-15611: - Attachment: HADOOP-15611.002.patch > Improve log in FairCallQueue > > > Key: HADOOP-15611 > URL: https://issues.apache.org/jira/browse/HADOOP-15611 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Ryan Wu >Priority: Minor > Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch > > > In the usage of the FairCallQueue, we find there missing some Key log. Only a > few logs are printed, it makes us hard to learn and debug this feature. > At least, following places can print more logs. > * DecayRpcScheduler#decayCurrentCounts > * WeightedRoundRobinMultiplexer#moveToNextQueue -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15607) AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552455#comment-16552455 ] wujinhu edited comment on HADOOP-15607 at 7/23/18 8:05 AM: --- Thanks [~Sammi] [~uncleGen] for your comments. As the code shows, uploadCurrentPart() increments blockId when submitting tasks to the thread pool. If a task executes after the main thread has moved on, the blockId it reads has already changed. Changing the blockFiles type fixes another problem (we cannot delete files in write(), because they may still be in use by upload threads), and lets us track uploaded files by blockId. I have tried to add a unit test to reproduce this issue in my mac environment, but failed. It seems difficult to reproduce in a local environment, except occasionally when I debug the old code. was (Author: wujinhu): Thanks [~Sammi] [~uncleGen] for your comments. As the code shows, uploadCurrentPart() executes blockId++ when submit tasks to the thread pool. If task executes later than the main thread, then the blockId has been changed. Changing blockFiles type is to fix another problem(we cannot delete files in write(), because they may be used by upload threads), so we can track upload files by blockId. I have tried to add unit test to reproduce this issue in my mac environment, but failed. It seems difficult to reproduce this in local environment only when I debug old code sometimes. 
> AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream > - > > Key: HADOOP-15607 > URL: https://issues.apache.org/jira/browse/HADOOP-15607 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Attachments: HADOOP-15607.001.patch, HADOOP-15607.002.patch > > > When I generated data with hive-tpcds tool, I got exception below: > 2018-07-16 14:50:43,680 INFO mapreduce.Job: Task Id : > attempt_1531723399698_0001_m_52_0, Status : FAILED > Error: com.aliyun.oss.OSSException: The list of parts was not in ascending > order. Parts list must specified in order by part number. > [ErrorCode]: InvalidPartOrder > [RequestId]: 5B4C40425FCC208D79D1EAF5 > [HostId]: 100.103.0.137 > [ResponseError]: > > > InvalidPartOrder > The list of parts was not in ascending order. Parts list must > specified in order by part number. > 5B4C40425FCC208D79D1EAF5 > xx.xx.xx.xx > current PartNumber 3, you given part number 3is not in > ascending order > > at > com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:99) > at > com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:69) > at > com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:248) > at > com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:130) > at > com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:68) > at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:94) > at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:149) > at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:113) > at > com.aliyun.oss.internal.OSSMultipartOperation.completeMultipartUpload(OSSMultipartOperation.java:185) > at com.aliyun.oss.OSSClient.completeMultipartUpload(OSSClient.java:790) > at > 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.completeMultipartUpload(AliyunOSSFileSystemStore.java:643) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.close(AliyunOSSBlockOutputStream.java:120) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) > at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) > at > org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106) > at > org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574) > at org.notmysock.tpcds.GenTable$DSDGen.cleanup(GenTable.java:169) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1686) > > I reviewed code below, > {code:java} > blockId {code} > has thread synchronization problem > {code:java} > // code placeholder > private void uploadCurrentPart() throws IOException { >
[jira] [Commented] (HADOOP-15607) AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552455#comment-16552455 ] wujinhu commented on HADOOP-15607: -- Thanks [~Sammi] [~uncleGen] for your comments. As the code shows, uploadCurrentPart() executes blockId++ when submit tasks to the thread pool. If task executes later than the main thread, then the blockId has been changed. Changing blockFiles type is to fix another problem(we cannot delete files in write(), because they may be used by upload threads), so we can track upload files by blockId. I have tried to add unit test to reproduce this issue in my mac environment, but failed. It seems difficult to reproduce this in local environment only when I debug old code sometimes. > AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream > - > > Key: HADOOP-15607 > URL: https://issues.apache.org/jira/browse/HADOOP-15607 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Attachments: HADOOP-15607.001.patch, HADOOP-15607.002.patch > > > When I generated data with hive-tpcds tool, I got exception below: > 2018-07-16 14:50:43,680 INFO mapreduce.Job: Task Id : > attempt_1531723399698_0001_m_52_0, Status : FAILED > Error: com.aliyun.oss.OSSException: The list of parts was not in ascending > order. Parts list must specified in order by part number. > [ErrorCode]: InvalidPartOrder > [RequestId]: 5B4C40425FCC208D79D1EAF5 > [HostId]: 100.103.0.137 > [ResponseError]: > > > InvalidPartOrder > The list of parts was not in ascending order. Parts list must > specified in order by part number. 
> 5B4C40425FCC208D79D1EAF5 > xx.xx.xx.xx > current PartNumber 3, you given part number 3is not in > ascending order > > at > com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:99) > at > com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:69) > at > com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:248) > at > com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:130) > at > com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:68) > at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:94) > at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:149) > at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:113) > at > com.aliyun.oss.internal.OSSMultipartOperation.completeMultipartUpload(OSSMultipartOperation.java:185) > at com.aliyun.oss.OSSClient.completeMultipartUpload(OSSClient.java:790) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.completeMultipartUpload(AliyunOSSFileSystemStore.java:643) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.close(AliyunOSSBlockOutputStream.java:120) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) > at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) > at > org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106) > at > org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574) > at org.notmysock.tpcds.GenTable$DSDGen.cleanup(GenTable.java:169) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1686) > > I reviewed code below, > {code:java} > blockId {code} > has thread synchronization problem > {code:java} > // code placeholder > private void uploadCurrentPart() throws IOException { > blockFiles.add(blockFile); > blockStream.flush(); > blockStream.close(); > if (blockId == 0) { > uploadId = store.getUploadId(key); > } > ListenableFuture partETagFuture = > executorService.submit(() -> { > PartETag partETag = store.uploadPart(blockFile, key, uploadId, > blockId + 1); > return partETag; > }); > partETagsFutures.add(partETagFuture); > blockFile = newBlockFile(); > blockId++; > blockStream = new BufferedOutputStream(new FileOutputStream(blockFile)); > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To
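The race wujinhu describes comes from the submitted lambda reading the mutable blockId field after the main thread has already incremented it. One standard fix, sketched below with hypothetical names (a returned integer stands in for store.uploadPart(...), and this is not the actual HADOOP-15607 patch), is to snapshot the part number into an effectively-final local before submitting the task:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the fix pattern: each upload task captures its own part number
// before the main thread increments the counter, so a task that runs late
// can no longer observe an already-incremented blockId.
class PartNumberDemo {
    static List<Integer> uploadParts(int numParts) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        int blockId = 0;
        for (int i = 0; i < numParts; i++) {
            final int partNumber = blockId + 1; // snapshot taken before blockId++
            // The Callable stands in for store.uploadPart(blockFile, key, uploadId, partNumber).
            futures.add(executor.submit(() -> partNumber));
            blockId++;
        }
        List<Integer> parts = new ArrayList<>();
        try {
            for (Future<Integer> f : futures) {
                parts.add(f.get());
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
        return parts;
    }
}
```

Because each task holds its own snapshot, part numbers come back strictly ascending regardless of how the thread pool schedules the uploads, which is exactly the ordering OSS's completeMultipartUpload requires.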