[jira] [Created] (HADOOP-16713) Use PathCapabilities for default configuring append mode for RollingFileSystemSink
Adam Antal created HADOOP-16713:
-----------------------------------

             Summary: Use PathCapabilities for default configuring append mode for RollingFileSystemSink
                 Key: HADOOP-16713
                 URL: https://issues.apache.org/jira/browse/HADOOP-16713
             Project: Hadoop Common
          Issue Type: Bug
          Components: metrics
    Affects Versions: 3.3.0
            Reporter: Adam Antal


{{RollingFileSystemSink}} uses a filesystem to store metrics. The {{allow-append}} key is disabled by default, but if enabled, new metrics can be appended to an existing file.

Now that we have the {{PathCapabilities}} interface, we can choose the default of the {{allow-append}} mode based on whether the filesystem supports the append operation, as reported by the {{FileSystem.hasPathCapability()}} call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
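As a rough illustration of the proposed behaviour, here is a minimal plain-Java sketch with no Hadoop dependency. All names are hypothetical: {{PathCapabilityChecker}} stands in for Hadoop's {{FileSystem}} and its {{hasPathCapability()}} call, {{resolveAllowAppend}} stands in for the sink's configuration logic, and the capability string is assumed to mirror Hadoop's {{CommonPathCapabilities.FS_APPEND}}.

```java
class AppendDefaultSketch {

    // Assumed to mirror CommonPathCapabilities.FS_APPEND; illustrative only.
    static final String FS_APPEND = "fs.capability.paths.append";

    // Hypothetical stand-in for FileSystem.hasPathCapability(Path, String).
    interface PathCapabilityChecker {
        boolean hasPathCapability(String path, String capability);
    }

    /**
     * Choose the effective allow-append value: an explicit configuration
     * always wins; otherwise fall back to whether the filesystem reports
     * support for append on the target path.
     */
    static boolean resolveAllowAppend(Boolean configured,
                                      PathCapabilityChecker fs,
                                      String basePath) {
        if (configured != null) {
            return configured; // user set allow-append explicitly
        }
        return fs.hasPathCapability(basePath, FS_APPEND);
    }

    public static void main(String[] args) {
        PathCapabilityChecker appendingFs = (p, c) -> FS_APPEND.equals(c);
        PathCapabilityChecker plainFs = (p, c) -> false;

        System.out.println(resolveAllowAppend(null, appendingFs, "/metrics")); // true
        System.out.println(resolveAllowAppend(null, plainFs, "/metrics"));     // false
        System.out.println(resolveAllowAppend(false, appendingFs, "/metrics")); // false
    }
}
```

The point of the sketch is only the precedence: an explicit {{allow-append}} setting is honored, and the capability probe decides the default when nothing was configured.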
[jira] [Reopened] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException
     [ https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Antal reopened HADOOP-16683:
---------------------------------

Let's backport this issue to lower branches.

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16683
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16683
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.3.0
>            Reporter: Adam Antal
>            Assignee: Adam Antal
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, HADOOP-16683.003.patch
>
>
> Follow-up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException, which has resolved some of the cases, but in other cases the AccessControlException is wrapped inside another IOException and you can only get the original exception by calling getCause().
> Let's add this extra case as well.
[jira] [Created] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException
Adam Antal created HADOOP-16683:
-----------------------------------

             Summary: Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException
                 Key: HADOOP-16683
                 URL: https://issues.apache.org/jira/browse/HADOOP-16683
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
    Affects Versions: 3.3.0
            Reporter: Adam Antal
            Assignee: Adam Antal


Follow-up patch on HADOOP-16580.

We successfully disabled the retry in case of an AccessControlException, which has resolved some of the cases, but in other cases the AccessControlException is wrapped inside another IOException and you can only get the original exception by calling getCause().

Let's add this extra case as well.
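The extra case described above amounts to walking the cause chain. The following is an illustrative stand-alone sketch, not the actual Hadoop patch; {{java.security.AccessControlException}} is used here only as a stand-in for Hadoop's {{org.apache.hadoop.security.AccessControlException}}, and the method name is made up.

```java
import java.io.IOException;
import java.security.AccessControlException;

class WrappedAceCheck {

    // Walk getCause() so that an AccessControlException wrapped inside
    // another IOException is treated the same as a direct one.
    static boolean isOrWrapsAccessControlException(Throwable t) {
        while (t != null) {
            if (t instanceof AccessControlException) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable direct = new AccessControlException("no credentials");
        Throwable wrapped = new IOException("rpc failed",
            new AccessControlException("no credentials"));
        Throwable unrelated = new IOException("connection reset");

        System.out.println(isOrWrapsAccessControlException(direct));    // true
        System.out.println(isOrWrapsAccessControlException(wrapped));   // true
        System.out.println(isOrWrapsAccessControlException(unrelated)); // false
    }
}
```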
[jira] [Created] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
Adam Antal created HADOOP-16580:
-----------------------------------

             Summary: Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
                 Key: HADOOP-16580
                 URL: https://issues.apache.org/jira/browse/HADOOP-16580
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
    Affects Versions: 3.3.0
            Reporter: Adam Antal
            Assignee: Adam Antal


HADOOP-14982 handled the case where a SaslException is thrown. The issue still persists, since the exception that is thrown is an *AccessControlException*, because the user has no Kerberos credentials.

My suggestion is that we should add this case as well to {{FailoverOnNetworkExceptionRetry}}.
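The suggested policy change could look roughly like the stand-alone sketch below. The enum and method names only mimic the shape of Hadoop's {{RetryPolicy}}, and {{java.security.AccessControlException}} again stands in for the Hadoop class; none of these names are from the actual patch.

```java
import java.security.AccessControlException;

class FailoverRetrySketch {

    // Mimics the shape of RetryPolicy.RetryAction; illustrative only.
    enum Action { FAIL, FAILOVER_AND_RETRY }

    // Retrying is pointless when the user simply has no valid credentials,
    // so the policy should fail fast instead of failing over and retrying.
    static Action shouldRetry(Exception e) {
        if (e instanceof AccessControlException) {
            return Action.FAIL;           // credentials problem: never retry
        }
        return Action.FAILOVER_AND_RETRY; // network-ish problem: keep trying
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new AccessControlException("no TGT")));
        System.out.println(shouldRetry(new java.net.ConnectException("refused")));
    }
}
```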
[jira] [Created] (HADOOP-16512) [hadoop-tools] Fix order of actual and expected expression in assert statements
Adam Antal created HADOOP-16512:
-----------------------------------

             Summary: [hadoop-tools] Fix order of actual and expected expression in assert statements
                 Key: HADOOP-16512
                 URL: https://issues.apache.org/jira/browse/HADOOP-16512
             Project: Hadoop Common
          Issue Type: Sub-task
    Affects Versions: 3.2.0
            Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements, which gives a misleading message when a test case fails. The attached file lists some of the places where the order is wrong.

{code:java}
[ERROR] testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but was:<0>
{code}

In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be used for new test cases, which avoids such mistakes.

This is a follow-up Jira on the fix for the hadoop-tools project.
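The mix-up is easy to see once the failure-message format is spelled out. Below is a stand-alone sketch with a hand-rolled stand-in for JUnit's {{assertEquals(message, expected, actual)}} (no JUnit dependency), showing how swapped arguments produce exactly the misleading text quoted in the report:

```java
class AssertOrderSketch {

    // Minimal reimplementation of JUnit 4's assertEquals(message, expected,
    // actual) failure message, just to demonstrate why argument order matters.
    static void assertEquals(String message, Object expected, Object actual) {
        if (expected == null ? actual == null : expected.equals(actual)) {
            return;
        }
        throw new AssertionError(
            message + " expected:<" + expected + "> but was:<" + actual + ">");
    }

    public static void main(String[] args) {
        int numShutdownNodes = 1; // the actual value the code under test produced

        try {
            // Wrong order: the actual value is passed where "expected" belongs.
            assertEquals("Shutdown nodes should be 0 now", numShutdownNodes, 0);
        } catch (AssertionError e) {
            // Misleading: claims 1 was expected, although 1 was the actual value.
            System.out.println(e.getMessage());
            // -> Shutdown nodes should be 0 now expected:<1> but was:<0>
        }

        try {
            // Correct order: expected first, actual second.
            assertEquals("Shutdown nodes should be 0 now", 0, numShutdownNodes);
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
            // -> Shutdown nodes should be 0 now expected:<0> but was:<1>
        }
    }
}
```

With AssertJ the actual value is the subject of the call chain ({{assertThat(actual).isEqualTo(expected)}}), so this class of mistake cannot happen in the first place.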
[jira] [Created] (HADOOP-16511) [hadoop-hdfs] Fix order of actual and expected expression in assert statements
Adam Antal created HADOOP-16511:
-----------------------------------

             Summary: [hadoop-hdfs] Fix order of actual and expected expression in assert statements
                 Key: HADOOP-16511
                 URL: https://issues.apache.org/jira/browse/HADOOP-16511
             Project: Hadoop Common
          Issue Type: Sub-task
    Affects Versions: 3.2.0
            Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements, which gives a misleading message when a test case fails. The attached file lists some of the places where the order is wrong.

{code:java}
[ERROR] testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but was:<0>
{code}

In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be used for new test cases, which avoids such mistakes.

This is a follow-up Jira for the hadoop-hdfs project.
[jira] [Created] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
Adam Antal created HADOOP-16510:
-----------------------------------

             Summary: [hadoop-common] Fix order of actual and expected expression in assert statements
                 Key: HADOOP-16510
                 URL: https://issues.apache.org/jira/browse/HADOOP-16510
             Project: Hadoop Common
          Issue Type: Sub-task
    Affects Versions: 3.2.0
            Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements, which gives a misleading message when a test case fails. The attached file lists some of the places where the order is wrong.

{code:java}
[ERROR] testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but was:<0>
{code}

In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be used for new test cases, which avoids such mistakes.

This is a follow-up Jira for the hadoop-common project.
[jira] [Created] (HADOOP-16503) [JDK11] TestLeafQueue tests are failing due to WrongTypeOfReturnValue
Adam Antal created HADOOP-16503:
-----------------------------------

             Summary: [JDK11] TestLeafQueue tests are failing due to WrongTypeOfReturnValue
                 Key: HADOOP-16503
                 URL: https://issues.apache.org/jira/browse/HADOOP-16503
             Project: Hadoop Common
          Issue Type: Sub-task
    Affects Versions: 3.2.0
            Reporter: Adam Antal
            Assignee: Adam Antal


Many of the tests in {{org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue}} fail with the following error message when running on JDK11:

{noformat}
[ERROR] testSingleQueueWithOneUser(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue)  Time elapsed: 0.204 s  <<< ERROR!
org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
YarnConfiguration cannot be returned by getRMNodes()
getRMNodes() should return ConcurrentMap
***
If you're unsure why you're getting above error read on.
Due to the nature of the syntax above problem might occur because:
1. This exception *might* occur in wrongly written multi-threaded tests.
   Please refer to Mockito FAQ on limitations of concurrency testing.
2. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub spies -
   - with doReturn|Throw() family of methods. More in javadocs for Mockito.spy() method.

	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue.setUpInternal(TestLeafQueue.java:221)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue.setUp(TestLeafQueue.java:144)
	...
{noformat}

This happens because stubbing with {{when(spy.foo())}} actually executes the call, while we only need to record its invocation. As the Mockito javadocs recommend, spies should instead be stubbed with the {{doReturn()}} family of methods.
[jira] [Created] (HADOOP-16168) mvn clean site is not compiling in trunk
Adam Antal created HADOOP-16168:
-----------------------------------

             Summary: mvn clean site is not compiling in trunk
                 Key: HADOOP-16168
                 URL: https://issues.apache.org/jira/browse/HADOOP-16168
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 3.2.1
            Reporter: Adam Antal


This is a follow-up Jira for HDFS-14118.

{{mvn clean site}} is not compiling on trunk, with the following error message:

{noformat}
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[23,29] cannot find symbol
  symbol:   class MockDomainNameResolver
  location: package org.apache.hadoop.net
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[149,11] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[150,11] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[162,9] cannot find symbol
  symbol:   class MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[261,9] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[263,9] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[288,19] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
[ERROR] /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[292,19] cannot find symbol
  symbol:   variable MockDomainNameResolver
  location: class org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
{noformat}

{{MockDomainNameResolver}} is in {{hadoop-common-project/hadoop-common/src/test}}, while {{TestConfiguredFailoverProxyProvider}} is in {{hadoop-hdfs-project/hadoop-hdfs-client/src/test}}. Though we have the following dependency:

{noformat}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <scope>test</scope>
  <type>test-jar</type>
</dependency>
{noformat}

probably that's not enough.
[jira] [Created] (HADOOP-16124) Extend documentation in testing.md about endpoint constants
Adam Antal created HADOOP-16124:
-----------------------------------

             Summary: Extend documentation in testing.md about endpoint constants
                 Key: HADOOP-16124
                 URL: https://issues.apache.org/jira/browse/HADOOP-16124
             Project: Hadoop Common
          Issue Type: Improvement
          Components: hadoop-aws
    Affects Versions: 3.2.0
            Reporter: Adam Antal
            Assignee: Adam Antal


Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml of hadoop-aws. This is useful to know for anyone getting started with testing in hadoop-aws, so I suggest adding this small addition to testing.md.
[jira] [Created] (HADOOP-16043) NPE in ITestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is not set
Adam Antal created HADOOP-16043:
-----------------------------------

             Summary: NPE in ITestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is not set
                 Key: HADOOP-16043
                 URL: https://issues.apache.org/jira/browse/HADOOP-16043
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
    Affects Versions: 3.2.0
            Reporter: Adam Antal


When running the {{org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore}} integration test, I got the following stack trace:

{code:java}
[ERROR] org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore  Time elapsed: 0.333 s  <<< ERROR!
java.lang.NullPointerException
	at org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.beforeClassSetup(ITestDynamoDBMetadataStore.java:164)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

The NPE happened here:

{code:java}
    assertTrue("Test DynamoDB table name: '" + S3GUARD_DDB_TEST_TABLE_NAME_KEY
        + "' and production table name: '" + S3GUARD_DDB_TABLE_NAME_KEY
        + "' can not be the same.",
        !conf.get(S3GUARD_DDB_TABLE_NAME_KEY).equals(testDynamoDBTableName));
{code}

The problem is that we check beforehand whether the variable testDynamoDBTableName (the {{fs.s3a.s3guard.ddb.test.table}} config) is not null, but we don't do the same for {{fs.s3a.s3guard.ddb.table}} ({{S3GUARD_DDB_TABLE_NAME_KEY}}) before calling {{.equals()}} on it, thus causing an NPE. Since we don't need the {{fs.s3a.s3guard.ddb.table}} config for the test, we should first check whether that config is set at all, and only compare the two configs if they both exist.
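A sketch of the proposed null-safe check. All names are illustrative (a plain {{Map}} stands in for the Hadoop {{Configuration}} object, and {{namesClash}} is a made-up helper); only the config key comes from the report.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

class DdbTableNameCheck {

    // Only compare the two table names when the production key is actually
    // set, so a missing fs.s3a.s3guard.ddb.table can no longer trigger an
    // NPE the way conf.get(...).equals(...) did.
    static boolean namesClash(Map<String, String> conf, String testTableName) {
        String productionTableName = conf.get("fs.s3a.s3guard.ddb.table");
        // Objects.equals is null-safe; an unset (null) key never clashes.
        return productionTableName != null
            && Objects.equals(productionTableName, testTableName);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(namesClash(conf, "test-table")); // false: key unset, no NPE

        conf.put("fs.s3a.s3guard.ddb.table", "test-table");
        System.out.println(namesClash(conf, "test-table")); // true: same name, abort
    }
}
```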
[jira] [Created] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key
Adam Antal created HADOOP-15986:
-----------------------------------

             Summary: Allowing files to be moved between encryption zones having the same encryption key
                 Key: HADOOP-15986
                 URL: https://issues.apache.org/jira/browse/HADOOP-15986
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Adam Antal


Currently HDFS blocks you from moving files from one encryption zone to another. On the surface this is fine, but we also allow multiple encryption zones to use the same encryption zone key. If we allow multiple zones to use the same zone key, we should also allow files to be moved between those zones.

I believe we should either disallow using the same key for multiple encryption zones, or allow moving files between zones when the key is the same. The latter is the more user-friendly option and allows for different HDFS directory structures.
[jira] [Created] (HADOOP-15914) hadoop jar command has no help argument
Adam Antal created HADOOP-15914:
-----------------------------------

             Summary: hadoop jar command has no help argument
                 Key: HADOOP-15914
                 URL: https://issues.apache.org/jira/browse/HADOOP-15914
             Project: Hadoop Common
          Issue Type: Improvement
          Components: common
            Reporter: Adam Antal


The {{hadoop jar --help}} and {{hadoop jar help}} commands show output like this:

{noformat}
WARNING: Use "yarn jar" to launch YARN applications.
JAR does not exist or is not a normal file: /root/--help
{noformat}

Only when called with no arguments ({{hadoop jar}}) do we get the usage text, but even in that case we get:

{noformat}
WARNING: Use "yarn jar" to launch YARN applications.
RunJar jarFile [mainClass] args...
{noformat}

where RunJar is wrapped by the hadoop script (so it should not be displayed). {{hadoop --help}} displays the following:

{noformat}
    jar          run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.
{noformat}

which is fine, but {{CommandsManual.md}} gives a bit more information about the usage of this command:

{noformat}
Usage: hadoop jar <jar> [mainClass] args...
{noformat}

My suggestion is to add a {{--help}} option to the {{hadoop jar}} command that would display this message.