[jira] [Resolved] (HDFS-14100) TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml fails due to missing dfs.image.string-tables.expanded from hdfs-defaults.xml

2018-11-26 Thread Zsolt Venczel (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zsolt Venczel resolved HDFS-14100.
--
Resolution: Invalid

The failure happened due to a local git issue.

> TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml fails due to missing dfs.image.string-tables.expanded from hdfs-defaults.xml
> 
>
> Key: HDFS-14100
> URL: https://issues.apache.org/jira/browse/HDFS-14100
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
>
> After HDFS-13882, TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml requires hdfs-defaults.xml to have dfs.image.string-tables.expanded added and populated with a default value.



[jira] [Created] (HDFS-14100) TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml fails due to missing dfs.image.string-tables.expanded from hdfs-defaults.xml

2018-11-26 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-14100:


 Summary: TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml fails due to missing dfs.image.string-tables.expanded from hdfs-defaults.xml
 Key: HDFS-14100
 URL: https://issues.apache.org/jira/browse/HDFS-14100
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


After HDFS-13882, TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml requires hdfs-defaults.xml to have dfs.image.string-tables.expanded added and populated with a default value.
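
For context, a minimal sketch of the kind of entry the test expects: the property name comes from this issue's title, while the value and description below are assumptions for illustration, not taken from HDFS-13882:

{code:xml}
<!-- Hypothetical hdfs-defaults.xml entry; the default value "false" and the
     description wording are assumptions, not the actual HDFS-13882 text. -->
<property>
  <name>dfs.image.string-tables.expanded</name>
  <value>false</value>
  <description>
    Placeholder description; see HDFS-13882 for the authoritative default
    value and wording.
  </description>
</property>
{code}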



[jira] [Created] (HDFS-14054) TestLeaseRecovery2: testHardLeaseRecoveryAfterNameNodeRestart2 and testHardLeaseRecoveryWithRenameAfterNameNodeRestart are flaky

2018-11-07 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-14054:


 Summary: TestLeaseRecovery2: testHardLeaseRecoveryAfterNameNodeRestart2 and testHardLeaseRecoveryWithRenameAfterNameNodeRestart are flaky
 Key: HDFS-14054
 URL: https://issues.apache.org/jira/browse/HDFS-14054
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.3, 2.6.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


{code}
---
 T E S T S
---
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 68.971 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLeaseRecovery2
testHardLeaseRecoveryAfterNameNodeRestart2(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 4.375 sec  <<< FAILURE!
java.lang.AssertionError: lease holder should now be the NN
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.checkLease(TestLeaseRecovery2.java:568)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:520)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:437)
testHardLeaseRecoveryWithRenameAfterNameNodeRestart(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 4.339 sec  <<< FAILURE!
java.lang.AssertionError: lease holder should now be the NN
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.checkLease(TestLeaseRecovery2.java:568)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:520)
at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:443)

Results :

Failed tests: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:437->hardLeaseRecoveryRestartHelper:520->checkLease:568 lease holder should now be the NN
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:443->hardLeaseRecoveryRestartHelper:520->checkLease:568 lease holder should now be the NN

Tests run: 7, Failures: 2, Errors: 0, Skipped: 0
{code}



[jira] [Created] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-07-18 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13744:


 Summary: OIV tool should better handle control characters present in file or directory names
 Key: HDFS-13744
 URL: https://issues.apache.org/jira/browse/HDFS-13744
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, tools
Affects Versions: 3.0.3, 2.7.6, 2.8.4, 2.9.1, 2.6.5
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


In certain cases, when control characters or whitespace are present in file or directory names, the OIV tool processors can export data in a misleading format.

In the examples below, EXAMPLE_NAME is used as both a file name and a directory name, where the directory name has a line feed character at the end (the actual production case has multiple line feeds and multiple spaces); a hedged quoting sketch follows the examples.
 * CSV processor case:
 ** misleading example:
{code:java}
/user/data/EXAMPLE_NAME
,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
/user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 ** expected example as suggested by [https://tools.ietf.org/html/rfc4180#section-2]:
{code:java}
"/user/data/EXAMPLE_NAME%x0D",0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
"/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 * XML processor case:
 ** misleading example:
{code:java}
<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME
</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>

<inode><id>113632535</id><type>FILE</type><name>EXAMPLE_NAME</name><replication>3</replication><mtime>1472205657504</mtime><atime>1494954320141</atime><preferredBlockSize>134217728</preferredBlockSize><permission>user:group:0674</permission></inode>
{code}

 ** expected example as specified in [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
{code:java}
<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME&#xA;</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>

<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME
</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>
{code}

 * JSON:
 The OIV Web Processor behaves correctly and produces the following:
{code:java}
{
  "FileStatuses": {
    "FileStatus": [
      {
        "fileId": 113632535,
        "accessTime": 1494954320141,
        "replication": 3,
        "owner": "user",
        "length": 520,
        "permission": "674",
        "blockSize": 134217728,
        "modificationTime": 1472205657504,
        "type": "FILE",
        "group": "group",
        "childrenNum": 0,
        "pathSuffix": "EXAMPLE_NAME"
      },
      {
        "fileId": 479867791,
        "accessTime": 0,
        "replication": 0,
        "owner": "user",
        "length": 0,
        "permission": "775",
        "blockSize": 0,
        "modificationTime": 1493033668294,
        "type": "DIRECTORY",
        "group": "group",
        "childrenNum": 0,
        "pathSuffix": "EXAMPLE_NAME\n"
      }
    ]
  }
}
{code}
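
For reference, the RFC 4180 rule used in the expected CSV output above can be sketched as follows; this is a hedged illustration only, not the OIV processor code, and the class and method names are made up:

{code:java}
// Hedged sketch: a field is quoted when it contains a comma, a double
// quote, a CR or an LF, and embedded double quotes are doubled
// (RFC 4180, section 2).
public final class Rfc4180QuoteSketch {
  static String quote(String field) {
    boolean needsQuoting = field.contains(",") || field.contains("\"")
        || field.contains("\r") || field.contains("\n");
    if (!needsQuoting) {
      return field;
    }
    return "\"" + field.replace("\"", "\"\"") + "\"";
  }

  public static void main(String[] args) {
    // The directory name from the example, with its trailing line feed,
    // comes out as a single quoted CSV field instead of breaking the row.
    System.out.println(quote("/user/data/EXAMPLE_NAME\n"));
  }
}
{code}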



[jira] [Created] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext

2018-06-25 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13697:


 Summary: EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
 Key: HDFS-13697
 URL: https://issues.apache.org/jira/browse/HDFS-13697
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack might not have a doAs privileged execution call (in the DFSClient, for example). This results in losing the proxy user from the UGI, as UGI.getCurrentUser finds no AccessControllerContext and does a re-login for the login user only.

This can cause the following, for example: if we have set up the oozie user to be entitled to perform actions on behalf of example_user, but oozie is forbidden to decrypt any EDEK (for security reasons), then due to the above issue the example_user entitlements are lost from the UGI and the following error is reported (a hedged doAs sketch follows the stack trace below):

{code}
[0] SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] JOB[0020905-180313191552532-oozie-oozi-W] ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!]
org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
 at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
 at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
 at org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
 at org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
 at org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
 at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
 at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
 at org.apache.oozie.command.XCommand.call(XCommand.java:286)
 at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
 at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832)
 at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209)
 at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205)
 at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
 at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205)
 at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
 at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440)
 at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1542)
 at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1527)
 at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:408)
 at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:401)
 at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:401)
 at org.apache.hadoop.hdfs.DistributedFileSystem.cr
{code}
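
The hedged doAs sketch mentioned above: this is not the actual HDFS-13697 patch, only an illustration of capturing the current UGI (while the proxy user is still attached) and running the decrypt call inside doAs; the class and method names are hypothetical:

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
import org.apache.hadoop.security.UserGroupInformation;

public final class DecryptWithCallerUgi {
  static KeyVersion decrypt(final KeyProviderCryptoExtension provider,
      final EncryptedKeyVersion edek) throws Exception {
    // Capture the UGI while the AccessControllerContext, and with it the
    // proxy user, is still intact, instead of letting a later
    // UGI.getCurrentUser() fall back to a re-login of the login user.
    final UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return ugi.doAs(new PrivilegedExceptionAction<KeyVersion>() {
      @Override
      public KeyVersion run() throws Exception {
        return provider.decryptEncryptedKey(edek);
      }
    });
  }
}
{code}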

[jira] [Created] (HDFS-13582) Improve backward compatibility for HDFS-13176 (WebHdfs file path gets truncated when having semicolon (;) inside)

2018-05-17 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13582:


 Summary: Improve backward compatibility for HDFS-13176 (WebHdfs file path gets truncated when having semicolon (;) inside)
 Key: HDFS-13582
 URL: https://issues.apache.org/jira/browse/HDFS-13582
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel
 Fix For: 3.2.0


Encode special characters only if necessary, in order to improve backward compatibility in the following scenario:

new (having HDFS-13176) WebHdfs client -> old (not having HDFS-13176) WebHdfs server
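
A minimal sketch of the idea, with hypothetical names; this is not the committed change, just an illustration of encoding only when the special character is actually present, so that unaffected paths reach old servers byte-for-byte unchanged:

{code:java}
// Hedged sketch: percent-encode the semicolon if and only if it occurs,
// leaving all other paths untouched for old (pre-HDFS-13176) servers.
public final class ConditionalEncodeSketch {
  static String encodeIfNeeded(String path) {
    if (path.indexOf(';') < 0) {
      return path;                      // common case: nothing to encode
    }
    return path.replace(";", "%3B");    // encode the semicolon only
  }
}
{code}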



[jira] [Created] (HDFS-13176) WebHdfs file path gets truncated if having semicolon (;) in the name

2018-02-21 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13176:


 Summary: WebHdfs file path gets truncated if having semicolon (;) in the name
 Key: HDFS-13176
 URL: https://issues.apache.org/jira/browse/HDFS-13176
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel
 Attachments: TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch

Attached is a patch with a test case that tries to reproduce the problem.
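
For context, a tiny hedged illustration, not the attached patch, of the path shape involved and its percent-encoded form:

{code:java}
public final class SemicolonPathExample {
  public static void main(String[] args) {
    String name = "test;file.txt";                 // hypothetical file name
    System.out.println(name);                      // test;file.txt
    System.out.println(name.replace(";", "%3B"));  // test%3Bfile.txt
  }
}
{code}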



[jira] [Created] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing

2018-01-09 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13004:


 Summary: TestLeaseRecoveryStriped#testLeaseRecovery is failing
 Key: HDFS-13004
 URL: https://issues.apache.org/jira/browse/HDFS-13004
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel
 Fix For: 3.0.1


{code}
Error:
failed testCase at i=1, blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths={4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824]
java.lang.AssertionError: File length should be the same expected:<25165824> but was:<18874368>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79)
at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362)
at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198)
at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
{code}

[jira] [Created] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-12913:


 Summary: TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred
 Key: HDFS-12913
 URL: https://issues.apache.org/jira/browse/HDFS-12913
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


Once in every 5000 test runs, the following issue happens:
{code}
2017-12-11 10:33:09 [INFO] 
2017-12-11 10:33:09 [INFO] ---
2017-12-11 10:33:09 [INFO]  T E S T S
2017-12-11 10:33:09 [INFO] ---
2017-12-11 10:33:09 [INFO] Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 262.641 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)  Time elapsed: 262.477 s  <<< ERROR!
2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
2017-12-11 10:37:32 at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
2017-12-11 10:37:32 at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
2017-12-11 10:37:32 at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
2017-12-11 10:37:32 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2017-12-11 10:37:32 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2017-12-11 10:37:32 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2017-12-11 10:37:32 at java.lang.reflect.Method.invoke(Method.java:498)
2017-12-11 10:37:32 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
2017-12-11 10:37:32 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2017-12-11 10:37:32 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
2017-12-11 10:37:32 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
2017-12-11 10:37:32 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
2017-12-11 10:37:32 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
2017-12-11 10:37:32 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
2017-12-11 10:37:32 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
2017-12-11 10:37:32 at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
2017-12-11 10:37:32 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
2017-12-11 10:37:32 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
2017-12-11 10:37:32 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
2017-12-11 10:37:32 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
2017-12-11 10:37:32 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
2017-12-11 10:37:32 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2017-12-11 10:37:32 at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
2017-12-11 10:37:32 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
2017-12-11 10:37:32 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
2017-12-11 10:37:32 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1862)
2017-12-11 10:37:32 at or
{code}

[jira] [Reopened] (HDFS-12891) TestClusterTopology#testChooseRandom fails intermittently

2017-12-05 Thread Zsolt Venczel (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zsolt Venczel reopened HDFS-12891:
--

> TestClusterTopology#testChooseRandom fails intermittently
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:36701] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:546)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:955)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2655)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
> {noformat}



[jira] [Created] (HDFS-12892) TestClusterTopology#testChooseRandom fails intermittently

2017-12-05 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-12892:


 Summary: TestClusterTopology#testChooseRandom fails intermittently
 Key: HDFS-12892
 URL: https://issues.apache.org/jira/browse/HDFS-12892
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Zsolt Venczel


Flaky test failure:
{code:java}
java.lang.AssertionError
Error
Not choosing nodes randomly
Stack Trace
java.lang.AssertionError: Not choosing nodes randomly
at org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170)
{code}




[jira] [Resolved] (HDFS-12891) TestClusterTopology#testChooseRandom fails intermittently

2017-12-05 Thread Zsolt Venczel (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zsolt Venczel resolved HDFS-12891.
--
Resolution: Invalid

> TestClusterTopology#testChooseRandom fails intermittently
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Zsolt Venczel
>Assignee: Eric Badger
>  Labels: flaky-test
> Fix For: 2.8.3, 3.1.0, 3.0.0-beta1, 2.9.0
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:36701] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:546)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:955)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2655)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
>   at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
> {noformat}



[jira] [Created] (HDFS-12891) TestClusterTopology#testChooseRandom fails intermittently

2017-12-05 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-12891:


 Summary: TestClusterTopology#testChooseRandom fails intermittently
 Key: HDFS-12891
 URL: https://issues.apache.org/jira/browse/HDFS-12891
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
Reporter: Zsolt Venczel
Assignee: Eric Badger
 Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0


{noformat}
java.net.BindException: Problem binding to [localhost:36701] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:546)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:955)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2655)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
{noformat}


