[jira] [Comment Edited] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638684#comment-15638684
 ] 

John Zhuge edited comment on HADOOP-13787 at 11/5/16 5:35 AM:
--

Forgot to mention that I ran the tests on grind. It turned out that 4 of the 5 
tests are flaky: they did succeed after a few retries on grind. Only 
{{TestLocalDirAllocator}} failed consistently on grind.

All 5 tests passed with {{mvn test}} on Ubuntu 14, which explains why the 
pre-commit tests for HADOOP-7352 passed.


was (Author: jzhuge):
Forgot to mention that I ran the tests on grind. It turned out that 4 of the 5 
tests are flaky: they did succeed after a few retries on grind. Only 
{{TestLocalDirAllocator}} failed consistently on grind.

All 5 tests passed with {{mvn test}} on Ubuntu 14, which explains the 
pre-commit test success.

> Azure testGlobStatusThrowsExceptionForUnreadableDir fails
> -
>
> Key: HADOOP-13787
> URL: https://issues.apache.org/jira/browse/HADOOP-13787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13787.001.patch
>
>
> Test 
> {{TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir}}
>  failed in trunk:
> {noformat}
> Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> testGlobStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
>  Time elapsed: 1.111 sec <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:643)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests:
> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}
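
For context, a minimal, hypothetical sketch of the failing pattern (not the actual 
{{FSMainOperationsBaseTest}} source): the test makes a directory unreadable, globs 
under it, and expects an {{IOException}}. When the file system does not enforce the 
permission change, no exception is raised and the {{fail("Should throw IOException")}} 
seen in the stack trace above is reached.

{code:java}
import static org.junit.Assert.fail;

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class UnreadableDirGlobSketch {
  // Simplified shape of the failing check; names and permissions are
  // illustrative, not copied from the real test.
  static void expectGlobFailureOnUnreadableDir(FileSystem fs, Path dir)
      throws IOException {
    fs.mkdirs(dir);
    fs.setPermission(dir, new FsPermission((short) 0000));
    try {
      fs.globStatus(new Path(dir, "*"));
      // Reached when the file system does not enforce the permission change.
      fail("Should throw IOException");
    } catch (IOException expected) {
      // Expected when the unreadable directory is actually enforced.
    } finally {
      fs.setPermission(dir, new FsPermission((short) 0755));
    }
  }
}
{code}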






[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638684#comment-15638684
 ] 

John Zhuge commented on HADOOP-13787:
-

Forgot to mention that I ran the tests on grind. It turned out that 4 of the 5 
tests are flaky: they did succeed after a few retries on grind. Only 
{{TestLocalDirAllocator}} failed consistently on grind.

All 5 tests passed with {{mvn test}} on Ubuntu 14.




[jira] [Comment Edited] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638684#comment-15638684
 ] 

John Zhuge edited comment on HADOOP-13787 at 11/5/16 5:30 AM:
--

Forgot to mention that I ran the tests on grind. It turned out that 4 of the 5 
tests are flaky: they did succeed after a few retries on grind. Only 
{{TestLocalDirAllocator}} failed consistently on grind.

All 5 tests passed with {{mvn test}} on Ubuntu 14, which explains the 
pre-commit test success.


was (Author: jzhuge):
Forgot to mention that I ran the tests on grind. It turned out that 4 of the 5 
tests are flaky: they did succeed after a few retries on grind. Only 
{{TestLocalDirAllocator}} failed consistently on grind.

All 5 tests passed with {{mvn test}} on Ubuntu 14.




[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638650#comment-15638650
 ] 

Xiao Chen commented on HADOOP-13787:


Thanks John! Will follow up on all related JIRAs.
Not sure why the hadoop-common failures were missed in pre-commit - Jenkins 
gave +1 on HADOOP-7352.




[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638543#comment-15638543
 ] 

John Zhuge commented on HADOOP-13787:
-

Here are the comparison test results:
* The current trunk is {{0aafc12}}, compared against {{29caf6d}}, the commit 
right before HADOOP-7352.
* All 173 {{hadoop-tools}} tests passed.
* 5 out of 441 {{hadoop-common}} tests failed, 3 of them regressions.
* I will file JIRAs for both the regressions and the non-regressions.

org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
* 
TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:288
 Should throw IOException
* (regression) 
TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
 Should throw IOException

org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
* 
TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:288
 Should throw IOException
* (regression) 
TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
 Should throw IOException

TestLocalDirAllocator
* 
TestLocalDirAllocator.testROBufferDirAndRWBufferDir:162->validateTempDirCreation:109
 Checking for build/test/temp/RELATIVE2 in 
build/test/temp/RELATIVE1/block6738757787047387788.tmp - FAILED!
* TestLocalDirAllocator.test0:140->validateTempDirCreation:109 Checking for 
build/test/temp/RELATIVE1 in 
build/test/temp/RELATIVE0/block125615631432807097.tmp - FAILED!
* 
TestLocalDirAllocator.testROBufferDirAndRWBufferDir:162->validateTempDirCreation:109
 Checking for 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
 in 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block3679320121221680948.tmp
 - FAILED!
* TestLocalDirAllocator.test0:141->validateTempDirCreation:109 Checking for 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
 in 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block5094057430925940349.tmp
 - FAILED!
* 
TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
 Checking for 
file:/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
 in 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block959204179043794136.tmp
 - FAILED!
* TestLocalDirAllocator.test0:140->validateTempDirCreation:109 Checking for 
file:/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
 in 
/tmp/run_tha_testQ8gxo9/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block8256098597810969453.tmp
 - FAILED!

TestPathData
* (regression) TestPathData.testGlobThrowsExceptionForUnreadableDir:230 Should 
throw IOException

TestRollingFileSystemSinkWithLocal
* TestRollingFileSystemSinkWithLocal.testFailedWrite:116 No exception was 
generated while writing metrics even though the target directory was not 
writable


[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638371#comment-15638371
 ] 

Hadoop QA commented on HADOOP-13789:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 33s{color} | {color:orange} root: The patch generated 4 new + 1 unchanged - 
1 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
15s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-maven-plugins generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
20s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. 

[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-11-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638118#comment-15638118
 ] 

Hudson commented on HADOOP-13565:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10776 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10776/])
Revert "HADOOP-13565. KerberosAuthenticationHandler#authenticate should (xyao: 
rev 95665a6eea32ff7134ea556db4dd4ae068364fc0)
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java


> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate the 
> client. This can be problematic if the HTTP client/server are running in a 
> non-local Kerberos realm that the local realm has a trust with (e.g., NN UI).
> For example, the server is running its HTTP endpoint using an SPN from the 
> client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> A client sends a request to the namenode at http://NN1.example.com:50070 from 
> client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation.
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP 
> Host field from the request header provided by the browser.
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
> will always return an SPN with the local realm (HTTP/nn.example@example.com), 
> no matter whether the server login SPN is from that realm or not.
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). This way 
> we avoid depending on HTTP client behavior (Host header or name resolution 
> like CNAME) or assumptions about the local realm.
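
To illustrate the proposed direction, here is a hedged sketch (not the HADOOP-13565 
patch itself) of the difference between creating the SPNEGO acceptor credential for a 
request-derived SPN and passing {{null}} so GSS-API uses the server's login 
principal(s). The class and method names below are invented for illustration, and the 
code assumes it runs inside a {{Subject.doAs}} with the server's Kerberos login subject.

{code:java}
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class SpnegoAcceptorCredentialSketch {

  private static final String SPNEGO_OID = "1.3.6.1.5.5.2";

  // Current behaviour (simplified): build the acceptor name from the server
  // name taken from the HTTP request, which GSS maps to the local default
  // realm regardless of which realm the server actually logged in from.
  static GSSCredential fromRequestServerName(GSSManager gssManager,
      String serverName) throws GSSException {
    GSSName serverSpn = gssManager.createName("HTTP@" + serverName,
        GSSName.NT_HOSTBASED_SERVICE);
    return gssManager.createCredential(serverSpn,
        GSSCredential.INDEFINITE_LIFETIME,
        new Oid[] {new Oid(SPNEGO_OID)},
        GSSCredential.ACCEPT_ONLY);
  }

  // Proposed direction: pass null as the name so the acceptor credential is
  // taken from whatever principal(s) the server logged in with, independent
  // of the Host header or client-side name resolution.
  static GSSCredential fromLoginPrincipal(GSSManager gssManager)
      throws GSSException {
    return gssManager.createCredential(null,
        GSSCredential.INDEFINITE_LIFETIME,
        new Oid[] {new Oid(SPNEGO_OID)},
        GSSCredential.ACCEPT_ONLY);
  }
}
{code}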






[jira] [Reopened] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-11-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reopened HADOOP-13565:
-

Reopening the issue, as the change breaks the existing support for multiple 
HTTP principals. I will revert it from trunk and the other branches.

The original problem of the server SPN always getting the default realm can be 
solved by improving KerberosUtil#getDomainRealm() to look up the domain_realm 
map from the krb5 Config.




[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-11-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Hadoop Flags:   (was: Reviewed)




[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-11-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13565:

Fix Version/s: (was: 3.0.0-alpha2)
   (was: 2.8.0)




[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-04 Thread Shi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637970#comment-15637970
 ] 

Shi Wang commented on HADOOP-13119:
---

Hi [~yuanbo],

Your proposals look good to me. For the second point, there is 
DelegationTokenAuthenticationFilter, which extends AuthenticationFilter and 
supports proxy users.
In my opinion, we can either add proxy-user support directly in 
AuthenticationFilter or use the existing DelegationTokenAuthenticationFilter.
Adding it directly to AuthenticationFilter seems more straightforward and 
touches fewer files, but we need to verify that it is harmless and makes sense 
to add it there.
To use the existing code in DelegationTokenAuthenticationFilter, we need a 
filter initializer that adds DelegationTokenAuthenticationFilter to the filter 
chain.
Because YARN uses RMAuthenticationFilterInitializer to support delegation 
token authentication and proxy users, we may be able to apply the same 
approach to hadoop-common.
By configuring {{hadoop.http.filter.initializers}} to a self-defined filter 
initializer, we can add filters as needed.
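
A rough, hypothetical sketch of such an initializer (the class name and parameter 
handling are made up for illustration; only {{FilterInitializer}}, {{FilterContainer}}, 
and {{DelegationTokenAuthenticationFilter}} are existing Hadoop classes):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;
import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter;

public class ProxyUserAuthFilterInitializer extends FilterInitializer {

  private static final String PREFIX = "hadoop.http.authentication.";

  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    // Pass hadoop.http.authentication.* settings to the filter with the
    // prefix stripped, so auth type, principal, keytab, etc. are picked up.
    Map<String, String> params = new HashMap<String, String>();
    for (Map.Entry<String, String> entry : conf) {
      String name = entry.getKey();
      if (name.startsWith(PREFIX)) {
        params.put(name.substring(PREFIX.length()), conf.get(name));
      }
    }
    container.addFilter("proxyUserAuthFilter",
        DelegationTokenAuthenticationFilter.class.getName(), params);
  }
}
{code}

The initializer would then be enabled by listing its class name in 
{{hadoop.http.filter.initializers}}.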

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user, kinit.
> start Firefox and enable Kerberos
> access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user could access the logs.
> Would expect, as a user, to be able to reach the logs link in the web interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secured paths.






[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-04 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637956#comment-15637956
 ] 

Aaron Fabbri commented on HADOOP-13650:
---

Great progress [~eddyxu], just noticed you had a patch ready.

I haven't carefully reviewed the whole thing yet but here are some initial 
comments:

Test code: Instead of changing MetadataStoreTestBase to be 
S3AFileStatus-specific and removing the existing test code for FileStatus (which 
I added in response to review comments), please create a subclass 
TestDynamoDBMetadataStore and make the changes there.  Let me know if you need 
help with any refactoring.  You should have a subclass of AbstractMSContract as 
well.  You can add an overridable query function to MetadataStoreTestBase, 
like {{boolean preservesFullFileStatus()}}, which returns false for 
TestDynamoDBMetadataStore but true in the base class.  In 
MetadataStoreTestBase, if that function returns false, you can skip the 
assertions on those fields (accessTime, owner, group, etc.) instead of just 
removing the base test code.
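
Something along these lines (a sketch only; the assertion helper and the field list 
are mine, while the class and method names come from this thread):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.FileStatus;

public abstract class MetadataStoreTestBase {

  /** Stores that keep only the S3A-relevant fields override this. */
  protected boolean preservesFullFileStatus() {
    return true;
  }

  /** Hypothetical shared assertion used by the common tests. */
  protected void verifyFileStatus(FileStatus expected, FileStatus actual) {
    assertEquals(expected.getPath(), actual.getPath());
    assertEquals(expected.getLen(), actual.getLen());
    if (preservesFullFileStatus()) {
      // Only checked for stores that round-trip the full FileStatus.
      assertEquals(expected.getOwner(), actual.getOwner());
      assertEquals(expected.getGroup(), actual.getGroup());
      assertEquals(expected.getAccessTime(), actual.getAccessTime());
    }
  }
}

class TestDynamoDBMetadataStore extends MetadataStoreTestBase {
  @Override
  protected boolean preservesFullFileStatus() {
    return false; // accessTime/owner/group are not preserved by this store.
  }
}
{code}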

Code churn: FYI, I'm working on the upstream rebase but I hit some issues and 
have to set up a new development machine.  Trunk has a changed S3AFileStatus 
constructor, so heads-up this will need to be rebased somewhat. I'll keep you 
updated.



> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, e.g., create or delete the metadata store, or {{import}}/{{sync}} the 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality.






[jira] [Commented] (HADOOP-13660) Upgrade commons-configuration version

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637915#comment-15637915
 ] 

Hadoop QA commented on HADOOP-13660:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 39s{color} | {color:orange} root: The patch generated 7 new + 374 unchanged 
- 8 fixed = 381 total (was 382) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-kafka in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637908#comment-15637908
 ] 

Hadoop QA commented on HADOOP-12554:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12554 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824946/HADOOP-12554.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9db8d3f222ea 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0c0ab10 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11000/testReport/ |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11000/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Swift client to read credentials from a credential provider
> ---
>
> Key: HADOOP-12554
> URL: https://issues.apache.org/jira/browse/HADOOP-12554
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch
>
>
> As HADOOP-12548 is going to do for s3, Swift should be reading credentials, 
> particularly 

[jira] [Updated] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13650:
---
Attachment: HADOOP-13650-HADOOP-13345.001.patch

Hi, [~ste...@apache.org]. 

Updated the patch to address most of the comments.

Regarding:

bq. Isn't the S3aFS going to create/init its store? So no need to do that again 
in this class, just ask for the (inited) one in S3AFS.

My thought was that, for commands like {{import}} and {{diff}}, it is better 
to avoid having the filesystem hold a metadata store. It prevents the 
filesystem from returning {{FileStatus}} entries out of the metadata store 
during the {{import}} or {{diff}} process. 

{{initializeS3A()}} might be removed from the final patch. I need to talk with 
[~liuml07] and [~fabbri] to reach a consensus on the interfaces used between 
the MetadataStore and S3AFileSystem.

bq. this patch could go in as an interim measure, with a separate JIRA "migrate 
to s3cmd" being another subtask, one dependent on s3cmd existing.

That works for me. 

Thanks.




[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637875#comment-15637875
 ] 

Hadoop QA commented on HADOOP-12718:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837289/HADOOP-12718.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc496429f4ae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de01327 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10998/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10998/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, 
> 

[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider

2016-11-04 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637831#comment-15637831
 ] 

Chen He commented on HADOOP-12554:
--

I tested it against an OpenStack object store. It does not work:
{quote}
httpclient.HttpMethodDirector: Unable to respond to any of these challenges: 
{token=Token}
{quote}
Maybe the README is not clear?
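
For reference, a minimal sketch of the credential-provider-backed lookup the 
patch is aiming for, assuming the standard {{Configuration.getPassword()}} API; 
the property name below is only an example:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class SwiftPasswordLookup {
  // Sketch only: Configuration.getPassword consults any configured credential
  // providers (hadoop.security.credential.provider.path) before falling back
  // to the plain configuration value. The property name here is illustrative.
  static String resolvePassword(Configuration conf) throws IOException {
    char[] pass = conf.getPassword("fs.swift.service.example.password");
    return pass == null ? null : new String(pass);
  }
}
{code}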

> Swift client to read credentials from a credential provider
> ---
>
> Key: HADOOP-12554
> URL: https://issues.apache.org/jira/browse/HADOOP-12554
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch
>
>
> As HADOOP-12548 is going to do for s3, Swift should be reading credentials, 
> particularly passwords, from a credential provider. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637780#comment-15637780
 ] 

Sean Busbey commented on HADOOP-13789:
--

curl got a transient SSL error fetching the v2 patch in that run. Resubmitted 
now, and it looks like the correct patch is being tested.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds its output as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637763#comment-15637763
 ] 

Sean Busbey commented on HADOOP-13789:
--

huh. that's the wrong patch.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds its output as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637763#comment-15637763
 ] 

Sean Busbey edited comment on HADOOP-13789 at 11/4/16 9:36 PM:
---

huh. that's the wrong patch. (edit: that precommit ran on just now)


was (Author: busbey):
huh. that's the wrong patch.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds its output as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12718:

Attachment: HADOOP-12718.007.patch

Patch 007:
* Add javadoc for the new ACE ({{AccessControlException}}) to throw; rough sketch below
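
A rough sketch of the kind of javadoc and check involved (class, method, and 
wording here are illustrative, not the patch's exact code):
{code}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;

public class LocalSourceCheck {
  /**
   * Verify that a local source directory can be read before copying it.
   *
   * @param dir the local directory to check
   * @throws AccessControlException if the caller lacks read permission on
   *         the directory; this is the new ACE the javadoc documents
   *         (wording here is a sketch, not the patch text).
   * @throws IOException if the directory does not exist
   */
  static void checkReadable(File dir) throws IOException {
    if (!dir.exists()) {
      throw new IOException(dir + ": No such file or directory");
    }
    if (!dir.canRead()) {
      throw new AccessControlException(dir + " (Permission denied)");
    }
  }
}
{code}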

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637719#comment-15637719
 ] 

Xiao Chen commented on HADOOP-13590:


Test failures on both trunk and branch-2 look unrelated and passed locally. 
Trunk's checkstyle warnings are the same pre-existing ones, to be overruled.

Appreciate the reviews. Thanks.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis, until the TGT expires, 
> in the hope that the error clears before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637711#comment-15637711
 ] 

Hadoop QA commented on HADOOP-13789:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836759/HADOOP-13789.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 8687b51da320 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de01327 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10997/testReport/ |
| modules | C: hadoop-maven-plugins hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10997/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637590#comment-15637590
 ] 

Hadoop QA commented on HADOOP-13590:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837233/HADOOP-13590.branch-2.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6eb5bfb12ae4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 0b36dcd |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13789:
-
Status: Patch Available  (was: In Progress)

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds its output as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637499#comment-15637499
 ] 

Sean Busbey commented on HADOOP-11804:
--

Sorry for the noise; v6 is from yesterday afternoon, apparently I forgot to hit 
submit. I almost have v7 done now with your current feedback, [~andrew.wang]. 
The class-not-found exception is a :facepalm: thing I forgot to update in the 
shading.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch
>
>
> Make a hadoop-client-api and hadoop-client-runtime that downstream projects 
> (e.g. HBase) can use to talk with a Hadoop cluster without seeing any of the 
> implementation dependencies.
> See the proposal on the parent issue for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637496#comment-15637496
 ] 

Steve Loughran commented on HADOOP-13794:
-

AWS SDK 1.11.0 declares that it expects Jackson 2.5+ 
(https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-core/1.11.0); I 
have not tested it on older Jackson versions yet.

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per [update resolved legal|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.6.patch

-06

- rebased to trunk (7154a20)
- consolidated maven-shade-plugin version
- updated MSHADE-182 implementation for Hadoop checkstyle rules
- updated MSHADE-182 implementation for Hadoop's findbugs configs
- fixed whitespace complaints
- fixed license header on dependency-reduced poms (and cleaned up handling of 
them generally)

The failure in {{TestQueuingContainerManager}} was consistent but unrelated to 
the patch, and rebasing has fixed it.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13789:
-
Attachment: HADOOP-13789.2.patch

-02

- move to independent mojos for protoc and test-protoc
- clean up poms that give configuration items with defaults

Here's the refactoring; let's see how the test runs do in precommit. Making two 
mojos was a bit awkward, since it appears each has to directly extend 
AbstractMojo to work.
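
A rough sketch of the shape of such a test-sources mojo (names and parameters 
are illustrative; the real mojo also runs protoc, which is omitted here):
{code}
import java.io.File;
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;
import org.apache.maven.project.MavenProject;

/**
 * Sketch of a mojo that registers generated protobuf sources as test
 * sources only, so they end up in the test jar but not the main jar.
 * As noted above, each mojo extends AbstractMojo directly.
 */
@Mojo(name = "test-protoc", defaultPhase = LifecyclePhase.GENERATE_TEST_SOURCES)
public class TestProtocMojo extends AbstractMojo {

  @Parameter(defaultValue = "${project}", readonly = true)
  private MavenProject project;

  @Parameter(defaultValue = "${project.build.directory}/generated-test-sources/java")
  private File output;

  @Override
  public void execute() throws MojoExecutionException {
    // (protoc invocation omitted in this sketch)
    project.addTestCompileSourceRoot(output.getAbsolutePath());
  }
}
{code}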

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds its output as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637467#comment-15637467
 ] 

Steve Loughran commented on HADOOP-13794:
-

I'd like to do a minimal update on branch-2.8+ which doesn't force an update to 
Jackson just for the AWS library; forcing that would make the change more 
traumatic.

Like you note, branch-2.6 and branch-2.7 are going to be harder.

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per [update resolved legal|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-04 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637119#comment-15637119
 ] 

Yongjun Zhang edited comment on HADOOP-13720 at 11/4/16 7:40 PM:
-

Thanks [~steve_l], I did not see your comment until now. I had a different 
solution in rev05: basically, use a thread-local variable to avoid the 
potential race condition. Would you please take a look? Thanks!
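
A minimal sketch of the thread-local pattern being described, assuming the 
shared state is something like a non-thread-safe date formatter used to render 
the renew date; this is an illustration, not the actual rev05 code:
{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public class RenewDateFormatter {
  // SimpleDateFormat is not thread-safe; give each thread its own instance
  // instead of sharing one, avoiding the race condition mentioned above.
  private static final ThreadLocal<SimpleDateFormat> FORMAT =
      ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSSZ"));

  static String format(long renewDate) {
    return FORMAT.get().format(new Date(renewDate));
  }
}
{code}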



was (Author: yzhangal):
Thanks [~steve_l], I did not see your comment until now. I had a different 
solution in rev06: basically, use a thread-local variable to avoid the 
potential race condition. Would you please take a look? Thanks!


> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-11-04 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637433#comment-15637433
 ] 

Akira Ajisaka commented on HADOOP-13780:


Updated the list of bundled jars which was originally created for HADOOP-12893.
https://gist.github.com/aajisaka/6f61ae083770739d57720745bcb12f0d/revisions

> LICENSE/NOTICE are out of date for source artifacts
> ---
>
> Key: HADOOP-13780
> URL: https://issues.apache.org/jira/browse/HADOOP-13780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Priority: Blocker
>
> we need to perform a check that all of our bundled works are properly 
> accounted for in our LICENSE/NOTICE files.
> At a minimum, it looks like HADOOP-10075 introduced some changes that have 
> not been accounted for.
> e.g. the jsTree plugin found at 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
>  does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
> under the MIT option and (b) give proper citation of the original copyright 
> holder per ASF policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637418#comment-15637418
 ] 

Hadoop QA commented on HADOOP-13590:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 145 unchanged - 0 fixed = 150 total (was 145) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837218/HADOOP-13590.10.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f35a0747a98b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abfc15d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10994/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10994/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10994/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10994/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
>  

[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637392#comment-15637392
 ] 

Xiao Chen commented on HADOOP-13590:


TestUGIWithMiniKdc wouldn't work well on branch-2. This is because Kerby doesn't 
allow a ticket lifetime of less than 6 minutes. This was [reported 
before|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15062670=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15062670].

The error is:
{noformat}
javax.security.auth.login.LoginException: Requested start time is later than 
end time (11) - Requested start time is later than end time

at 
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at 
com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at 
javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at 
org.apache.hadoop.security.TestUGIWithMiniKdc.testAutoRenewalThreadRetryWithKdc(TestUGIWithMiniKdc.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: KrbException: Requested start time is later than end time (11) - 
Requested start time is later than end time
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:82)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at 
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 22 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.ASRep.init(ASRep.java:64)
at sun.security.krb5.internal.ASRep.<init>(ASRep.java:59)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
... 25 more
{noformat}

I have manually verified that the backported test passes with 
{{MAX_TICKET_LIFETIME}} set to {{36}}, but I propose not to include the test 
in branch-2 to save execution time.
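
For reference, a minimal sketch of how the lifetime can be raised for such a 
manual run, assuming the standard {{MiniKdc}} configuration key; the actual 
test setup may differ:
{code}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcLifetimeSketch {
  static MiniKdc startKdc(File workDir) throws Exception {
    Properties conf = MiniKdc.createConf();
    // Raise the maximum ticket lifetime so Kerby accepts it (see the
    // "Requested start time is later than end time" failure above).
    // "36" mirrors the value mentioned in the comment; units are whatever
    // MiniKdc expects for this key.
    conf.setProperty(MiniKdc.MAX_TICKET_LIFETIME, "36");
    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();
    return kdc;
  }
}
{code}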

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis, until the TGT expires, in the 
> 

[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
Attachment: HADOOP-13590.branch-2.01.patch

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis, until the TGT expires, 
> in the hope that the error clears before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12705) Upgrade Jackson 2.2.3 to 2.7.x or later

2016-11-04 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637346#comment-15637346
 ] 

Akira Ajisaka commented on HADOOP-12705:


+1 for 2.7.8 in trunk and branch-2. Thanks [~pj.fanning], [~mackrorysd], and 
[~ste...@apache.org].

> Upgrade Jackson 2.2.3 to 2.7.x or later
> ---
>
> Key: HADOOP-12705
> URL: https://issues.apache.org/jira/browse/HADOOP-12705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12705.002.patch, HADOOP-12705.01.patch, 
> HADOOP-13050-001.patch
>
>
> There's no rush to do this; this is just the JIRA to track versions. However, 
> without the upgrade, things written for Jackson 2.4.4 can break ( SPARK-12807)
> being Jackson, this is a potentially dangerous update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-11-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13050:
---
Target Version/s: 2.8.0, 2.9.0, 3.0.0-alpha2  (was: 2.9.0, 3.0.0-alpha2)
Priority: Blocker  (was: Major)

Raised the priority to blocker because of the license issue reported by 
HADOOP-13794.

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2.002.patch, 
> HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6, shipping in Hadoop 2.7+, doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK. Though that implies updating httpcomponents: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
Attachment: HADOOP-13590.10.patch

Thanks Andrew and Steve.

Patch 10:
- Added the space in TestUGIWithMiniKdc
- Added toString to the assertion message; extracted a method to make this cleaner (rough sketch below).

Will provide a branch-2 patch shortly.
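
A rough sketch of the pattern in the second bullet (the helper and message 
below are illustrative, not the patch's exact code):
{code}
import static org.junit.Assert.assertTrue;

public class AssertionHelperSketch {
  /**
   * Extracted helper: one place to build the assertion message, with the
   * object's toString() included so failures show the actual state.
   */
  static <T> void assertState(String expectation, T actual, boolean condition) {
    assertTrue(expectation + " but was: " + actual, condition);
  }
}
{code}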

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis, until the TGT expires, 
> in the hope that the error clears before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637287#comment-15637287
 ] 

Hadoop QA commented on HADOOP-13720:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 53 unchanged - 23 fixed = 53 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837198/HADOOP-13720.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 287cb5f8a6b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abfc15d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10993/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10993/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, 

[jira] [Commented] (HADOOP-13792) Stackoverflow for schemeless defaultFS with trailing slash

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637245#comment-15637245
 ] 

John Zhuge commented on HADOOP-13792:
-

Thanks [~liuml07] for the review and commit! Thanks [~dariusgm] for reporting 
the issue.

> Stackoverflow for schemeless defaultFS with trailing slash
> --
>
> Key: HADOOP-13792
> URL: https://issues.apache.org/jira/browse/HADOOP-13792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Darius Murawski
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13792.001.patch, HADOOP-13792.002.patch, 
> HADOOP-13792.003.patch
>
>
> Command: hadoop fs -fs 172.16.12.79/ -mkdir -p /usr/hduser
> Results in a Stack Overflow
> {code}
> Exception in thread "main" java.lang.StackOverflowError
>   at java.lang.String.indexOf(String.java:1503)
>   at java.net.URI$Parser.scan(URI.java:2951)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3104)
>   at java.net.URI$Parser.parse(URI.java:3063)
> at java.net.URI.<init>(URI.java:588)
>   at java.net.URI.create(URI.java:850)
>   at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
> (...)
> {code}
> The problem is the slash at the end of the IP address. When I remove it, the 
> command is executed correctly.
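
A minimal, purely illustrative sketch of the workaround described above (dropping the trailing slash from the schemeless value before it reaches URI parsing); the helper name is hypothetical and this is not the committed patch:

{code}
// Hypothetical helper, for illustration only; not the committed fix.
public class DefaultFsNormalizer {
  static String normalize(String value) {
    // A schemeless value such as "172.16.12.79/" is parsed by java.net.URI as a
    // bare path; stripping the trailing slash yields the form the reporter found
    // to work.
    if (!value.contains("://") && value.endsWith("/")) {
      return value.substring(0, value.length() - 1);
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(normalize("172.16.12.79/"));    // prints 172.16.12.79
    System.out.println(normalize("hdfs://nn:8020/"));  // left unchanged
  }
}
{code}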



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13792) Stackoverflow for schemeless defaultFS with trailing slash

2016-11-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637239#comment-15637239
 ] 

Hudson commented on HADOOP-13792:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10772 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10772/])
HADOOP-13792. Stackoverflow for schemeless defaultFS with trailing (liuml07: 
rev abfc15d5ef966e99e8fe05d155ad2557e8cd67e8)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDefaultUri.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> Stackoverflow for schemeless defaultFS with trailing slash
> --
>
> Key: HADOOP-13792
> URL: https://issues.apache.org/jira/browse/HADOOP-13792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Darius Murawski
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13792.001.patch, HADOOP-13792.002.patch, 
> HADOOP-13792.003.patch
>
>
> Command: hadoop fs -fs 172.16.12.79/ -mkdir -p /usr/hduser
> Results in a Stack Overflow
> {code}
> Exception in thread "main" java.lang.StackOverflowError
>   at java.lang.String.indexOf(String.java:1503)
>   at java.net.URI$Parser.scan(URI.java:2951)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3104)
>   at java.net.URI$Parser.parse(URI.java:3063)
> at java.net.URI.<init>(URI.java:588)
>   at java.net.URI.create(URI.java:850)
>   at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
> (...)
> {code}
> The problem is the slash at the end of the IP address. When I remove it, the 
> command is executed correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13792) Stackoverflow for schemeless defaultFS with trailing slash

2016-11-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13792:
---
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Committed to {{trunk}} branch. Thanks [~jzhuge] for your contribution.

> Stackoverflow for schemeless defaultFS with trailing slash
> --
>
> Key: HADOOP-13792
> URL: https://issues.apache.org/jira/browse/HADOOP-13792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Darius Murawski
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13792.001.patch, HADOOP-13792.002.patch, 
> HADOOP-13792.003.patch
>
>
> Command: hadoop fs -fs 172.16.12.79/ -mkdir -p /usr/hduser
> Results in a Stack Overflow
> {code}
> Exception in thread "main" java.lang.StackOverflowError
>   at java.lang.String.indexOf(String.java:1503)
>   at java.net.URI$Parser.scan(URI.java:2951)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3104)
>   at java.net.URI$Parser.parse(URI.java:3063)
> at java.net.URI.<init>(URI.java:588)
>   at java.net.URI.create(URI.java:850)
>   at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
> (...)
> {code}
> The problem is the slash at the end of the IP address. When I remove it, the 
> command is executed correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-04 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637135#comment-15637135
 ] 

Lei (Eddy) Xu commented on HADOOP-13650:


Thanks for the suggestions, [~steve_l]. Will update a patch shortly.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, i.e., create or delete the metadata store, or {{import}}/{{sync}} the 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 
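
A purely hypothetical skeleton, for illustration only, of how such a CLI could plug into Hadoop's {{Tool}} framework; the class name and subcommands below are assumptions made here, not part of any attached patch:

{code}
// Hypothetical skeleton; class name and subcommands are illustrative assumptions.
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MetadataStoreCli extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    if (args.length == 0) {
      System.err.println("usage: init | destroy | import <path> | sync <path>");
      return 1;
    }
    switch (args[0]) {
      case "init":    return 0; // create the metadata store
      case "destroy": return 0; // delete the metadata store
      case "import":  return 0; // load file metadata from S3 into the store
      case "sync":    return 0; // reconcile store entries with S3 listings
      default:
        System.err.println("unknown command: " + args[0]);
        return 1;
    }
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MetadataStoreCli(), args));
  }
}
{code}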



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-04 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637119#comment-15637119
 ] 

Yongjun Zhang commented on HADOOP-13720:


Thanks [~steve_l], I did not see your comment until now. I had a different 
solution in rev06: basically, use a thread-local variable to avoid the 
potential race condition. Would you please take a look? Thanks!
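
A minimal sketch of the thread-local idea mentioned above, with hypothetical names; the actual rev06 change may look different:

{code}
// Illustrative sketch only; names are hypothetical, not the rev06 patch.
// A per-thread holder keeps the extra diagnostic detail out of shared state,
// avoiding the race a shared mutable field could introduce.
public class TokenExpiryDetail {
  private static final ThreadLocal<String> DETAIL =
      ThreadLocal.withInitial(() -> "");

  static void set(long renewDate, long now) {
    DETAIL.set(", renew date: " + renewDate + ", current time: " + now);
  }

  static String getAndClear() {
    String detail = DETAIL.get();
    DETAIL.remove();
    return detail;
  }
}
{code}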


> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we printed it out, we could know 
> how long the token has gone without being renewed. This will help us investigate 
> certain issues.
> Creating this jira as a request to add that part.
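
A minimal sketch of the kind of message enrichment being requested, based on the snippet quoted above; this is illustrative, not the attached patch:

{code}
// Illustrative only: append the renew date and the current time so the log
// shows how long the token has gone without renewal.
long now = Time.now();
if (info.getRenewDate() < now) {
  throw new InvalidToken("token (" + identifier.toString()
      + ") is expired, current time: " + now
      + ", expected renewal time: " + info.getRenewDate());
}
{code}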



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-04 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Attachment: HADOOP-13720.005.patch

Fixed the findbugs and checkstyle warnings in rev5.


> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we printed it out, we could know 
> how long the token has gone without being renewed. This will help us investigate 
> certain issues.
> Creating this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7930) Kerberos relogin interval in UserGroupInformation should be configurable

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637038#comment-15637038
 ] 

Xiao Chen commented on HADOOP-7930:
---

Thanks a lot [~rkanter] for the review and commit!

> Kerberos relogin interval in UserGroupInformation should be configurable
> 
>
> Key: HADOOP-7930
> URL: https://issues.apache.org/jira/browse/HADOOP-7930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-7930.branch-2.01.patch, HADOOP-7930.patch, 
> HADOOP-7930.patch, HADOOP-7930.patch
>
>
> Currently the check done in the *hasSufficientTimeElapsed()* method is 
> hardcoded to a 10-minute wait.
> The wait time should be driven by configuration, and its default value for 
> clients should be 1 minute. 
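
A minimal sketch of the configuration-driven approach being asked for; the property name and method signatures below are hypothetical placeholders, not necessarily what the committed patch uses:

{code}
// Illustrative sketch only; the property name and signatures are hypothetical.
private long getMinMillisBeforeRelogin(Configuration conf) {
  // Fall back to the historical 10-minute wait when the property is unset;
  // clients could then lower it (e.g. to 1 minute) via configuration.
  long seconds = conf.getLong("hadoop.security.kerberos.relogin.interval.secs",
      10 * 60);
  return seconds * 1000L;
}

private boolean hasSufficientTimeElapsed(long now, long lastLoginTime,
    long minMillisBeforeRelogin) {
  return now - lastLoginTime >= minMillisBeforeRelogin;
}
{code}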



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636937#comment-15636937
 ] 

Hadoop QA commented on HADOOP-13795:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13795 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837175/HADOOP-13795.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27eb6028e0e5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0aafc12 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10992/testReport/ |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10992/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Skip testGlobStatusThrowsExceptionForUnreadableDir in 
> TestFSMainOperationsSwift
> ---
>
> Key: HADOOP-13795
> URL: https://issues.apache.org/jira/browse/HADOOP-13795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13795.001.patch
>
>
> Swift object store does not honor directory 

[jira] [Updated] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13795:

Fix Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> Skip testGlobStatusThrowsExceptionForUnreadableDir in 
> TestFSMainOperationsSwift
> ---
>
> Key: HADOOP-13795
> URL: https://issues.apache.org/jira/browse/HADOOP-13795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13795.001.patch
>
>
> Swift object store does not honor directory permissions, thus we should skip 
> {{testGlobStatusThrowsExceptionForUnreadableDir}} in 
> {{TestFSMainOperationsSwift}}, similar to 
> {{testListStatusThrowsExceptionForUnreadableDir}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13795:

Component/s: test

> Skip testGlobStatusThrowsExceptionForUnreadableDir in 
> TestFSMainOperationsSwift
> ---
>
> Key: HADOOP-13795
> URL: https://issues.apache.org/jira/browse/HADOOP-13795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13795.001.patch
>
>
> Swift object store does not honor directory permissions, thus we should skip 
> {{testGlobStatusThrowsExceptionForUnreadableDir}} in 
> {{TestFSMainOperationsSwift}}, similar to 
> {{testListStatusThrowsExceptionForUnreadableDir}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13795:

Attachment: HADOOP-13795.001.patch

Patch 001:
* Skip testGlobStatusThrowsExceptionForUnreadableDir in 
TestFSMainOperationsSwift (see the sketch below)
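
A minimal sketch of what such an override might look like, assuming the skip is done by overriding the inherited test with an empty body (presumably the same pattern already used for {{testListStatusThrowsExceptionForUnreadableDir}}); this is illustrative, not the attached patch:

{code}
// Illustrative sketch only: the Swift object store ignores directory
// permissions, so the permission-based glob test cannot observe the expected
// failure and is skipped.
@Override
public void testGlobStatusThrowsExceptionForUnreadableDir() {
  // Swift object store does not honor directory permissions; skip this test.
}
{code}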

> Skip testGlobStatusThrowsExceptionForUnreadableDir in 
> TestFSMainOperationsSwift
> ---
>
> Key: HADOOP-13795
> URL: https://issues.apache.org/jira/browse/HADOOP-13795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13795.001.patch
>
>
> Swift object store does not honor directory permissions, thus we should skip 
> {{testGlobStatusThrowsExceptionForUnreadableDir}} in 
> {{TestFSMainOperationsSwift}}, similar to 
> {{testListStatusThrowsExceptionForUnreadableDir}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7930) Kerberos relogin interval in UserGroupInformation should be configurable

2016-11-04 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-7930:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen] for rebasing the patch.  Committed to branch-2 and 
branch-2.8!

> Kerberos relogin interval in UserGroupInformation should be configurable
> 
>
> Key: HADOOP-7930
> URL: https://issues.apache.org/jira/browse/HADOOP-7930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-7930.branch-2.01.patch, HADOOP-7930.patch, 
> HADOOP-7930.patch, HADOOP-7930.patch
>
>
> Currently the check done in the *hasSufficientTimeElapsed()* method is 
> hardcoded to a 10-minute wait.
> The wait time should be driven by configuration, and its default value for 
> clients should be 1 minute. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13373) Add S3A implementation of FSMainOperationsBaseTest

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636890#comment-15636890
 ] 

Steve Loughran commented on HADOOP-13373:
-

There's an extra test in trunk that S3A will have to override and skip.

> Add S3A implementation of FSMainOperationsBaseTest
> --
>
> Key: HADOOP-13373
> URL: https://issues.apache.org/jira/browse/HADOOP-13373
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> There's a JUnit 4 test suite, {{FSMainOperationsBaseTest}}, which should be 
> implemented in the s3a tests, to add a bit more test coverage —including for 
> globbing.
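
A minimal sketch of the kind of subclass being asked for, assuming the base class exposes a {{createFileSystem()}} hook; the class name, bucket URI, and wiring are assumptions, not an actual implementation:

{code}
// Illustrative sketch only; names and the test bucket are hypothetical.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSMainOperationsBaseTest;
import org.apache.hadoop.fs.FileSystem;

public class ITestS3AFSMainOperations extends FSMainOperationsBaseTest {

  @Override
  protected FileSystem createFileSystem() throws Exception {
    // Bind the shared operations suite to an S3A filesystem; the bucket URI
    // would normally come from the S3A test configuration rather than a literal.
    Configuration conf = new Configuration();
    return FileSystem.get(URI.create("s3a://example-test-bucket/"), conf);
  }
}
{code}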



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7930) Kerberos relogin interval in UserGroupInformation should be configurable

2016-11-04 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636875#comment-15636875
 ] 

Robert Kanter commented on HADOOP-7930:
---

+1 on the backport

> Kerberos relogin interval in UserGroupInformation should be configurable
> 
>
> Key: HADOOP-7930
> URL: https://issues.apache.org/jira/browse/HADOOP-7930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-7930.branch-2.01.patch, HADOOP-7930.patch, 
> HADOOP-7930.patch, HADOOP-7930.patch
>
>
> Currently the check done in the *hasSufficientTimeElapsed()* method is 
> hardcoded to a 10-minute wait.
> The wait time should be driven by configuration, and its default value for 
> clients should be 1 minute. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636862#comment-15636862
 ] 

Steve Loughran commented on HADOOP-13787:
-

...I think so; an S3A implementation of that is still an open issue. Linking to 
HADOOP-13373 so that whoever implements it knows to make sure that this 
override is done.

> Azure testGlobStatusThrowsExceptionForUnreadableDir fails
> -
>
> Key: HADOOP-13787
> URL: https://issues.apache.org/jira/browse/HADOOP-13787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13787.001.patch
>
>
> Test 
> {{TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir}}
>  failed in trunk:
> {noformat}
> Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> testGlobStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
>  Time elapsed: 1.111 sec <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:643)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests:
> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7930) Kerberos relogin interval in UserGroupInformation should be configurable

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636858#comment-15636858
 ] 

Xiao Chen commented on HADOOP-7930:
---

The checkstyle warnings are all from direct backports, so I think we're good to 
check it in, to keep future backports clean.

> Kerberos relogin interval in UserGroupInformation should be configurable
> 
>
> Key: HADOOP-7930
> URL: https://issues.apache.org/jira/browse/HADOOP-7930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-7930.branch-2.01.patch, HADOOP-7930.patch, 
> HADOOP-7930.patch, HADOOP-7930.patch
>
>
> Currently the check done in the *hasSufficientTimeElapsed()* method is 
> hardcoded to a 10-minute wait.
> The wait time should be driven by configuration, and its default value for 
> clients should be 1 minute. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636828#comment-15636828
 ] 

John Zhuge commented on HADOOP-13787:
-

:) Thanks [~ste...@apache.org] for the catch. Filed HADOOP-13795. With that, we 
should have covered all derived classes of {{FSMainOperationsBaseTest}}. Let me 
know if anything is missing.

> Azure testGlobStatusThrowsExceptionForUnreadableDir fails
> -
>
> Key: HADOOP-13787
> URL: https://issues.apache.org/jira/browse/HADOOP-13787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13787.001.patch
>
>
> Test 
> {{TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir}}
>  failed in trunk:
> {noformat}
> Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> testGlobStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
>  Time elapsed: 1.111 sec <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:643)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests:
> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636842#comment-15636842
 ] 

Xiao Chen commented on HADOOP-13787:


Thanks Steve for reporting! Hopefully John's full unit test run will find out 
whatever is left there.

> Azure testGlobStatusThrowsExceptionForUnreadableDir fails
> -
>
> Key: HADOOP-13787
> URL: https://issues.apache.org/jira/browse/HADOOP-13787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13787.001.patch
>
>
> Test 
> {{TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir}}
>  failed in trunk:
> {noformat}
> Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> testGlobStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
>  Time elapsed: 1.111 sec <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:643)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests:
> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13795:
---
Priority: Major  (was: Minor)

> Skip testGlobStatusThrowsExceptionForUnreadableDir in 
> TestFSMainOperationsSwift
> ---
>
> Key: HADOOP-13795
> URL: https://issues.apache.org/jira/browse/HADOOP-13795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Swift object store does not honor directory permissions, thus we should skip 
> {{testGlobStatusThrowsExceptionForUnreadableDir}} in 
> {{TestFSMainOperationsSwift}}, similar to 
> {{testListStatusThrowsExceptionForUnreadableDir}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13795) Skip testGlobStatusThrowsExceptionForUnreadableDir in TestFSMainOperationsSwift

2016-11-04 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13795:
---

 Summary: Skip testGlobStatusThrowsExceptionForUnreadableDir in 
TestFSMainOperationsSwift
 Key: HADOOP-13795
 URL: https://issues.apache.org/jira/browse/HADOOP-13795
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Swift object store does not honor directory permissions, thus we should skip 
{{testGlobStatusThrowsExceptionForUnreadableDir}} in 
{{TestFSMainOperationsSwift}}, similar to 
{{testListStatusThrowsExceptionForUnreadableDir}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13788) datatransfer.Receiver.processOp error

2016-11-04 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-13788.
-
Resolution: Invalid

> datatransfer.Receiver.processOp error
> -
>
> Key: HADOOP-13788
> URL: https://issues.apache.org/jira/browse/HADOOP-13788
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xluren
>
> The Hadoop version is CDH-5.8.2-1.cdh5.8.2.p0.3.
> 2016-11-03 14:04:16,566 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataNode{data=FSDataset{dirpath='[/data/0/dfs/dn/current, 
> /data/1/dfs/dn/current, /data/10/dfs/dn/current, /data/11/dfs/dn/current, 
> /data/2/dfs/dn/current, /data/3/dfs/dn/current, /data/4/dfs/dn/current, 
> /data/5/dfs/dn/current, /data/6/dfs/dn/current, /data/7/dfs/dn/current, 
> /data/8/dfs/dn/current, /data/9/dfs/dn/current]'}, 
> localName='hadoop-2-10-104-1-31:50010', 
> datanodeUuid='811b3ca1-07e1-48b2-be78-6b3a7741eeb0', 
> xmitsInProgress=0}:Exception transfering block 
> BP-1587434107-10.104.1.19-1477411166086:blk_1089657978_18199662 to mirror 
> 10.104.1.27:50010: java.net.NoRouteToHostException: No route to host
> 2016-11-03 14:04:16,566 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> opWriteBlock BP-1587434107-10.104.1.19-1477411166086:blk_1089657978_18199662 
> received exception java.net.NoRouteToHostException: No route to host
> 2016-11-03 14:04:16,567 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> hadoop-2-10-104-1-31:50010:DataXceiver error processing WRITE_BLOCK operation 
>  src: /10.104.2.26:33310 dst: /10.104.1.31:50010
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:700)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> I have tried to stop the iptables service, but it does not work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636422#comment-15636422
 ] 

Hadoop QA commented on HADOOP-10392:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
26s{color} | {color:green} root generated 0 new + 666 unchanged - 28 fixed = 
666 total (was 694) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} root: The patch generated 4 new + 1312 unchanged 
- 14 fixed = 1316 total (was 1326) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}105m 
54s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
31s{color} | {color:green} hadoop-streaming in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 30s{color} 
| {color:red} hadoop-archives in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
19s{color} | {color:green} hadoop-gridmix in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
| Timed out junit tests | org.apache.hadoop.tools.TestHadoopArchives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-10392 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636097#comment-15636097
 ] 

Hadoop QA commented on HADOOP-11614:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools_hadoop-openstack generated 0 new + 6 
unchanged - 1 fixed = 6 total (was 7) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-openstack: The patch 
generated 14 new + 132 unchanged - 126 fixed = 146 total (was 258) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-11614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837102/HADOOP-11614-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 25345434c47a 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19b3779 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10991/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-openstack.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10991/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10991/testReport/ |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10991/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This 

[jira] [Updated] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2016-11-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-11614:
---
Attachment: HADOOP-11614-004.patch

004 patch: Fixed findbugs, javac, and checkstyle warnings.

> Remove httpclient dependency from hadoop-openstack
> --
>
> Key: HADOOP-11614
> URL: https://issues.apache.org/jira/browse/HADOOP-11614
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HADOOP-11614-002.patch, HADOOP-11614-003.patch, 
> HADOOP-11614-004.patch, HADOOP-11614.patch
>
>
> Remove httpclient dependency from hadoop-openstack and its pom.xml file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635981#comment-15635981
 ] 

Hadoop QA commented on HADOOP-13037:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837080/HADOOP-13037-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 7223cfa66a4b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10989/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-azure-datalake U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10989/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure Data Lake 

[jira] [Commented] (HADOOP-13787) Azure testGlobStatusThrowsExceptionForUnreadableDir fails

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635900#comment-15635900
 ] 

Steve Loughran commented on HADOOP-13787:
-

This'll need to be done for Swift too, sorry. Welcome to the world of object 
store-related regressions :)

> Azure testGlobStatusThrowsExceptionForUnreadableDir fails
> -
>
> Key: HADOOP-13787
> URL: https://issues.apache.org/jira/browse/HADOOP-13787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13787.001.patch
>
>
> Test 
> {{TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir}}
>  failed in trunk:
> {noformat}
> Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
> testGlobStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
>  Time elapsed: 1.111 sec <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:643)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests:
> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635887#comment-15635887
 ] 

Steve Loughran commented on HADOOP-13651:
-

Given that nothing is in trunk, just do a merge commit + fixup if that's easiest. 
Tip: always tag the top of the branch before you start anything like that; it 
helps you compare before and after in the IDE.

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch, 
> HADOOP-13651-HADOOP-13345.004.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635880#comment-15635880
 ] 

Steve Loughran edited comment on HADOOP-13793 at 11/4/16 10:05 AM:
---

If you look at what's gone into HADOOP-13560, I moved all the S3 client 
operations out of the new output stream and into S3AFileSystem. That is: no 
direct access/use of the AWS client lib.

If that is also done for the S3Guard calls, then inconsistency can be mocked by a 
subclass of S3AFileSystem whose low-level list call is overridden to return 
inconsistent data. It may also help future maintenance.
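
As a rough, self-contained sketch of that idea (not working code against the real 
class): {{BaseListingFileSystem}} and {{rawList()}} below are made-up stand-ins for 
S3AFileSystem and whatever single method it routes its low-level listings through.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: BaseListingFileSystem stands in for S3AFileSystem, and
// rawList() for its (hypothetical) low-level listing hook; neither name
// exists in the real code.
class BaseListingFileSystem {
  protected List<String> rawList(String prefix) {
    // The real class would call the AWS client here.
    return new ArrayList<>(Arrays.asList(prefix + "/a", prefix + "/b"));
  }
}

class InconsistentListingFileSystem extends BaseListingFileSystem {
  // Keys the simulated store should pretend not to see yet.
  private final Set<String> delayedKeys = ConcurrentHashMap.newKeySet();

  void markDelayed(String key) {
    delayedKeys.add(key);
  }

  @Override
  protected List<String> rawList(String prefix) {
    List<String> listing = new ArrayList<>(super.rawList(prefix));
    // Simulate eventual consistency: recently written keys are missing
    // from the listing until the test chooses to "settle" them.
    listing.removeIf(delayedKeys::contains);
    return listing;
  }
}
{code}

With the AWS calls funnelled through one place, a test subclass only has to 
override that single hook.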


was (Author: ste...@apache.org):
If you look at what's gone into HADOOP-13560, I moved all the S3 client 
operations out of the new output stream and into S3AFileSystem. That is: no 
direct access/use of the AWS client lib.

If that is also done for the S3Guard calls, then inconsistency can be mocked by a 
subclass of S3AFileSystem whose low-level list call is overridden to return 
inconsistent data.

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635880#comment-15635880
 ] 

Steve Loughran commented on HADOOP-13793:
-

If you look at what's gone into HADOOP-13560, I moved all the S3 client 
operations out of the new output stream and into S3AFileSystem. That is: no 
direct access/use of the AWS client lib.

If that is also done for the S3Guard calls, then inconsistency can be mocked by a 
subclass of S3AFileSystem whose low-level list call is overridden to return 
inconsistent data.

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2016-11-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-10392:
---
Attachment: HADOOP-10392.013.patch

013: rebased.

> Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
> 
>
> Key: HADOOP-10392
> URL: https://issues.apache.org/jira/browse/HADOOP-10392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: HADOOP-10392.009.patch, HADOOP-10392.010.patch, 
> HADOOP-10392.011.patch, HADOOP-10392.012.patch, HADOOP-10392.013.patch, 
> HADOOP-10392.2.patch, HADOOP-10392.3.patch, HADOOP-10392.4.patch, 
> HADOOP-10392.4.patch, HADOOP-10392.5.patch, HADOOP-10392.6.patch, 
> HADOOP-10392.7.patch, HADOOP-10392.7.patch, HADOOP-10392.8.patch, 
> HADOOP-10392.patch
>
>
> There are some methods calling Path.makeQualified(FileSystem), which causes 
> javac warnings.
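
For illustration, a minimal sketch of the substitution this issue asks for; the 
class and path below are made up, not taken from the patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MakeQualifiedExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/example");

    // Deprecated form: triggers the javac warning mentioned above.
    Path oldStyle = p.makeQualified(fs);

    // Preferred form: same result, no deprecation warning.
    Path newStyle = fs.makeQualified(p);

    System.out.println(oldStyle + " equals " + newStyle + ": "
        + oldStyle.equals(newStyle));
  }
}
{code}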



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12705) Upgrade Jackson 2.2.3 to 2.7.x or later

2016-11-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12705:

Summary: Upgrade Jackson 2.2.3 to 2.7.x or later  (was: Upgrade Jackson 
2.2.3 to 2.5.3 or later)

> Upgrade Jackson 2.2.3 to 2.7.x or later
> ---
>
> Key: HADOOP-12705
> URL: https://issues.apache.org/jira/browse/HADOOP-12705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12705.002.patch, HADOOP-12705.01.patch, 
> HADOOP-13050-001.patch
>
>
> There's no rush to do this; this is just the JIRA to track versions. However, 
> without the upgrade, things written for Jackson 2.4.4 can break (SPARK-12807).
> Being Jackson, this is a potentially dangerous update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13790) Make qbt script executable

2016-11-04 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635828#comment-15635828
 ] 

Akira Ajisaka commented on HADOOP-13790:


LGTM, +1.

> Make qbt script executable
> --
>
> Key: HADOOP-13790
> URL: https://issues.apache.org/jira/browse/HADOOP-13790
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HADOOP-13790.001.patch
>
>
> Trivial, the qbt script isn't executable, unlike the other scripts in 
> {{dev-support/bin}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635825#comment-15635825
 ] 

Steve Loughran commented on HADOOP-13590:
-

I so wish we could use lambda expressions in branch-2... For now, keep the 
await() call and just use an anonymous class, at least for the branch-2 version, 
leaving trunk as is.

Given that the {{assertTrue}} calls are preceded by LOG.info calls, how about 
building a string which is then logged and used as the text in the asserts?
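
A small sketch of what that could look like; the names, condition, and logging 
setup are assumptions, not taken from the patch: build the message once, log it, 
and pass the same string to {{assertTrue}}, with an anonymous class where trunk 
would use a lambda.

{code}
import static org.junit.Assert.assertTrue;

import java.util.concurrent.Callable;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class RenewalAssertSketch {
  private static final Log LOG = LogFactory.getLog(RenewalAssertSketch.class);

  // Hypothetical check; the real test's condition and variable names differ.
  static void checkRenewals(int renewalCount, int expected) {
    String msg = "renewal count = " + renewalCount + ", expected at least " + expected;
    LOG.info(msg);
    assertTrue(msg, renewalCount >= expected);
  }

  // Branch-2-friendly anonymous class in place of a lambda expression.
  static Callable<Boolean> renewalHappened(final int[] counter) {
    return new Callable<Boolean>() {
      @Override
      public Boolean call() {
        return counter[0] > 0;
      }
    };
  }
}
{code}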

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, then even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry with best effort until the tgt expires, in the 
> hope that the error clears before then.
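
A hedged sketch of that retry behaviour, with made-up names (the actual patch 
lives in UserGroupInformation and looks different): keep retrying on IOException 
until the TGT's own end time instead of letting the renewal thread die on the 
first failure.

{code}
import java.io.IOException;
import java.util.Date;

public class BestEffortRenewalSketch {
  /** Stand-in for the real relogin/renewal call. */
  interface Renewer {
    void renew() throws IOException;
  }

  static void renewUntilExpiry(Renewer renewer, long tgtEndTime, long retryIntervalMs)
      throws InterruptedException {
    while (System.currentTimeMillis() < tgtEndTime) {
      try {
        renewer.renew();
        return;                        // success; normal scheduling resumes
      } catch (IOException e) {
        System.err.println("Renewal failed, retrying until "
            + new Date(tgtEndTime) + ": " + e);
        Thread.sleep(retryIntervalMs); // best effort: back off and try again
      }
    }
  }
}
{code}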



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12705) Upgrade Jackson 2.2.3 to 2.5.3 or later

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635809#comment-15635809
 ] 

Steve Loughran commented on HADOOP-12705:
-

Sounds good; I'm wondering if we should raise it as a branch-2 change too, 
given the security implications, and now the compile/link compatibility.

thanks for doing this.

> Upgrade Jackson 2.2.3 to 2.5.3 or later
> ---
>
> Key: HADOOP-12705
> URL: https://issues.apache.org/jira/browse/HADOOP-12705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12705.002.patch, HADOOP-12705.01.patch, 
> HADOOP-13050-001.patch
>
>
> There's no rush to do this; this is just the JIRA to track versions. However, 
> without the upgrade, things written for Jackson 2.4.4 can break (SPARK-12807).
> Being Jackson, this is a potentially dangerous update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635804#comment-15635804
 ] 

Steve Loughran commented on HADOOP-13586:
-

can you paste in the output?

> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on Windows fail, even before getting to the native bits.
> Looks like dev-support/bin/dist-copynativelibs isn't Windows-ready.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635801#comment-15635801
 ] 

Steve Loughran commented on HADOOP-13720:
-

Aah, findbugs is still complaining. You could just go {{"current time = " + new 
Date(now())}}; maybe that adds a deprecation warning, but it shuts up findbugs.
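
For what it's worth, a hedged sketch of the kind of message this would produce; 
the helper below is invented for illustration and is not the committed patch.

{code}
import java.util.Date;
import org.apache.hadoop.security.token.SecretManager.InvalidToken;
import org.apache.hadoop.util.Time;

public class ExpiredTokenMessageSketch {
  // Invented helper: build the InvalidToken with both the expected renewal
  // time and the current wall-clock time, so logs show how long the token
  // has gone unrenewed.
  static InvalidToken expired(String identifier, long renewDate) {
    long now = Time.now();
    return new InvalidToken("token (" + identifier + ") is expired, current time: "
        + new Date(now) + ", expected renewal time: " + new Date(renewDate));
  }
}
{code}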

> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> {{info.getRenewDate()}} in the message. If we printed it out, we could tell 
> how long the token has gone without being renewed. This will help us investigate 
> certain issues.
> Creating this jira as a request to add that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13768) AliyunOSS: handle deleteDirs reliably when too many objects to delete

2016-11-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635794#comment-15635794
 ] 

Steve Loughran commented on HADOOP-13768:
-

Assuming that the file limit is always 1000, why not just list the path in blocks 
of 1000 and issue delete requests of that size? There are ultimate limits to the 
size of responses in path listings (max size of an HTTP request), and 
inevitably heap problems well before then.
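
A hedged sketch of that batching; {{deleteBatch()}} is a hypothetical stand-in for 
building and sending the real OSS multi-object delete request, and the 1000 limit 
is taken from the issue description.

{code}
import java.util.ArrayList;
import java.util.List;

public class BatchedDeleteSketch {
  // Assumed per-request limit of the OSS multi-object delete API.
  private static final int MAX_KEYS_PER_DELETE = 1000;

  // Issue one delete request per block of at most 1000 keys, rather than
  // trying to delete everything under the path in a single request.
  static void deleteAll(List<String> keys) {
    List<String> batch = new ArrayList<>(MAX_KEYS_PER_DELETE);
    for (String key : keys) {
      batch.add(key);
      if (batch.size() == MAX_KEYS_PER_DELETE) {
        deleteBatch(batch);
        batch = new ArrayList<>(MAX_KEYS_PER_DELETE);
      }
    }
    if (!batch.isEmpty()) {
      deleteBatch(batch);
    }
  }

  // Hypothetical stand-in for building and sending a DeleteObjectsRequest.
  private static void deleteBatch(List<String> batch) {
    System.out.println("deleting " + batch.size() + " objects");
  }
}
{code}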

> AliyunOSS: handle deleteDirs reliably when too many objects to delete
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object limit. The 
> {{deleteDirs}} operation needs to be improved so that it still succeeds when there 
> are more objects to delete than the limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-04 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Patch Available  (was: Open)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch, HADOOP-13037-004.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-04 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-004.patch

Thanks for the detailed review [~chris.douglas]. I have incorporated the code 
review comments and removed the newly added live test cases.

I will raise a separate patch on HADOOP-13257 with the new live test cases and an 
update.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch, HADOOP-13037-004.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-04 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Open  (was: Patch Available)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-04 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635656#comment-15635656
 ] 

Vishwajeet Dusane commented on HADOOP-13037:


Thanks for the heads up [~cnauroth]. 

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-11-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15635567#comment-15635567
 ] 

Hadoop QA commented on HADOOP-13742:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837061/HADOOP-13742-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7b62a6824457 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10988/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10988/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742-003.patch, 
> HADOOP-13742-004.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections each user has) in a busy 
> cluster with many connections to the server.



--

[jira] [Updated] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-11-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13742:
--
Attachment: HADOOP-13742-004.patch

Uploaded the patch to fix the checkstyle and findbugs warnings.

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742-003.patch, 
> HADOOP-13742-004.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections each user has) in a busy 
> cluster with many connections to the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org